# Neon > The document "Ship faster with AI tools" outlines how Neon users can leverage AI agents and tools to accelerate development processes and improve efficiency within the Neon database environment. --- # Source: https://neon.com/llms/ai-ai-agents-tools.txt # AI tools for Agents > The document "Ship faster with AI tools" outlines how Neon users can leverage AI agents and tools to accelerate development processes and improve efficiency within the Neon database environment. ## Source - [AI tools for Agents HTML](https://neon.com/docs/ai/ai-agents-tools): The original HTML version of this documentation Neon provides several ways to integrate with AI tools and agents, from natural language database control to autonomous agent frameworks. Choose the tools that fit your workflow. ## MCP integration The Model Context Protocol (MCP) is a standardized way for AI tools to interact with Neon databases using natural language, providing secure and contextual access to your data and infrastructure. - [Neon MCP Server](https://neon.com/docs/ai/neon-mcp-server): Learn about managing your Neon projects using natural language with Neon MCP Server - [Connect MCP clients](https://neon.com/docs/ai/connect-mcp-clients-to-neon): Learn how to connect MCP clients like Cursor, Claude Code, and ChatGPT to your Neon database ## Claude Code plugin If you're using Claude Code, install the Neon plugin to get Skills, MCP integration, and all the context rules in one package. - [Claude Code plugin for Neon](https://neon.com/docs/ai/ai-claude-code-plugin): Includes Claude Code Skills for Neon, Neon MCP integration, and context rules ## AI rules For other AI tools like Cursor, use these individual `.mdc` context rule files. Copy them to your AI tool's custom rules directory — the format is tool-agnostic and works with any AI assistant that supports context rules. - [Neon Auth](https://neon.com/docs/ai/ai-rules-neon-auth): AI rules for implementing authentication with Neon - [Neon Drizzle](https://neon.com/docs/ai/ai-rules-neon-drizzle): AI rules for using Drizzle ORM with Neon - [Neon Serverless Driver](https://neon.com/docs/ai/ai-rules-neon-serverless): AI rules for serverless database connections - [Neon TypeScript SDK](https://neon.com/docs/ai/ai-rules-neon-typescript-sdk): AI rules for using the Neon TypeScript SDK - [Neon Python SDK](https://neon.com/docs/ai/ai-rules-neon-python-sdk): AI rules for using the Neon Python SDK - [Neon API](https://neon.com/docs/ai/ai-rules-neon-api): AI rules for using the Neon API - [Neon Toolkit](https://neon.com/docs/ai/ai-rules-neon-toolkit): AI rules for using the Neon Toolkit ## Build AI agents Create autonomous agents that can manage and interact with your Neon databases programmatically. Build with our terse JavaScript client or the Neon API. - [Neon for AI agent platforms](https://neon.com/use-cases/ai-agents): Read about Neon as a solution for agents that need backends. - [@neondatabase/toolkit](https://github.com/neondatabase/toolkit): A terse JavaScript client for spinning up Postgres databases and running SQL queries - [Database versioning](https://neon.com/docs/ai/ai-database-versioning): How AI agents and codegen platforms use Neon snapshot APIs for database versioning - [Neon API](https://neon.com/docs/reference/api-reference): Integrate using the Neon API ## Agent frameworks Build AI agents using popular frameworks that integrate with Neon. 
- [AgentStack Integration](https://neon.com/guides/agentstack-neon): Build and deploy AI agents with AgentStack's CLI and Neon integration
- [AutoGen Integration](https://neon.com/guides/autogen-neon): Create collaborative AI agents with Microsoft AutoGen and Neon
- [Azure AI Agent Service](https://neon.com/guides/azure-ai-agent-service): Build enterprise AI agents with Azure AI Agent Service and Neon
- [Composio + CrewAI](https://neon.com/guides/composio-crewai-neon): Create multi-agent systems with CrewAI and Neon
- [LangGraph Integration](https://neon.com/guides/langgraph-neon): Build stateful, multi-actor applications with LangGraph and Neon

---

# Source: https://neon.com/llms/ai-ai-app-build.txt

# app.build

> The "app.build" documentation outlines the process for building AI applications using Neon's platform, detailing steps for setting up the environment, integrating AI models, and deploying applications efficiently.

## Source
- [app.build HTML](https://neon.com/docs/ai/ai-app-build): The original HTML version of this documentation

[app.build](https://www.app.build/) is our exploration of what AI agents can do with a complete backend stack. We built it after working with partners like Replit and other agent-driven platforms, learning what it takes to automate not just code generation, but the entire development workflow. This open-source project creates and deploys full-stack applications from scratch. It handles everything: database provisioning, authentication, testing, CI/CD, and deployment. The agent breaks down app creation into discrete tasks, validates each piece, and assembles them into working applications. Think of it as a blueprint you can use, fork, or extend to build your own agent infrastructure.

## Why app.build
- **Transparency**: Open-source codebase lets you see exactly how the agent makes decisions and generates code
- **Extensibility**: Add your own templates, models, or deployment targets
- **Learning**: Understand agent architectures by examining a working implementation
- **Best practices built-in**: Every app includes testing, CI/CD, and proper project structure
- **Reference architecture**: Use as a starting point for your own agent infrastructure
- **Community-driven**: Contribute improvements that benefit everyone using the platform

## Getting started
Go to https://www.app.build/, authenticate, and start chatting to build!

**Note:** The CLI is now deprecated. `npx @app.build/cli` will give you an error.

## What it generates
- Backend: Fastify server with Drizzle ORM
- Frontend: React application built with Vite
- Database: Postgres instance (Neon by default)
- Authentication: An auth integration (Neon Auth by default)
- Tests: Playwright end-to-end tests
- CI/CD: GitHub Actions configuration

## Infrastructure
Generated applications use (by default):
- Neon for Postgres database and authentication
- Koyeb for hosting
- GitHub for code repository and CI/CD

All infrastructure choices can be modified when running locally.

## Architecture
The agent works by:
- Writing and running end-to-end tests as part of the generation pipeline
- Using a well-tested base template with technologies the agent deeply understands
- Breaking work into small, independent tasks that can be solved reliably
- Running quality checks on every piece of generated code

These patterns emerged from working with production agent platforms where reliability and validation are critical.
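To make the architecture concrete, here is a rough sketch of that generate-and-validate loop in Python. This is illustrative only, not app.build's actual code; the names (`Task`, `run_pipeline`) and structure are invented for explanation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """One small, independently solvable unit of app generation."""
    name: str
    generate: Callable[[], str]      # asks the model to produce code for this task
    validate: Callable[[str], bool]  # quality checks: lint, type checks, e2e tests

def run_pipeline(tasks: list[Task], max_attempts: int = 3) -> dict[str, str]:
    """Generate each task's code, retrying until its quality checks pass."""
    results: dict[str, str] = {}
    for task in tasks:
        for _ in range(max_attempts):
            code = task.generate()
            if task.validate(code):  # every piece is validated before assembly
                results[task.name] = code
                break
        else:
            raise RuntimeError(f"Validation kept failing for task: {task.name}")
    return results
```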
The modular design means you can trace exactly what the agent is doing at each step, making it straightforward to debug issues or add new capabilities. ## Extending app.build As a blueprint for agent infrastructure, app.build is designed to be forked and modified: - **Custom templates**: Replace the default web app template with your own - **Alternative models**: Use local models via Ollama, LMStudio, or OpenRouter, or swap cloud providers (Anthropic, OpenAI, Gemini) - **Different providers**: Change database, hosting, or auth providers - **New validations**: Add your own code quality checks - **Modified workflows**: Adjust the generation pipeline to your needs ## Local development Everything can run locally with your choice of LLM provider. app.build also supports local models through Ollama, LMStudio, and OpenRouter, in addition to cloud providers. ### Local Model Configuration Configure local models using environment variables. Create a `.env.local` file in your project directory: ```bash # For Ollama (requires Ollama running locally) OLLAMA_HOST=http://localhost:11434 PREFER_OLLAMA=1 LLM_BEST_CODING_MODEL=ollama:llama3.3:latest LLM_UNIVERSAL_MODEL=ollama:llama3.3:latest LLM_ULTRA_FAST_MODEL=ollama:phi4:latest # For LMStudio (requires LMStudio running locally) LLM_BEST_CODING_MODEL=lmstudio:http://localhost:1234 LLM_UNIVERSAL_MODEL=lmstudio:http://localhost:1234 # For OpenRouter (requires API key) OPENROUTER_API_KEY=your_openrouter_api_key_here LLM_BEST_CODING_MODEL=openrouter:deepseek/deepseek-coder LLM_UNIVERSAL_MODEL=openrouter:anthropic/claude-3.5-sonnet # Cloud providers (original options) # ANTHROPIC_API_KEY=your_anthropic_key_here # GEMINI_API_KEY=your_gemini_key_here ``` ### Model Categories app.build uses different model categories for different tasks: - **LLM_BEST_CODING_MODEL**: High-quality models for complex code generation (slower but better results) - **LLM_UNIVERSAL_MODEL**: Medium-speed models for general tasks and FSM operations - **LLM_ULTRA_FAST_MODEL**: Fast models for simple tasks like commit messages - **LLM_VISION_MODEL**: Models with vision capabilities for UI analysis ### Provider Setup **Ollama**: Install and run Ollama locally, then pull your desired models: ```bash ollama pull llama3.3:latest ollama pull phi4:latest ``` **LMStudio**: Download and run LMStudio with a local model server on port 1234. **OpenRouter**: Sign up at [OpenRouter](https://openrouter.ai/) and get an API key for access to various models. ### Local Development Features - Use any LLM provider or self-hosted models - Skip deployment for local-only development - Modify templates without restrictions - Debug the agent's decision-making process Setup instructions are in the app.build source repositories, with guides for local CLI, custom models, and agent setup in development. 
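As an aside, the provider-prefixed model strings shown in the configuration above (for example, `ollama:llama3.3:latest`) suggest a simple routing convention. Below is a minimal sketch of parsing such a setting, assuming only the `provider:model` format documented above; the helper name is invented for illustration:

```python
import os

def parse_model_setting(env_var: str) -> tuple[str, str]:
    """Split a 'provider:model' string into its parts.

    'ollama:llama3.3:latest'         -> ('ollama', 'llama3.3:latest')
    'lmstudio:http://localhost:1234' -> ('lmstudio', 'http://localhost:1234')
    """
    value = os.environ[env_var]  # raises KeyError if the variable is unset
    provider, _, rest = value.partition(":")
    return provider, rest

# Example usage with the Ollama configuration above:
# provider, model = parse_model_setting("LLM_BEST_CODING_MODEL")
```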
## Current limitations
As a reference implementation, we've made specific choices to keep the codebase clear and extensible:
- Single template for web applications with a fixed tech stack
- Limited customization options in managed mode
- CLI is basic - create and iterate functionality only (and, as noted above, the CLI is now deprecated)
- Sparse documentation

## Contributing
- Repositories:
  - [github.com/appdotbuild/agent](https://github.com/appdotbuild/agent) (agent logic and generation)
  - [github.com/appdotbuild/platform](https://github.com/appdotbuild/platform) (backend infrastructure)
- Issues: Bug reports, feature requests, and discussions
- PRs: Code contributions, documentation, templates

The project welcomes contributions at all levels, from fixing typos to exploring new generation strategies.

## Latest information
For the most up-to-date information and announcements, visit [app.build](https://app.build/). Our [blog](https://app.build/blog/) features technical deep-dives into the agent architecture, code generation strategies, and community contributions.

---

# Source: https://neon.com/llms/ai-ai-azure-notebooks.txt

# Azure Data Studio Notebooks

> The document outlines how to use Azure Data Studio Notebooks with Neon, detailing steps for connecting to a Neon database and executing SQL queries within the notebook environment.

## Source
- [Azure Data Studio Notebooks HTML](https://neon.com/docs/ai/ai-azure-notebooks): The original HTML version of this documentation

A Jupyter Notebook is an open-source web application that allows you to create and share documents containing live code, equations, visualizations, and narrative text. Azure Data Studio supports Jupyter Notebooks, enabling users to combine SQL queries, Python code, and markdown text in a single interactive document. This guide describes how to create a new Python notebook in Azure Data Studio, connect to a Neon database, install the `pgvector` extension to enable Neon as a vector store, and run a vector search query.

## Prerequisites
To perform the steps in this guide, you will require:
- Azure Data Studio - Download the latest version of Azure Data Studio for your operating system [here](https://learn.microsoft.com/en-us/azure-data-studio/download-azure-data-studio).
- A Neon account - If you do not have one, sign up at [Neon](https://console.neon.tech/signup). Your Neon project comes with a ready-to-use Postgres database named `neondb`. You can use it, or create your own by following the instructions [here](https://neon.com/docs/manage/databases#create-a-database).

## Retrieve your Neon database connection string
Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal, and select a branch, a user, and the database you want to connect to. A connection string is constructed for you.

## Create a notebook
1. Go to the **File** menu for Azure Data Studio and select **New Notebook**.
2. Select **Python 3** for the Kernel and set **Attach to** to "localhost" where it can access your Python installation. You can save the notebook using the **Save** or **Save as...** command from the **File** menu.

## Configure Python for Notebooks
The first time you connect to the Python kernel in a notebook, the **Configure Python for Notebooks** page is displayed.
You can select either:
- **New Python installation** to install a new copy of Python for Azure Data Studio, or
- **Use existing Python installation** to specify the path to an existing Python installation for Azure Data Studio to use

To view the location and version of the active Python kernel, you can create a code cell and run the following Python commands:

```python
import os
import sys

print(sys.version_info)
print(os.path.dirname(sys.executable))
```

## Running a code cell
You can create cells containing Python code that you can run in place by clicking the **Run cell** button (the round blue arrow) to the left of the cell. The results are shown in the notebook after the cell finishes running. In the `pgvector` example that follows, you'll add and execute several code cells.

## pgvector example
After you've set up Azure Data Studio and have created a notebook, you can use the following basic example to get started with Neon and `pgvector`.

### Install the psycopg driver
psycopg is a popular Postgres database adapter for the Python programming language. It allows Python applications to connect to and interact with Postgres databases. Install the `psycopg` adapter by adding and executing the following code cell:

```python
!pip install psycopg
```

### Connect to your database
1. In your notebook, create a code block to define your Neon database connection and create a cursor object. Replace `postgresql://[user]:[password]@[neon_hostname]/[dbname]` with the database connection string you retrieved previously.

   ```python
   import os
   import psycopg

   # Provide your Neon connection string
   connection_string = "postgresql://[user]:[password]@[neon_hostname]/[dbname]"

   # Connect using the connection string
   connection = psycopg.connect(connection_string)

   # Create a new cursor object
   cursor = connection.cursor()
   ```

2. Execute the code block.
3. Add a code block for testing the database connection.

   ```python
   # Execute this query to test the database connection
   cursor.execute("SELECT 1;")
   result = cursor.fetchone()

   # Check the query result
   if result == (1,):
       print("Your database connection was successful!")
   else:
       print("Your connection failed.")
   ```

4. Execute the code block.

### Install the pgvector extension
1. Create a code block to install the `pgvector` extension to enable your Neon database as a vector store:

   ```python
   # Execute this query to install the pgvector extension
   cursor.execute("CREATE EXTENSION IF NOT EXISTS vector;")
   ```

2. Execute the code block.

### Create a table and add vector data
1. Add a code block to create a table and insert data:

   ```python
   create_table_sql = '''
   CREATE TABLE items (
     id BIGSERIAL PRIMARY KEY,
     embedding VECTOR(3)
   );
   '''

   # Insert data
   insert_data_sql = '''
   INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]'), ('[7,8,9]');
   '''

   # Execute the SQL statements
   cursor.execute(create_table_sql)
   cursor.execute(insert_data_sql)

   # Commit the changes
   connection.commit()
   ```

2. Execute the code block.

### Query your data
1. Add a code block to perform a vector similarity search.

   ```python
   cursor.execute("SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1;")
   all_data = cursor.fetchall()
   print(all_data)
   ```

2. Execute the code block.

### Next steps
For more information about using Neon with `pgvector`, see [The pgvector extension](https://neon.com/docs/extensions/pgvector).
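(Optional) When you have finished experimenting, it's good practice to close the cursor and connection in a final cell. This is standard `psycopg` usage rather than a step from the original guide:

```python
# Release the database connection when you're done
cursor.close()
connection.close()
```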
--- # Source: https://neon.com/llms/ai-ai-claude-code-plugin.txt # Claude Code plugin for Neon > The "Claude Code plugin for Neon" documentation details the integration of the Claude AI code assistant with Neon, facilitating automated code generation and optimization within the Neon environment. ## Source - [Claude Code plugin for Neon HTML](https://neon.com/docs/ai/ai-claude-code-plugin): The original HTML version of this documentation The **Neon Claude Code plugin** adds Neon-specific Skills and API access to Claude Code, Anthropic's AI development environment. It's part of the [Neon AI Rules toolkit](https://github.com/neondatabase-labs/ai-rules), and it bundles four guided Skills plus an MCP (Model Context Protocol) server integration. ## Overview Claude Skills are Markdown-based workflows that tell Claude how to complete specific tasks — like setting up a database connection, editing a file, or running a script. The Neon plugin packages several of these Skills into a reusable bundle, so Claude Code can interact directly with Neon Postgres. Once installed, the plugin gives Claude the ability to: - Create and manage Neon projects and databases - Connect frameworks like Drizzle ORM - Configure serverless Postgres connections - Reference Neon documentation and best practices in context ## What's included The plugin contains: - **4 Claude Skills** - **An MCP server integration** that connects Claude to Neon's APIs - **Portable context rules (.mdc files)** for other AI tools such as Cursor ### Included Skills | Skill | Description | | :--------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **neon-drizzle** | Guides Claude through setting up [Drizzle ORM](https://orm.drizzle.team) with Neon. Handles schema creation, connection setup, and project scaffolding. | | **neon-serverless** | Teaches Claude how to configure [Neon's serverless Postgres driver](https://neon.com/docs/serverless/serverless-driver) and test connections. | | **neon-toolkit** | Provides workflows for using the [Neon Management API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) to create databases, projects, and branches dynamically. | | **add-neon-knowledge** | Gives Claude access to [Neon documentation](https://neon.com/docs/introduction) snippets and usage examples — the "Neon brain." | ## How it works Each Skill is a Markdown file with a description and a step-by-step workflow. When you ask Claude to perform a task (for example, _"Integrate Neon with Drizzle"_), it checks the available Skill descriptions, finds a match, and loads the full instructions to complete the task. The plugin's MCP server integration lets Claude interact with Neon's live API endpoints. That means Claude can: - Query Neon for project information - Create or delete branches and databases - Validate connection strings - Run SQL queries and migrations ## Install the plugin in Claude Code 1. Add the Neon marketplace: ```bash /plugin marketplace add neondatabase-labs/ai-rules ``` 2. Install the Neon plugin: ```bash /plugin install neon-plugin@neon ``` 3. Verify the installation: Ask Claude Code: ```bash which skills do you have access to? ``` You should see the four Neon Skills listed. 4. Start using the Skills: Use natural language prompts like: > "Use the neon-drizzle Skill to set up Drizzle ORM with Neon." Claude will automatically select and execute the relevant workflow. 
## Use the rules outside Claude Code The [Neon AI Rules toolkit repository](https://github.com/neondatabase-labs/ai-rules) also includes portable `.mdc` context rule files. You can use them in: - **Cursor:** copy the `.mdc` files into `.cursor/rules/` - **Other AI tools:** place them in your assistant's custom rules directory These files include best-practice prompts and code patterns for connecting to and developing with Neon Postgres. ## Repository structure ``` ai-rules/ ├── .claude-plugin/ │ └── marketplace.json # Marketplace metadata ├── neon-plugin/ # Claude Code plugin │ ├── .claude-plugin/ │ │ └── plugin.json # Plugin configuration │ ├── .mcp.json # MCP server connection │ └── skills/ # Guided skills │ ├── neon-drizzle/ # Drizzle ORM skill │ │ ├── SKILL.md │ │ ├── guides/ # Workflow guides │ │ ├── references/ # Technical docs │ │ ├── scripts/ # Automation │ │ └── templates/ # Code examples │ ├── neon-serverless/ # Serverless skill │ ├── neon-toolkit/ # Ephemeral DB skill │ └── add-neon-docs/ # Docs installer skill ├── *.mdc # Context rules (13 files) ├── LICENSE └── README.md ``` ## Learn more - [Neon AI Rules toolkit](https://github.com/neondatabase-labs/ai-rules) - [Claude Skills documentation](https://docs.anthropic.com/en/docs/agents/claude-code) - [AI Agents and Tools overview](https://neon.com/docs/ai/ai-agents-tools) If you run into issues, visit our [Discord](https://discord.gg/neondatabase) or open an issue in the [ai-rules repository](https://github.com/neondatabase-labs/ai-rules/issues). --- # Source: https://neon.com/llms/ai-ai-concepts.txt # AI Concepts > The "AI Concepts" document outlines foundational artificial intelligence principles relevant to Neon, detailing how these concepts integrate with Neon's database functionalities to enhance data processing and management. ## Source - [AI Concepts HTML](https://neon.com/docs/ai/ai-concepts): The original HTML version of this documentation Embeddings are an essential component in building AI applications. This topic describes embeddings and how they are used, generated, and stored in Postgres. ## What are embeddings? When working with unstructured data, a common objective is to transform it into a more structured format that is easier to analyze and retrieve. This transformation can be achieved through the use of 'embeddings', which are vectors containing an array of floating-point numbers that represent the features or dimensions of your data. For example, a sentence like "The cow jumped over the moon" might be represented by an embedding that looks like this: [0.5, 0.3, 0.1]. The advantage of embeddings is that they allow us to measure the similarity between different pieces of text. By calculating the distance between two embeddings, we can assess their relatedness - the smaller the distance, the greater the similarity, and vice versa. This quality is particularly useful as it enables embeddings to capture the underlying meaning of the text. Take the following three sentences, for example: - Sentence 1: "The cow jumped over the moon." - Sentence 2: "The bovine leaped above the celestial body." - Sentence 3: "I enjoy eating pancakes." You can determine the most similar sentences by following these steps: 1. Generate embeddings for each sentence. For illustrative purposes, assume these values represent actual embeddings: - Embedding for sentence 1 → [0.5, 0.3, 0.1] - Embedding for sentence 2 → [0.6, 0.29, 0.12] - Embedding for sentence 3 → [0.1, -0.2, 0.4] 2. 
Compute the distance between all pairs of embeddings (1 & 2, 2 & 3, and 1 & 3).
3. Identify the pair of embeddings with the shortest distance between them.

When we apply this process, it is likely that sentences 1 and 2, both of which involve jumping cattle, will emerge as the most related according to a distance calculation.

## Vector similarity search
Transforming data into embeddings and computing similarities between one or more items is referred to as vector search or similarity search. This process has a wide range of applications, including:

- **Information retrieval:** By representing user queries as vectors, we can perform more accurate searches based on the meaning behind the queries, allowing us to retrieve more relevant information.
- **Natural language processing:** Embeddings capture the essence of the text, making them excellent tools for tasks such as text classification and sentiment analysis.
- **Recommendation systems:** Using vector similarity, we can recommend items similar to a given item, whether they be movies, products, books, or otherwise. This technique allows us to create more personalized and relevant recommendations.
- **Anomaly detection:** By determining the similarity between items within a dataset, we can identify outliers or anomalies—items that don't quite fit the pattern. This can be crucial in many fields, from cybersecurity to quality control.

### Distance metrics
Vector similarity search computes the similarity between data points as a distance. Calculating how far apart data points are helps us understand the relationship between them. Distance can be computed in different ways using different metrics. Some popular distance metrics include:

- Euclidean (L2): Often referred to as the "ordinary" distance you'd measure with a ruler.
- Manhattan (L1): Also known as "taxicab" or "city block" distance.
- Cosine: This calculates the cosine of the angle between two vectors.

Other distance metrics supported by the `pgvector` extension include [Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) and [Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index). Different distance metrics can be more appropriate for different tasks, depending on the nature of the data and the specific relationships you're interested in. For instance, cosine similarity is often used in text analysis.
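To make the distance calculation concrete, here are the pairwise distances for the three illustrative embeddings above, computed with the Euclidean and cosine metrics in plain Python (no libraries required):

```python
import math

e1 = [0.5, 0.3, 0.1]    # "The cow jumped over the moon."
e2 = [0.6, 0.29, 0.12]  # "The bovine leaped above the celestial body."
e3 = [0.1, -0.2, 0.4]   # "I enjoy eating pancakes."

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (norm_a * norm_b)

for label, (a, b) in {"1 & 2": (e1, e2), "2 & 3": (e2, e3), "1 & 3": (e1, e3)}.items():
    print(label, round(euclidean(a, b), 3), round(cosine_distance(a, b), 3))

# Pair 1 & 2 has the smallest distance under both metrics,
# so sentences 1 and 2 are the most similar.
```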
## Generating embeddings
A common approach to generating embeddings is to use an LLM API, such as [OpenAI's Embeddings API](https://platform.openai.com/docs/api-reference/embeddings). This API allows you to input a text string into an API endpoint, which then returns the corresponding embedding. The "cow jumped over the moon" embedding above is a simplistic example with only 3 dimensions; most embedding models generate embeddings with a much larger number of dimensions. OpenAI's newest and most performant embedding models, `text-embedding-3-small` and `text-embedding-3-large`, generate embeddings with 1536 and 3072 dimensions by default, respectively.

Here's an example of how to use OpenAI's `text-embedding-3-small` model to generate an embedding:

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "input": "Your text string goes here",
    "model": "text-embedding-3-small"
  }'
```

**Note**: Running the command above requires an OpenAI API key, which must be obtained from [OpenAI](https://platform.openai.com/).

Upon successful execution, you'll receive a response similar to the following:

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        -0.006929283495992422,
        -0.005336422007530928,
        ... (omitted for spacing)
        -4.547132266452536e-05,
        -0.024047505110502243
      ]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 5,
    "total_tokens": 5
  }
}
```

To learn more about OpenAI's embeddings, see [Embeddings](https://platform.openai.com/docs/guides/embeddings). Here, you'll find an example of obtaining embeddings from an [Amazon fine-food reviews](https://www.kaggle.com/datasets/snap/amazon-fine-food-reviews) dataset supplied as a CSV file. See [Obtaining the embeddings](https://platform.openai.com/docs/guides/embeddings/use-cases).

There are many embedding models you can use, such as those provided by Mistral AI, Cohere, Hugging Face, etc. AI tools like [LangChain](https://www.langchain.com/) provide interfaces and integrations for working with a variety of models. See [LangChain: Text embedding models](https://js.langchain.com/v0.1/docs/integrations/text_embedding/). You'll also find a [Neon Postgres guide](https://js.langchain.com/v0.1/docs/integrations/vectorstores/neon/) on the LangChain site and [Class NeonPostgres](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_neon.NeonPostgres.html), which provides an interface for working with a Neon Postgres database.

## Storing vector embeddings in Postgres
Neon supports the [pgvector](https://neon.com/docs/extensions/pgvector) Postgres extension, which enables the storage and retrieval of vector embeddings directly within your Postgres database. When building AI applications, installing this extension eliminates the need to extend your architecture to include a separate vector store.

Installing the `pgvector` extension simply requires running the following `CREATE EXTENSION` statement from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any SQL client connected to your Neon Postgres database.

```sql
CREATE EXTENSION vector;
```

After installing the `pgvector` extension, you can create a table to store your embeddings. For example, you might define a table similar to the following to store your embeddings:

```sql
CREATE TABLE items(id BIGSERIAL PRIMARY KEY, embedding VECTOR(1536));
```

To add embeddings to the table, you would insert the data as shown:

```sql
INSERT INTO items(embedding) VALUES ('[
    -0.006929283495992422,
    -0.005336422007530928,
    ...
    -4.547132266452536e-05,
    -0.024047505110502243
]');
```

For detailed information about using `pgvector`, refer to our guide: [The pgvector extension](https://neon.com/docs/extensions/pgvector).

---

# Source: https://neon.com/llms/ai-ai-database-versioning.txt

# Database versioning with snapshots

> The document explains how Neon users can utilize database versioning with snapshots to manage and track changes in their databases effectively.

## Source
- [Database versioning with snapshots HTML](https://neon.com/docs/ai/ai-database-versioning): The original HTML version of this documentation

**Note** Beta: Snapshots are available in Beta. Please give us [Feedback](https://console.neon.tech/app/projects?modal=feedback) from the Neon Console or by connecting with us on [Discord](https://discord.gg/92vNTzKDGp). There is no charge for snapshots while the feature is in Beta. There is a limit of 1 snapshot on the Free plan and 10 on paid plans.
The same limits apply to the Neon [Agent plan](https://neon.com/use-cases/ai-agents). If you need higher limits, please reach out to [Neon support](https://neon.com/docs/introduction/support). ## Overview This guide describes how you can implement database versioning for AI agent and code generation platforms using Neon's snapshot APIs. With snapshots, you can create point-in-time database versions, perform instant rollbacks, and maintain stable database connection strings for your applications. See a working implementation in the [demonstration repository](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo). > **Terminology note:** This guide uses "versions" to describe saved database states from the user's perspective, and "snapshots" when referring to Neon's technical implementation. You may also see these called "checkpoints" or "edits" in some AI agent contexts. **Tip** Synopsis: Use the project's root branch for production, whose database connection string stays the same when a snapshot restore is finalized. Create snapshots to save database versions. For rollbacks, restore snapshots with `finalize_restore: true` and `target_branch_id` set to your root branch ID, then poll operations until complete before connecting. For previews, use `finalize_restore: false` to create temporary branches with their own database connection strings. ## Why use snapshots for versioning Standard database branching is great for development but less suitable for versioning. Each new branch gets a new database connection string and creates dependency chains that complicate deletion. This pattern solves both problems. By restoring a Neon snapshot to your active branch with `finalize_restore: true`, you replace its data in-place while preserving the original, stable connection string. This makes the snapshot-restore pattern ideal for versioned environments where connection stability is needed. ## Quick start with the demo The best way to understand this pattern is to see it in action: 1. **Clone the snapshots demo app**: - https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo 2. **Key files to examine**: - [lib/neon/create-snapshot.ts](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo/blob/main/lib/neon/create-snapshot.ts) - Snapshot creation implementation - [lib/neon/apply-snapshot.ts](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo/blob/main/lib/neon/apply-snapshot.ts) - Complete restore workflow with operations polling - [lib/neon/operations.ts](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo/blob/main/lib/neon/operations.ts) - Operation status polling logic - [app/[checkpointId]/page.tsx](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo/blob/main/app/[checkpointId]/page.tsx) - UI integration showing versions and rollbacks 3. Run locally or use the [public demo](https://snapshots-as-checkpoints-demo.vercel.app/) to see version creation, rollbacks, and previews in action > **Note:** The demo repository uses "checkpoint" terminology which maps to "version" in this guide. > The demo implements a contacts application that evolves through agent prompts, demonstrating version creation and restoration at each stage: **v0: empty app** → **v1: basic contacts** → **v2: add role/company** → **v3: add tags** ## The active branch pattern Every agent project maps to one Neon project with a designated [root branch](https://neon.com/docs/reference/glossary#root-branch) that serves as the production database. 
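Since the whole pattern hinges on this root branch, it helps to be able to locate it programmatically: list the project's branches and pick the one without a parent. Below is a minimal sketch using Python's `requests` against the documented [list branches endpoint](https://api-docs.neon.tech/reference/listprojectbranches); verify the response field names against the current Neon API reference:

```python
import os
import requests

def find_root_branch_id(project_id: str) -> str:
    """Return the ID of the project's root branch (the branch with no parent)."""
    response = requests.get(
        f"https://console.neon.tech/api/v2/projects/{project_id}/branches",
        headers={"Authorization": f"Bearer {os.environ['NEON_API_KEY']}"},
    )
    response.raise_for_status()
    branches = response.json()["branches"]
    roots = [b for b in branches if not b.get("parent_id")]  # root = no parent
    return roots[0]["id"]
```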
**Important:** Snapshots can only be created from root branches in Neon. A root branch is a branch with no parent (typically named `main` or `production`).

**The active branch:**
- Gets its data replaced during finalized rollbacks
- Maintains a consistent database connection string through Neon's restore mechanism — see [How restore works](https://neon.com/docs/ai/ai-database-versioning#how-restore-works) for details
- Must be a root branch for snapshot creation

**The snapshots:**
- Capture point-in-time database versions
- Store only incremental changes (cost-efficient)
- Can be restored to the active branch or to a temporary preview branch

## Implementation

### Creating snapshots
Create a snapshot to capture the current database version using the [snapshot endpoint](https://api-docs.neon.tech/reference/createsnapshot):

```bash
POST /api/v2/projects/{project_id}/branches/{branch_id}/snapshot
```

> **Demo implementation:** See [lib/neon/create-snapshot.ts](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo/blob/main/lib/neon/create-snapshot.ts) for an example with error handling and operation polling.

**Path parameters:**
- `project_id` (string, required): The Neon project ID
- `branch_id` (string, required): The active branch ID (must be a root branch)

**Query parameters:**
- `lsn` (string): Target Log Sequence Number. Cannot be used with `timestamp`
- `timestamp` (string): Target timestamp (RFC 3339). Cannot be used with `lsn`
- `name` (string): Name for the snapshot
- `expires_at` (string): Auto-deletion time (RFC 3339)

**Example:**

```bash
curl --request POST \
  --url 'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}/snapshot?name=version-session-1&expires_at=2025-08-13T00:00:00Z' \
  --header "Authorization: Bearer $NEON_API_KEY"
```

**When to create snapshots:**
- Start of each agent session
- Before database schema changes
- After successful operations
- User-initiated save points

### Rolling back to (restoring) a snapshot
Restore any snapshot to recover a previous version using the [restore endpoint](https://api-docs.neon.tech/reference/restoresnapshot):

```bash
POST /api/v2/projects/{project_id}/snapshots/{snapshot_id}/restore
```

> **Demo implementation:** See [lib/neon/apply-snapshot.ts](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo/blob/main/lib/neon/apply-snapshot.ts) for the complete restore workflow including operation polling and error handling.

**Path parameters:**
- `project_id` (string, required): The Neon project ID
- `snapshot_id` (string, required): The snapshot ID being restored

**Body parameters:**
- `name` (string): A name for the newly restored branch. If omitted, a default name is generated
- `target_branch_id` (string): The ID of the branch to restore the snapshot into. If not specified, the branch from which the snapshot was originally created will be used. **Set this value to your root branch ID for rollbacks to preserve connection strings**
- `finalize_restore` (boolean): Set to `true` to finalize the restore operation immediately. This will complete the restore and move computes from the current branch to the new branch, which keeps the database connection string the same. Defaults to `false`. **Set to `true` when restoring to the active branch for rollbacks, `false` to create preview branches**

**Note** Connection warning: Do not connect to the database until current API operations are complete.
Any connection attempt before operations finish will either fail or connect to the old database state, not your restored version. #### How restore works Understanding the restore mechanism explains why the connection string remains stable: 1. **New branch creation**: When you restore with `finalize_restore: true`, Neon first creates a new branch from your snapshot. This new branch has a different, system-generated branch ID. 2. **Endpoint transfer**: Neon then transfers the compute endpoint (and its associated connection string) from your original active branch to this newly created branch. 3. **Settings migration**: All branch settings, including its name, are copied to the new active branch, making it appear identical to the old one. Only the branch ID is different. 4. **Branch orphan**: Your original branch becomes "orphaned." It is disconnected from the compute endpoint and renamed by adding an "(old)" suffix (e.g., `main (old)`) to the branch name. **Info** Branch ID changes after restore: The connection string remains stable, but the branch ID changes with every `finalize_restore: true` operation. If you store the branch ID for use in subsequent API calls (e.g., to create the next snapshot), you must retrieve and store the new branch ID after the restore operation completes. #### Rollback workflow Restore any snapshot to your active branch, preserving the connection string: ```json { "target_branch_id": "br-active-branch-123456", // Your root branch ID "finalize_restore": true // Moves computes and preserves connection string } ``` > **Important:** When restoring with `finalize_restore: true`, your previous active branch becomes orphaned and is renamed with `(old)` appended, such as `production (old)` or similar. This orphaned branch is no longer connected to any compute endpoint but preserves your pre-restore state. Delete it during cleanup to avoid unnecessary costs. > After calling the restore API: 1. Extract the array of operation IDs from the API response. 2. For each operation ID, poll the operations endpoint until its status reaches a terminal state (finished, failed, cancelled, or skipped). 3. Do not attempt to connect to the database until all operations are complete. Connections made before completion will point to the old, pre-restore database state. 4. After verifying a successful restore, delete the orphaned branch (e.g., `main (old)`) to avoid incurring storage costs. > See the [poll operation status](https://neon.com/docs/manage/operations#poll-operation-status) documentation for related information. 
> **Polling operations example:** ```javascript // Poll operation status until complete async function waitForOperation(projectId, operationId) { while (true) { const response = await fetch( `https://console.neon.tech/api/v2/projects/${projectId}/operations/${operationId}`, { headers: { Authorization: `Bearer ${NEON_API_KEY}` } } ); const { status } = await response.json(); // Terminal states - safe to proceed if (['finished', 'skipped', 'cancelled'].includes(status)) { return; } // Error state - handle appropriately if (status === 'failed') { throw new Error('Operation failed'); } // Still running - wait and retry await new Promise((resolve) => setTimeout(resolve, 5000)); } } // After restore API call const restoreResponse = await restoreSnapshot(projectId, snapshotId); const operationIds = restoreResponse.operations.map((op) => op.id); // Wait for all operations to complete for (const id of operationIds) { await waitForOperation(projectId, id); } // NOW safe to connect to the restored database ``` **Potential restore-related problems:** - **Connection to old state**: Ensure all operations completed - **Target branch not found**: Verify branch exists - **Operation timeout**: Retry with longer timeout - **Accumulating orphaned branches**: Delete orphaned branches (e.g., `production (old)`) after successful restore verification #### Preview environments Create a temporary branch to preview a version without affecting the active branch: ```json { "name": "preview-version-123", "finalize_restore": false // Creates new branch for preview without moving computes } ``` This creates a new branch with its own connection string for preview. The active branch remains unchanged. Preview branches should be deleted after use to avoid storage costs. ### Managing snapshots #### List available snapshots Get all snapshots with IDs, names, and timestamps using the [list snapshots endpoint](https://api-docs.neon.tech/reference/listsnapshots): ```bash GET /api/v2/projects/{project_id}/snapshots ``` **Path parameters:** - `project_id` (string, required): The Neon project ID #### Delete snapshot Remove a snapshot using the [delete endpoint](https://api-docs.neon.tech/reference/deletesnapshot): ```bash DELETE /api/v2/projects/{project_id}/snapshots/{snapshot_id} ``` **Path parameters:** - `project_id` (string, required): The Neon project ID - `snapshot_id` (string, required): The snapshot ID #### Update snapshot name Rename a snapshot using the [update endpoint](https://api-docs.neon.tech/reference/updatesnapshot): ```bash PATCH /api/v2/projects/{project_id}/snapshots/{snapshot_id} ``` **Path parameters:** - `project_id` (string, required): The Neon project ID - `snapshot_id` (string, required): The snapshot ID **Body:** ```json { "snapshot": { "name": "important-milestone" } } ``` #### Cleanup strategy Proper cleanup reduces costs and keeps your project manageable: - Delete snapshots when no longer reachable by users - Restoring from a snapshot doesn't lock that snapshot from deletion, unlike branches where creating child branches prevents deleting the parent - Delete orphaned branches created during restores (named like `production (old)`) - These orphaned branches accumulate with each restore and consume storage - Reduces snapshot management fees while shared storage remains - Set `expires_at` for automatic cleanup or delete manually as needed - Consider removing snapshots after merging features or completing rollback testing ## Concepts and terminology ### Neon core concepts - 
[Project](https://neon.com/docs/reference/glossary#project): The Neon project that owns branches, computes, snapshots, and more
- [Branch](https://neon.com/docs/reference/glossary#branch): An isolated database environment with its own data and schema that you can connect to and modify
- [Snapshot](https://neon.com/docs/reference/glossary#snapshot): An immutable, point-in-time backup of a branch's schema and data. Read-only until restored
- [Root branch](https://neon.com/docs/reference/glossary#root-branch): A branch with no parent (typically named `main` or `production`). The only type of branch from which snapshots can be created
- [Operations](https://neon.com/docs/manage/operations#operations-and-the-neon-api): Backend operations that return operation IDs you must poll to completion

### Pattern-specific terminology
- **Active branch**: The root branch that serves as your agent's production database (though technically replaced during restores). Its connection string never changes, even when data is replaced via restore. Preview branches may be created alongside it for temporary exploration.
- **Version**: A saved database state captured as a snapshot, which may also be referred to as a checkpoint or edit. Users create and restore versions through your application interface.
- **Orphaned branch**: Created when restoring with `finalize_restore: true`. The previous active branch becomes orphaned (disconnected from compute) and is renamed to `branch-name (old)`. Can be safely deleted after verifying the restore.
- **Preview branch**: Temporary branch created from a snapshot for safe exploration, used to preview a version

## API quick reference

| Operation | Endpoint | Description |
| --- | --- | --- |
| [Create snapshot](https://api-docs.neon.tech/reference/createsnapshot) | `POST /api/v2/projects/{project_id}/branches/{branch_id}/snapshot` | Save current database state as a new version |
| [Restore snapshot](https://api-docs.neon.tech/reference/restoresnapshot) | `POST /api/v2/projects/{project_id}/snapshots/{snapshot_id}/restore` | Restore database to a previous version |
| [List snapshots](https://api-docs.neon.tech/reference/listsnapshots) | `GET /api/v2/projects/{project_id}/snapshots` | Get all available versions |
| [Delete snapshot](https://api-docs.neon.tech/reference/deletesnapshot) | `DELETE /api/v2/projects/{project_id}/snapshots/{snapshot_id}` | Remove a saved version |
| [Update snapshot](https://api-docs.neon.tech/reference/updatesnapshot) | `PATCH /api/v2/projects/{project_id}/snapshots/{snapshot_id}` | Rename a version |
| [Poll operation](https://api-docs.neon.tech/reference/getprojectoperation) | `GET /api/v2/projects/{project_id}/operations/{operation_id}` | Check restore status |
| [List branches](https://api-docs.neon.tech/reference/listprojectbranches) (for cleanup) | `GET /api/v2/projects/{project_id}/branches` | Find orphaned branches to clean up |

## Implementation checklist
- [ ] Create one Neon project per agent project
- [ ] Designate the root branch (main/production) as the "active" branch
- [ ] Store the active branch connection string and branch ID
- [ ] Create snapshots at key points to save database versions
- [ ] For rollbacks: restore with `finalize_restore: true` and set `target_branch_id` to the root branch ID
- [ ] After a rollback, update your stored active branch ID
- [ ] For previews: restore with `finalize_restore: false` to create temporary branches
- [ ] Poll all operation IDs to terminal states before connecting
- [ ] Implement a cleanup strategy: set snapshot expiration dates and delete orphaned branches

## Best practices
- **Set `target_branch_id` for rollbacks**: When restoring to the active branch, always specify `target_branch_id` to prevent accidental restores
- **Poll operations**: Wait for terminal states before connecting to the database
- **Snapshot naming**: Use conventions like `snapshot-{GIT_SHA}-{TIMESTAMP}` or maintain sequential version numbers
- **Cleanup strategy**: Set `expires_at` on temporary snapshots and preview branches. Delete orphaned branches (e.g., `production (old)`) created during restores
- **Version metadata**: Keep version metadata separate to preserve an audit trail across restores

## Summary
The active branch pattern with Neon snapshots provides a simple, reliable versioning solution for AI agent and codegen platforms. By keeping connection strings stable through the restore mechanism and using snapshots to implement version control, you get a stable connection string for your main database, instant rollbacks to previous versions, and the flexibility to create preview branches when needed. The implementation is straightforward: create snapshots to save versions, restore with `finalize_restore: true` to the active branch for rollbacks, or with `finalize_restore: false` for preview branches. Always poll operations to completion before connecting. See the [demo repository](https://github.com/neondatabase-labs/snapshots-as-checkpoints-demo) for a complete example.

---

# Source: https://neon.com/llms/ai-ai-google-colab.txt

# Google Colab

> The document outlines how to integrate and use Neon with Google Colab, detailing steps for setting up a PostgreSQL database connection within the Colab environment for data analysis and machine learning tasks.

## Source
- [Google Colab HTML](https://neon.com/docs/ai/ai-google-colab): The original HTML version of this documentation

[Google Colab](https://colab.research.google.com/) is a hosted Jupyter Notebook service that requires no setup to use and provides free access to computing resources, including GPUs and TPUs. You can use Google Colab to run Python code through the browser. This guide shows how to create a notebook in Colab, connect to a Neon database, install the `pgvector` extension to enable Neon as a vector store, and run a vector search query.

## Prerequisites
To perform the steps in this guide, you require a Neon database for storing vectors. You can use the ready-to-use `neondb` database or create your own. See [Create a database](https://neon.com/docs/manage/databases#create-a-database) for instructions.

## Retrieve your database connection string
Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal, and select a branch, a user, and the database you want to connect to. A connection string is constructed for you.

## Create a notebook
In your browser, navigate to [Google Colab](https://colab.research.google.com/), and click **New notebook**. Alternatively, you can open a predefined Google Colab notebook for this guide by clicking the **Open in Colab** button in the [HTML version of this guide](https://neon.com/docs/ai/ai-google-colab).

## Connect to your database
1. In your Colab notebook, create a code block to define your database connection and create a cursor object.
Replace `postgresql://[user]:[password]@[neon_hostname]/[dbname]` with the database connection string you retrieved in the previous step.

   ```python
   import os
   import psycopg2

   # Provide your Neon connection string
   connection_string = "postgresql://[user]:[password]@[neon_hostname]/[dbname]"

   # Connect using the connection string
   connection = psycopg2.connect(connection_string)

   # Create a new cursor object
   cursor = connection.cursor()
   ```

2. Execute the code block (**Ctrl** + **Enter**).
3. Add a code block for testing the database connection.

   ```python
   # Execute this query to test the database connection
   cursor.execute("SELECT 1;")
   result = cursor.fetchone()

   # Check the query result
   if result == (1,):
       print("Your database connection was successful!")
   else:
       print("Your connection failed.")
   ```

4. Execute the code block (**Ctrl** + **Enter**).

## Install the pgvector extension
1. Create a code block to install the `pgvector` extension to enable your Neon database as a vector store:

   ```python
   # Execute this query to install the pgvector extension
   cursor.execute("CREATE EXTENSION IF NOT EXISTS vector;")
   ```

2. Execute the code block (**Ctrl** + **Enter**).

## Create a table and add vector data
1. Add a code block to create a table and insert data:

   ```python
   create_table_sql = '''
   CREATE TABLE items (
     id BIGSERIAL PRIMARY KEY,
     embedding VECTOR(3)
   );
   '''

   # Insert data
   insert_data_sql = '''
   INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]'), ('[7,8,9]');
   '''

   # Execute the SQL statements
   cursor.execute(create_table_sql)
   cursor.execute(insert_data_sql)

   # Commit the changes
   connection.commit()
   ```

2. Execute the code block (**Ctrl** + **Enter**).

## Query your data
1. Add a code block to perform a vector similarity search.

   ```python
   cursor.execute("SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 3;")
   all_data = cursor.fetchall()
   print(all_data)
   ```

2. Execute the code block (**Ctrl** + **Enter**).

## Next steps
For more information about using Neon with `pgvector`, see [The pgvector extension](https://neon.com/docs/extensions/pgvector).

---

# Source: https://neon.com/llms/ai-ai-intro.txt

# AI Starter Kit

> The AI App Starter Kit documentation introduces Neon users to integrating AI capabilities with their databases, detailing setup instructions, configuration options, and example use cases to enhance data management and analysis.

## Source
- [AI Starter Kit HTML](https://neon.com/docs/ai/ai-intro): The original HTML version of this documentation

This guide collects resources for building AI applications with Neon Postgres. You'll find core concepts, starter applications, framework integrations, and deployment guides. Use these resources to build applications like RAG chatbots, semantic search engines, or custom AI tools.
## Getting started
Learn the fundamentals of building AI applications with Neon:
- [AI concepts](https://neon.com/docs/ai/ai-concepts): Learn the fundamentals of embeddings and vector search for AI applications
- [pgvector extension](https://neon.com/docs/extensions/pgvector): Get started with pgvector for storing and querying vector embeddings

## AI frameworks and integrations
Build AI applications faster with these popular frameworks, tools, and services:
- [LangChain](https://neon.com/docs/ai/langchain): Create AI applications using LangChain with OpenAI and Neon
- [LlamaIndex](https://neon.com/docs/ai/llamaindex): Build RAG applications using LlamaIndex with OpenAI and Neon
- [Semantic Kernel](https://neon.com/docs/ai/semantic-kernel): Develop AI applications using Semantic Kernel with Azure OpenAI
- [Inngest](https://neon.com/docs/ai/inngest): Build reliable AI workflows with Inngest and Neon
- [app.build](https://neon.com/docs/ai/ai-app-build): Generate and deploy web applications using the open-source app.build agent

## Starter applications
Hackable, fully-featured, pre-built starter apps to get you up and running:
- [AI chatbot (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/chatbot-nextjs): A Next.js AI chatbot starter app built with OpenAI and LlamaIndex
- [AI chatbot (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/chatbot-nextjs): A Next.js AI chatbot starter app built with OpenAI and LangChain
- [RAG chatbot (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/rag-nextjs): A Next.js RAG chatbot starter app built with OpenAI and LlamaIndex
- [RAG chatbot (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/rag-nextjs): A Next.js RAG chatbot starter app built with OpenAI and LangChain
- [Semantic search (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/semantic-search-nextjs): A Next.js Semantic Search chatbot starter app built with OpenAI and LlamaIndex
- [Semantic search (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/semantic-search-nextjs): A Next.js Semantic Search chatbot starter app built with OpenAI and LangChain
- [Hybrid search (OpenAI)](https://github.com/neondatabase/examples/tree/main/ai/hybrid-search-nextjs): A Next.js Hybrid Search starter app built with OpenAI
- [Reverse image search (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/reverse-image-search-nextjs): A Next.js Reverse Image Search Engine starter app built with OpenAI and LlamaIndex
- [Chat with PDF (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/chat-with-pdf-nextjs): A Next.js Chat with PDF chatbot starter app built with OpenAI and LlamaIndex
- [Chat with PDF (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/chat-with-pdf-nextjs): A Next.js Chat with PDF chatbot starter app built with OpenAI and LangChain

## Scale your AI application
- [Scale with Neon](https://neon.com/docs/ai/ai-scale-with-neon): Learn how to scale your AI application with Autoscaling and Read Replicas
- [Optimize vector search](https://neon.com/docs/ai/ai-vector-search-optimization): Best practices for optimizing vector search performance

## Featured examples
Real-world AI applications built with Neon that you can reference as code examples or inspiration.
**Tip** Built something cool?: Share your AI app on our [#showcase](https://discord.gg/neon) channel on Discord.

- [AI vector database per tenant](https://github.com/neondatabase/ai-vector-db-per-tenant): Deploy an AI vector database per-tenant architecture with Neon
- [Guide: Build a RAG chatbot](https://neon.com/guides/chatbot-astro-postgres-llamaindex): Build a RAG chatbot in an Astro application with LlamaIndex and Postgres
- [Guide: Build a Reverse Image Search Engine](https://neon.com/guides/llamaindex-postgres-search-images): Using LlamaIndex with Postgres to build your own Reverse Image Search Engine
- [Ask Neon Chatbot](https://github.com/neondatabase/ask-neon): An Ask Neon AI-powered chatbot built with pgvector
- [Vercel Postgres pgvector Starter](https://vercel.com/templates/next.js/postgres-pgvector): Enable vector similarity search with Vercel Postgres powered by Neon
- [YCombinator Semantic Search App](https://github.com/neondatabase/yc-idea-matcher): YCombinator semantic search application
- [Web-based AI SQL Playground](https://github.com/neondatabase/postgres-ai-playground): An AI-enabled SQL playground application for natural language queries
- [Jupyter Notebook for vector search with Neon](https://github.com/neondatabase/neon-vector-search-openai-notebooks): Jupyter Notebook for vector search with Neon, pgvector, and OpenAI
- [Image search with Neon and Vertex AI](https://github.com/ItzCrazyKns/Neon-Image-Search): Community: An image search app built with Neon and Vertex AI
- [Text-to-SQL conversion with Mistral + LangChain](https://github.com/mistralai/cookbook/blob/main/third_party/Neon/neon_text_to_sql.ipynb): A Text-to-SQL conversion app built with Mistral AI, Neon, and LangChain
- [Postgres GPT Expert](https://neon.com/blog/openais-gpt-store-is-live-create-and-publish-a-custom-postgres-gpt-expert): Blog + repo: Create and publish a custom Postgres GPT Expert using OpenAI's GPT

## Vector search tools and notebooks
Optimize your vector search implementation and experiment with different approaches:
- [Vector search optimization](https://neon.com/docs/ai/ai-vector-search-optimization): Best practices for optimizing vector search performance
- [Vector search notebooks](https://github.com/neondatabase/neon-vector-search-openai-notebooks): Interactive notebooks for vector search with OpenAI
- [Google Colab guide](https://neon.com/docs/ai/ai-google-colab): Use Neon with Google Colab for ML experiments
- [Azure Data Studio Notebooks](https://neon.com/docs/ai/ai-azure-notebooks): Use Jupyter notebooks in Azure Data Studio with Neon

---

# Source: https://neon.com/llms/ai-ai-rules-neon-api.txt

# AI Rules: Neon API

> The "AI Rules: Neon API" document outlines the guidelines and specifications for integrating AI functionalities within the Neon API, detailing the necessary parameters and protocols for developers to effectively implement AI-driven features in their applications.

## Source
- [AI Rules: Neon API HTML](https://neon.com/docs/ai/ai-rules-neon-api): The original HTML version of this documentation

**Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions.
Related docs: - [Neon API](https://neon.com/docs/reference/api-reference) - [Neon API Reference Docs](https://api-docs.neon.tech/reference/getting-started-with-neon-api) Repository: - [neon-api-guidelines.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-api-guidelines.mdc) ## How to use You can use the following Neon API rules in two ways: ## Option 1: Copy from this page With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/file_name.mdc` (for example, `.cursor/rules/neon-api-guidelines.mdc`) and they'll be automatically applied when working with matching files (such as `*.ts`, `*.tsx`). For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands). ## Option 2: Clone from repository If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules). Once added to your project, AI tools will automatically use these rules when working with the Neon API. You can also reference them explicitly in prompts. ## Neon API rules: General guidelines Save the following content to a file named `neon-api-guidelines.mdc` in your AI tool's rules directory. ````md --- description: Use these rules to understand how to interact with the Neon API, including authentication, rate limiting, and best practices. alwaysApply: false --- ## Overview This document provides a comprehensive set of rules and guidelines for an AI agent to interact with the Neon API. The Neon API is a RESTful service that allows for programmatic management of all Neon resources. Adherence to these rules ensures correct, efficient, and safe API usage. ### General API guidelines All Neon API requests must be made to the following base URL: ``` https://console.neon.tech/api/v2/ ``` To construct a full request URL, append the specific endpoint path to this base URL. ### Authentication - All API requests must be authenticated using a Neon API key. - The API key must be included in the `Authorization` header using the `Bearer` authentication scheme. - The header should be formatted as: `Authorization: Bearer $NEON_API_KEY`, where `$NEON_API_KEY` is a valid Neon API key. - A request without a valid `Authorization` header will fail with a `401 Unauthorized` status code. ### API rate limiting - Neon limits API requests to 700 requests per minute (approximately 11 per second). - Bursts of up to 40 requests per second per route are permitted. - If the rate limit is exceeded, the API will respond with an `HTTP 429 Too Many Requests` error. - Your application logic must handle `429` errors and implement a retry strategy with appropriate backoff, as sketched below.
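For example, a minimal retry sketch in bash (the endpoint and the backoff schedule are illustrative placeholders, not an official recommendation):

```bash
# Retry a Neon API request with exponential backoff on HTTP 429.
# Assumes NEON_API_KEY is set in the environment.
url="https://console.neon.tech/api/v2/projects"

for attempt in 1 2 3 4 5; do
  status=$(curl -s -o response.json -w "%{http_code}" "$url" \
    -H "Accept: application/json" \
    -H "Authorization: Bearer $NEON_API_KEY")
  [ "$status" != "429" ] && break   # success or a non-rate-limit error
  sleep $((2 ** attempt))           # back off: 2s, 4s, 8s, 16s, 32s
done
echo "Last status: $status"         # the response body is in response.json
```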
### Neon core concepts To effectively use the Neon API, it's essential to understand the hierarchy and purpose of its core resources. The following table provides a high-level overview of each concept.

| Concept | Description | Analogy/Purpose | Key Relationship |
| --- | --- | --- | --- |
| Organization | The highest-level container, managing billing, users, and multiple projects. | A GitHub Organization or a company's cloud account. | Contains one or more Projects. |
| Project | The primary container that contains all related database resources for a single application or service. | A Git repository or a top-level folder for an application. | Lives within an Organization (or a personal account). Contains Branches. |
| Branch | A lightweight, copy-on-write clone of a database's state at a specific point in time. | A `git branch`. Used for isolated development, testing, staging, or previews without duplicating storage costs. | Belongs to a Project. Contains its own set of Databases and Roles, cloned from its parent. |
| Compute Endpoint | The actual running PostgreSQL instance that you connect to. It provides the CPU and RAM for processing queries. | The "server" or "engine" for your database. It can be started, suspended (scaled to zero), and resized. | Is attached to a single Branch. Your connection string points to a Compute Endpoint's hostname. |
| Database | A logical container for your data (tables, schemas, views) within a branch. It follows standard PostgreSQL conventions. | A single database within a PostgreSQL server instance. | Exists within a Branch. A branch can have multiple databases. |
| Role | A PostgreSQL role used for authentication (logging in) and authorization (permissions to access data). | A database user account with a username and password. | Belongs to a Branch. Roles from a parent branch are copied to child branches upon creation. |
| API Key | A secret token used to authenticate requests to the Neon API. Keys have different scopes (Personal, Organization, Project-scoped). | A password for programmatic access, allowing you to manage all other Neon resources. | Authenticates actions on Organizations, Projects, Branches, etc. |
| Operation | An asynchronous action performed by the Neon control plane, such as creating a branch or starting a compute. | A background job or task. Its status can be polled to know when an action is complete. | Associated with a Project and often a specific Branch or Endpoint. Essential for scripting API calls. |

### Understanding API key types When performing actions via the API, you must select the correct type of API key based on the required scope and permissions. There are three types: 1. Personal API Key - Scope: Accesses all projects that the user who created the key is a member of. - Permissions: The key has the same permissions as its owner. If the user's access is revoked from an organization, the key loses access too. - Best For: Individual use, scripting, and tasks tied to a specific user's permissions. - Created By: Any user.
2. Organization API Key - Scope: Accesses all projects and resources within an entire organization. - Permissions: Has admin-level access across the organization, independent of any single user. It remains valid even if the creator leaves the organization. - Best For: CI/CD pipelines, organization-wide automation, and service accounts that need broad access. - Created By: Organization administrators only. 3. Project-scoped API Key - Scope: Access is strictly limited to a single, specified project. - Permissions: Cannot perform organization-level actions (like creating new projects) or delete the project it is scoped to. This is the most secure and limited key type. - Best For: Project-specific integrations, third-party services, or automation that should be isolated to one project. - Created By: Any organization member. ```` ## Neon API rules: Manage API keys Save the following content to a file named `neon-api-keys.mdc` in your AI tool's rules directory. ````md --- description: Use these rules to manage Neon API keys programmatically, including creating, listing, and revoking keys. alwaysApply: false --- ## Overview This document outlines the rules for managing Neon API keys programmatically. It covers listing existing keys, creating new keys, and revoking keys. ### Important note on creating API keys To create new API keys using the API, you must already possess a valid Personal API Key. The first key must be created from the Neon Console. You can ask the user to create one for you if you do not have one. ### List API keys - Endpoint: `GET /api_keys` - Authorization: Use a Personal API Key. Example request: ```bash curl "https://console.neon.tech/api/v2/api_keys" \ -H "Authorization: Bearer $PERSONAL_API_KEY" ``` Example response: ```json [ { "id": 2291506, "name": "my-personal-key", "created_at": "2025-09-10T09:44:04Z", "created_by": { "id": "487de658-08ba-4363-b387-86d18b9ad1c8", "name": "", "image": "" }, "last_used_at": "2025-09-10T09:44:09Z", "last_used_from_addr": "49.43.218.132,34.211.200.85" } ] ``` ### Create an API key - Endpoint: `POST /api_keys` - Authorization: Use a Personal API Key. - Body: Must include a `key_name`. Example request: ```bash curl https://console.neon.tech/api/v2/api_keys \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $PERSONAL_API_KEY" \ -d '{"key_name": "my-new-key"}' ``` Example response: ```json { "id": 2291515, "key": "napi_9tlr13774gizljemrr133j5koy3bmsphj8iu38mh0yjl9q4r1b0jy2wuhhuxouzr", "name": "my-new-key", "created_at": "2025-09-10T09:47:59Z", "created_by": "487de658-08ba-4363-b387-86d18b9ad1c8" } ``` ### Revoke an API key - Endpoint: `DELETE /api_keys/{key_id}` - Authorization: Use a Personal API Key. Example request: ```bash curl -X DELETE \ 'https://console.neon.tech/api/v2/api_keys/2291515' \ -H "Authorization: Bearer $PERSONAL_API_KEY" ``` Example response: ```json { "id": 2291515, "name": "my-new-key", "created_at": "2025-09-10T09:47:59Z", "created_by": "487de658-08ba-4363-b387-86d18b9ad1c8", "last_used_at": "2025-09-10T09:53:01Z", "last_used_from_addr": "2405:201:c01f:7013:d962:2b4f:2740:9750", "revoked": true } ``` ```` ## Neon API rules: Manage operations Save the following content to a file named `neon-api-operations.mdc` in your AI tool's rules directory. ````md --- description: Use these rules to manage and monitor long-running operations in Neon, such as branch creation and compute management.
alwaysApply: false --- ## Overview This document outlines the rules for managing and monitoring long-running operations in Neon, including branch creation and compute management. ## Operations An operation is an action performed by the Neon Control Plane (e.g., `create_branch`, `start_compute`). When using the API programmatically, it is crucial to monitor the status of long-running operations to ensure one has completed before starting another that depends on it. Operations older than 6 months may be deleted from Neon's systems. ### List operations 1. Action: Retrieves a list of operations for the specified Neon project. The number of operations can be large, so pagination is recommended. 2. Endpoint: `GET /projects/{project_id}/operations` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project whose operations you want to list. 4. Query Parameters: - `limit` (integer, optional): The number of operations to return in the response. Must be between 1 and 1000. - `cursor` (string, optional): The cursor value from a previous response to fetch the next page of operations. 5. Procedure: - Make an initial request with a `limit` to get the first page of results. - The response will contain a `pagination.cursor` value. - To get the next page, make a subsequent request including both the `limit` and the `cursor` from the previous response. Example request ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/operations' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response ```json { "operations": [ { "id": "639f7f73-0b76-4749-a767-2d3c627ca5a6", "project_id": "hidden-river-50598307", "branch_id": "br-long-feather-adpbgzlx", "endpoint_id": "ep-round-morning-adtpn2oc", "action": "apply_config", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:15:23Z", "updated_at": "2025-09-10T12:15:23Z", "total_duration_ms": 87 }, { "id": "b5a7882b-a5b3-4292-ad27-bffe733feae4", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "apply_config", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:15:23Z", "updated_at": "2025-09-10T12:15:23Z", "total_duration_ms": 49 }, { "id": "36a1cba0-97f1-476d-af53-d9e0d3a3606d", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "start_compute", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:15:05Z", "total_duration_ms": 913 }, { "id": "409c35ef-cbc3-4f1b-a4ca-f2de319f5360", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "action": "create_branch", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:15:04Z", "total_duration_ms": 136 }, { "id": "274e240f-e2fb-4719-b796-c1ab7c4ae91c", "project_id": "hidden-river-50598307", "branch_id": "br-long-feather-adpbgzlx", "endpoint_id": "ep-round-morning-adtpn2oc", "action": "start_compute", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:15:03Z", "total_duration_ms": 4843 }, { "id": "22ef6fbd-21c5-4cdb-9825-b0f9afddbb0d", "project_id": "hidden-river-50598307", "branch_id": "br-long-feather-adpbgzlx", "action": "create_timeline", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:14:58Z", 
"updated_at": "2025-09-10T12:15:01Z", "total_duration_ms": 3096 } ], "pagination": { "cursor": "2025-09-10T12:14:58.848485Z" } } ``` ### Retrieve operation details 1. Action: Retrieves the details and status of a single, specified operation. The `operation_id` is found in the response body of the initial API call that initiated it, or by listing operations. 2. Endpoint: `GET /projects/{project_id}/operations/{operation_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project where the operation occurred. - `operation_id` (UUID, required): The unique identifier of the operation. This ID is returned in the response body of the API call that initiated the operation. Example request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/operations/274e240f-e2fb-4719-b796-c1ab7c4ae91c' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response: ```json { "operation": { "id": "274e240f-e2fb-4719-b796-c1ab7c4ae91c", "project_id": "hidden-river-50598307", "branch_id": "br-long-feather-adpbgzlx", "endpoint_id": "ep-round-morning-adtpn2oc", "action": "start_compute", "status": "finished", "failures_count": 0, "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:15:03Z", "total_duration_ms": 4843 } } ``` ```` ## Neon API rules: Manage projects Save the following content to a file named `neon-api-projects.mdc` in your AI tool's rules directory. ````md --- description: Use these rules to manage Neon projects programmatically, including creating, listing, updating, and deleting projects. alwaysApply: false --- ## Overview This document outlines the rules for managing Neon projects programmatically. It covers creation, retrieval, updates, and deletion. ## Manage projects ### List projects 1. Action: Retrieves a list of all projects accessible to the account associated with the API key. This is the primary method for obtaining `project_id` values required for other API calls. 2. Endpoint: `GET /projects` 3. Query Parameters: - `limit` (optional, integer, default: 10): Specifies the number of projects to return, from 1 to 400. - `cursor` (optional, string): Used for pagination. Provide the `cursor` value from a previous response to fetch the next set of projects. - `search` (optional, string): Filters projects by a partial match on the project `name` or `id`. - `org_id` (optional, string): Filters projects by a specific organization ID. 4. When iterating through all projects, use a combination of the `limit` and `cursor` parameters to handle pagination correctly. 
Example request: ```bash # Retrieve the first 10 projects curl 'https://console.neon.tech/api/v2/projects?limit=10' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response: ```json { "projects": [ { "id": "old-fire-32990194", "platform_id": "aws", "region_id": "aws-ap-southeast-1", "name": "old-fire-32990194", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 2, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [5], "start_time": "19:00", "end_time": "20:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "ap-southeast-1.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "active_time": 0, "cpu_used_sec": 0, "creation_source": "console", "created_at": "2025-09-10T06:58:33Z", "updated_at": "2025-09-10T06:58:39Z", "synthetic_storage_size": 0, "quota_reset_at": "2025-10-01T00:00:00Z", "owner_id": "org-royal-sun-91776391", "compute_last_active_at": "2025-09-10T06:58:38Z", "org_id": "org-royal-sun-91776391", "history_retention_seconds": 86400 } ], "pagination": { "cursor": "old-fire-32990194" }, "applications": {}, "integrations": {} } ``` ### Create project 1. Action: Creates a new Neon project. You can specify a wide range of settings at creation time, including the region, Postgres version, default branch and compute configurations, and security settings. 2. Endpoint: `POST /projects` 3. Body Parameters: The request body must contain a top-level `project` object with the following nested attributes: `project` (object, required): The main container for all project settings. - `name` (string, optional): A descriptive name for the project (1-256 characters). If omitted, the project name will be identical to its generated ID. - `pg_version` (integer, optional): The major Postgres version. Defaults to `17`. Supported versions: 14, 15, 16, 17, 18. - `region_id` (string, optional): The identifier for the region where the project will be created (e.g., `aws-us-east-1`). - `org_id` (string, optional): The ID of an organization to which the project will belong. Required if using an Organization API key. - `store_passwords` (boolean, optional): Whether to store role passwords in Neon. Storing passwords is required for features like the SQL Editor and integrations. - `history_retention_seconds` (integer, optional): The duration in seconds (0 to 2,592,000) to retain project history for features like Point-in-Time Restore. Defaults to 86400 (1 day). - `provisioner` (string, optional): The compute provisioner. Specify `k8s-neonvm` to enable Autoscaling. Allowed values: `k8s-pod`, `k8s-neonvm`. - `default_endpoint_settings` (object, optional): Default settings for new compute endpoints created in this project. - `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum value is `0.25`. - `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum value is `0.25`. - `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before a compute is suspended. Ranges from -1 (never suspend) to 604800 (1 week). A value of `0` uses the default of 300 seconds (5 minutes). - `settings` (object, optional): Project-wide settings. 
- `quota` (object, optional): Per-project consumption quotas. A zero or empty value means "unlimited". - `active_time_seconds` (integer, optional): Wall-clock time allowance for active computes. - `compute_time_seconds` (integer, optional): CPU seconds allowance. - `written_data_bytes` (integer, optional): Data written allowance. - `data_transfer_bytes` (integer, optional): Data transferred allowance. - `logical_size_bytes` (integer, optional): Logical data size limit per branch. - `allowed_ips` (object, optional): Configures the IP Allowlist. - `ips` (array of strings, optional): A list of allowed IP addresses or CIDR ranges. - `protected_branches_only` (boolean, optional): If `true`, the IP allowlist applies only to protected branches. - `enable_logical_replication` (boolean, optional): Sets `wal_level=logical`. - `maintenance_window` (object, optional): The time period for scheduled maintenance. - `weekdays` (array of integers, required if `maintenance_window` is set): Days of the week (1=Monday, 7=Sunday). - `start_time` (string, required if `maintenance_window` is set): Start time in "HH:MM" UTC format. - `end_time` (string, required if `maintenance_window` is set): End time in "HH:MM" UTC format. - `branch` (object, optional): Configuration for the project's default branch. - `name` (string, optional): The name for the default branch. Defaults to `main`. - `role_name` (string, optional): The name for the default role. Defaults to `{database_name}_owner`. - `database_name` (string, optional): The name for the default database. Defaults to `neondb`. Example request ```bash curl -X POST 'https://console.neon.tech/api/v2/projects' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "project": { "name": "my-new-api-project", "pg_version": 17 } }' ``` Example response ```json { "project": { "data_storage_bytes_hour": 0, "data_transfer_bytes": 0, "written_data_bytes": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "cpu_used_sec": 0, "id": "sparkling-hill-99143322", "platform_id": "aws", "region_id": "aws-us-west-2", "name": "my-new-api-project", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [5], "start_time": "07:00", "end_time": "08:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-west-2.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z", "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "org-royal-sun-91776391", "org_id": "org-royal-sun-91776391" }, "connection_uris": [ { "connection_uri": "postgresql://neondb_owner:npg_N67FDMtGvJke@ep-round-unit-afbn7qv4.c-2.us-west-2.aws.neon.tech/neondb?sslmode=require", "connection_parameters": { "database": "neondb", "password": "npg_N67FDMtGvJke", "role": "neondb_owner", "host": "ep-round-unit-afbn7qv4.c-2.us-west-2.aws.neon.tech", "pooler_host": "ep-round-unit-afbn7qv4-pooler.c-2.us-west-2.aws.neon.tech" } } ], "roles": [ { "branch_id": 
"br-green-mode-afe3fl9y", "name": "neondb_owner", "password": "npg_N67FDMtGvJke", "protected": false, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z" } ], "databases": [ { "id": 6677853, "branch_id": "br-green-mode-afe3fl9y", "name": "neondb", "owner_name": "neondb_owner", "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z" } ], "operations": [ { "id": "08b9367d-6918-4cd5-b4a6-41c8fd984b7e", "project_id": "sparkling-hill-99143322", "branch_id": "br-green-mode-afe3fl9y", "action": "create_timeline", "status": "running", "failures_count": 0, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z", "total_duration_ms": 0 }, { "id": "c6917f04-5cd3-48a2-97c9-186b1d9729f0", "project_id": "sparkling-hill-99143322", "branch_id": "br-green-mode-afe3fl9y", "endpoint_id": "ep-round-unit-afbn7qv4", "action": "start_compute", "status": "scheduling", "failures_count": 0, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z", "total_duration_ms": 0 } ], "branch": { "id": "br-green-mode-afe3fl9y", "project_id": "sparkling-hill-99143322", "name": "main", "current_state": "init", "pending_state": "ready", "state_changed_at": "2025-09-10T07:58:16Z", "creation_source": "console", "primary": true, "default": true, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z", "init_source": "parent-data" }, "endpoints": [ { "host": "ep-round-unit-afbn7qv4.c-2.us-west-2.aws.neon.tech", "id": "ep-round-unit-afbn7qv4", "project_id": "sparkling-hill-99143322", "branch_id": "br-green-mode-afe3fl9y", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "region_id": "aws-us-west-2", "type": "read_write", "current_state": "init", "pending_state": "active", "settings": {}, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "creation_source": "console", "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:16Z", "proxy_host": "c-2.us-west-2.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" } ] } ``` ### Retrieve project details 1. Action: Retrieves detailed information about a single, specific project. 2. Endpoint: `GET /projects/{project_id}` 3. Prerequisite: You must have the `project_id` of the project you wish to retrieve. 4. Path Parameters: - `project_id` (required, string): The unique identifier of the project. 
Example request: ```bash curl 'https://console.neon.tech/api/v2/projects/sparkling-hill-99143322' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response ```json { "project": { "data_storage_bytes_hour": 0, "data_transfer_bytes": 0, "written_data_bytes": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "cpu_used_sec": 0, "id": "sparkling-hill-99143322", "platform_id": "aws", "region_id": "aws-us-west-2", "name": "my-new-api-project", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [5], "start_time": "07:00", "end_time": "08:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-west-2.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T07:58:25Z", "synthetic_storage_size": 0, "consumption_period_start": "2025-09-10T06:58:15Z", "consumption_period_end": "2025-10-01T00:00:00Z", "owner_id": "org-royal-sun-91776391", "owner": { "email": "", "name": "My Personal Account", "branches_limit": 10, "subscription_type": "free_v3" }, "compute_last_active_at": "2025-09-10T07:58:21Z", "org_id": "org-royal-sun-91776391" } } ``` ### Update a project 1. Action: Updates the settings of a specified project. This endpoint is used to modify a wide range of project attributes after creation, such as its name, default compute settings, security policies, and maintenance schedules. 2. Endpoint: `PATCH /projects/{project_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project to update. 4. Body Parameters: The request body must contain a top-level `project` object with the attributes to be updated. `project` (object, required): The main container for the settings you want to modify. - `name` (string, optional): A new descriptive name for the project. - `history_retention_seconds` (integer, optional): The duration in seconds (0 to 2,592,000) to retain project history. - `default_endpoint_settings` (object, optional): New default settings for compute endpoints created in this project. - `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum `0.25`. - `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum `0.25`. - `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before a compute is suspended. Ranges from -1 (never suspend) to 604800 (1 week). A value of `0` uses the default of 300 seconds (5 minutes). - `settings` (object, optional): Project-wide settings to update. - `quota` (object, optional): Per-project consumption quotas. - `active_time_seconds` (integer, optional): Wall-clock time allowance for active computes. - `compute_time_seconds` (integer, optional): CPU seconds allowance. - `written_data_bytes` (integer, optional): Data written allowance. - `data_transfer_bytes` (integer, optional): Data transferred allowance. - `logical_size_bytes` (integer, optional): Logical data size limit per branch. - `allowed_ips` (object, optional): Modifies the IP Allowlist. 
- `ips` (array of strings, optional): The new list of allowed IP addresses or CIDR ranges. - `protected_branches_only` (boolean, optional): If `true`, the IP allowlist applies only to protected branches. - `enable_logical_replication` (boolean, optional): Sets `wal_level=logical`. This is irreversible. - `maintenance_window` (object, optional): The time period for scheduled maintenance. - `weekdays` (array of integers, required if `maintenance_window` is set): Days of the week (1=Monday, 7=Sunday). - `start_time` (string, required if `maintenance_window` is set): Start time in "HH:MM" UTC format. - `end_time` (string, required if `maintenance_window` is set): End time in "HH:MM" UTC format. - `block_public_connections` (boolean, optional): If `true`, disallows connections from the public internet. - `block_vpc_connections` (boolean, optional): If `true`, disallows connections from VPC endpoints. - `audit_log_level` (string, optional): Sets the audit log level. Allowed values: `base`, `extended`, `full`. - `hipaa` (boolean, optional): Toggles HIPAA compliance settings. - `preload_libraries` (object, optional): Libraries to preload into compute instances. - `use_defaults` (boolean, optional): Toggles the use of default libraries. - `enabled_libraries` (array of strings, optional): A list of specific libraries to enable. Example request ```bash curl -X PATCH 'https://console.neon.tech/api/v2/projects/sparkling-hill-99143322' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "project": { "name": "updated-project-name" } }' ``` Example response ```json { "project": { "data_storage_bytes_hour": 0, "data_transfer_bytes": 0, "written_data_bytes": 29060360, "compute_time_seconds": 79, "active_time_seconds": 308, "cpu_used_sec": 79, "id": "sparkling-hill-99143322", "platform_id": "aws", "region_id": "aws-us-west-2", "name": "updated-project-name", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [5], "start_time": "07:00", "end_time": "08:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-west-2.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T08:08:23Z", "synthetic_storage_size": 0, "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "org-royal-sun-91776391", "compute_last_active_at": "2025-09-10T07:58:21Z" }, "operations": [] } ``` ### Delete project 1. Action: Permanently deletes a project and all of its associated resources, including all branches, computes, databases, and roles. 2. Endpoint: `DELETE /projects/{project_id}` 3. Prerequisite: You must have the `project_id` of the project you wish to delete. 4. Warning: This is a destructive action that cannot be undone. It deletes all data, databases, and resources in the project. Proceed with extreme caution and confirm with the user before executing this operation. 5. Path Parameters: - `project_id` (required, string): The unique identifier of the project to be deleted. 
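Given the warning above, a script or agent might gate this call behind an explicit confirmation step; one possible sketch (the confirmation style is illustrative, and the project ID is a placeholder):

```bash
# Require the exact project ID to be typed back before deleting.
project_id="sparkling-hill-99143322"   # placeholder
read -r -p "Type the project ID to confirm deletion: " answer
if [ "$answer" = "$project_id" ]; then
  curl -X DELETE "https://console.neon.tech/api/v2/projects/$project_id" \
    -H "Accept: application/json" \
    -H "Authorization: Bearer $NEON_API_KEY"
else
  echo "Confirmation did not match; aborting." >&2
fi
```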
Example request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/sparkling-hill-99143322' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response: ```json { "project": { "data_storage_bytes_hour": 0, "data_transfer_bytes": 0, "written_data_bytes": 29060360, "compute_time_seconds": 79, "active_time_seconds": 308, "cpu_used_sec": 79, "id": "sparkling-hill-99143322", "platform_id": "aws", "region_id": "aws-us-west-2", "name": "updated-project-name", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [5], "start_time": "07:00", "end_time": "08:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-west-2.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-09-10T07:58:16Z", "updated_at": "2025-09-10T08:08:23Z", "synthetic_storage_size": 0, "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "org-royal-sun-91776391", "compute_last_active_at": "2025-09-10T07:58:21Z", "org_id": "org-royal-sun-91776391" } } ``` ### Retrieve connection URI 1. Action: Retrieves a ready-to-use connection URI for a specific database within a project. 2. Endpoint: `GET /projects/{project_id}/connection_uri` 3. Prerequisites: You must know the `project_id`, `database_name`, and `role_name`. 4. Parameters: - `project_id` (path, required): The unique identifier of the project. - `database_name` (query, required): The name of the target database. - `role_name` (query, required): The role to use for the connection. - `branch_id` (query, optional): The branch ID. Defaults to the project's primary branch if not specified. - `pooled` (query, optional, boolean): If set to `false`, returns a direct connection URI instead of a pooled one. Defaults to `true`. - `endpoint_id` (query, optional): The specific endpoint ID to connect to. Defaults to the `read_write` endpoint ID associated with the `branch_id` if not specified. Example request: ```bash curl 'https://console.neon.tech/api/v2/projects/old-fire-32990194/connection_uri?database_name=neondb&role_name=neondb_owner' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response: ```json { "uri": "postgresql://neondb_owner:npg_IDNnorOST71P@ep-shiny-morning-a1bfdvjs-pooler.ap-southeast-1.aws.neon.tech/neondb?channel_binding=require&sslmode=require" } ``` ```` ## Neon API Rules: Manage Branches Save the following content to a file named `neon-api-branches.mdc` in your AI tool's rules directory. ````md --- description: This section provides detailed rules for managing branches within a Neon project. alwaysApply: false --- ## Overview This document outlines the rules for managing branches in a Neon project using the Neon API. ## Manage branches ### Create branch 1. Action: Creates a new branch within a specified project. By default, a branch is created from the project's default branch, but you can specify a parent branch, a point-in-time (LSN or timestamp), and attach compute endpoints. 2.
Endpoint: `POST /projects/{project_id}/branches` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project where the branch will be created. 4. Body Parameters: The request body is optional. If provided, it can contain `endpoints` and/or `branch` objects. `endpoints` (array of objects, optional): A list of compute endpoints to create and attach to the new branch. - `type` (string, required): The endpoint type. Allowed values: `read_write`, `read_only`. - `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum value is `0.25`. - `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum value is `0.25`. - `provisioner` (string, optional): The compute provisioner. Specify `k8s-neonvm` to enable Autoscaling. Allowed values: `k8s-pod`, `k8s-neonvm`. - `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before a compute is suspended. Ranges from -1 (never suspend) to 604800 (1 week). A value of `0` uses the default of 300 seconds (5 minutes). `branch` (object, optional): Specifies the properties of the new branch. - `name` (string, optional): A name for the branch (max 256 characters). If omitted, a name is auto-generated. - `parent_id` (string, optional): The ID of the parent branch. If omitted, the project's default branch is used as the parent. - `parent_lsn` (string, optional): A Log Sequence Number (LSN) from the parent branch to create the new branch from a specific point-in-time. - `parent_timestamp` (string, optional): An ISO 8601 timestamp (e.g., `2025-08-26T12:00:00Z`) to create the branch from a specific point-in-time. - `protected` (boolean, optional): If `true`, the branch is created as a protected branch. - `init_source` (string, optional): The source for branch initialization. `parent-data` (default) copies schema and data. `schema-only` creates a new root branch with only the schema from the specified parent. - `expires_at` (string, optional): An RFC 3339 timestamp for when the branch should be automatically deleted (e.g., `2025-06-09T18:02:16Z`). 
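Branch creation is asynchronous: the response includes `operations` that may still be `running`. A minimal sketch that creates a branch and waits for its operations to finish, per the operations rules (assumes `jq`; the project ID and branch name are placeholders):

```bash
project_id="hidden-river-50598307"   # placeholder
resp=$(curl -s "https://console.neon.tech/api/v2/projects/$project_id/branches" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"branch": {"name": "ci-check"}}')   # -d makes this a POST
for op_id in $(echo "$resp" | jq -r '.operations[].id'); do
  while :; do
    status=$(curl -s "https://console.neon.tech/api/v2/projects/$project_id/operations/$op_id" \
      -H "Authorization: Bearer $NEON_API_KEY" | jq -r '.operation.status')
    [ "$status" = "finished" ] && break
    sleep 2   # a production script would also time out and handle failed operations
  done
done
echo "Branch ready: $(echo "$resp" | jq -r '.branch.id')"
```

A real integration would also bail out if an operation reports a failure rather than looping indefinitely.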
Example: Create a branch from a specific parent with a read-write compute ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "endpoints": [ { "type": "read_write" } ], "branch": { "parent_id": "br-super-wildflower-adniii9u", "name": "my-new-feature-branch" } }' ``` Example response ```json { "branch": { "id": "br-damp-glitter-adqd4hk5", "project_id": "hidden-river-50598307", "parent_id": "br-super-wildflower-adniii9u", "parent_lsn": "0/1A7F730", "name": "my-new-feature-branch", "current_state": "init", "pending_state": "ready", "state_changed_at": "2025-09-10T16:45:52Z", "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:45:52Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" }, "endpoints": [ { "host": "ep-raspy-glade-ad8e3gvy.c-2.us-east-1.aws.neon.tech", "id": "ep-raspy-glade-ad8e3gvy", "project_id": "hidden-river-50598307", "branch_id": "br-damp-glitter-adqd4hk5", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 2, "region_id": "aws-us-east-1", "type": "read_write", "current_state": "init", "pending_state": "active", "settings": {}, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "creation_source": "console", "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:45:52Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" } ], "operations": [ { "id": "cf5d0923-fc13-4125-83d5-8fc31c6b0214", "project_id": "hidden-river-50598307", "branch_id": "br-damp-glitter-adqd4hk5", "action": "create_branch", "status": "running", "failures_count": 0, "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:45:52Z", "total_duration_ms": 0 }, { "id": "e3c60b62-00c8-4ad4-9cd1-cdc3e8fd8154", "project_id": "hidden-river-50598307", "branch_id": "br-damp-glitter-adqd4hk5", "endpoint_id": "ep-raspy-glade-ad8e3gvy", "action": "start_compute", "status": "scheduling", "failures_count": 0, "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:45:52Z", "total_duration_ms": 0 } ], "roles": [ { "branch_id": "br-damp-glitter-adqd4hk5", "name": "neondb_owner", "protected": false, "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:14:58Z" } ], "databases": [ { "id": 9554148, "branch_id": "br-damp-glitter-adqd4hk5", "name": "neondb", "owner_name": "neondb_owner", "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:14:58Z" } ], "connection_uris": [ { "connection_uri": "postgresql://neondb_owner:npg_EwcS9IOgFfb7@ep-raspy-glade-ad8e3gvy.c-2.us-east-1.aws.neon.tech/neondb?sslmode=require", "connection_parameters": { "database": "neondb", "password": "npg_EwcS9IOgFfb7", "role": "neondb_owner", "host": "ep-raspy-glade-ad8e3gvy.c-2.us-east-1.aws.neon.tech", "pooler_host": "ep-raspy-glade-ad8e3gvy-pooler.c-2.us-east-1.aws.neon.tech" } } ] } ``` ### List branches 1. Action: Retrieves a list of branches for the specified project. Supports filtering, sorting, and pagination. 2. Endpoint: `GET /projects/{project_id}/branches` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. 4. 
Query Parameters: - `search` (string, optional): Filters branches by a partial match on name or ID. - `sort_by` (string, optional): The field to sort by. Allowed values: `name`, `created_at`, `updated_at`. Defaults to `updated_at`. - `sort_order` (string, optional): The sort order. Allowed values: `asc`, `desc`. Defaults to `desc`. - `limit` (integer, optional): The number of branches to return (1 to 10000). - `cursor` (string, optional): The cursor from a previous response for pagination. Example: List all branches sorted by creation date ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches?sort_by=created_at&sort_order=asc' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example response ```json { "branches": [ { "id": "br-long-feather-adpbgzlx", "project_id": "hidden-river-50598307", "name": "production", "current_state": "ready", "state_changed_at": "2025-09-10T12:15:01Z", "logical_size": 30785536, "creation_source": "console", "primary": true, "default": true, "protected": false, "cpu_used_sec": 82, "compute_time_seconds": 82, "active_time_seconds": 316, "written_data_bytes": 29060360, "data_transfer_bytes": 0, "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:35:33Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" }, { "id": "br-super-wildflower-adniii9u", "project_id": "hidden-river-50598307", "parent_id": "br-long-feather-adpbgzlx", "parent_lsn": "0/1A33BC8", "parent_timestamp": "2025-09-10T12:15:03Z", "name": "development", "current_state": "ready", "state_changed_at": "2025-09-10T12:15:04Z", "logical_size": 30842880, "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 78, "compute_time_seconds": 78, "active_time_seconds": 312, "written_data_bytes": 310120, "data_transfer_bytes": 0, "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:35:33Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" }, { "id": "br-damp-glitter-adqd4hk5", "project_id": "hidden-river-50598307", "parent_id": "br-super-wildflower-adniii9u", "parent_lsn": "0/1A7F730", "parent_timestamp": "2025-09-10T12:15:05Z", "name": "my-new-feature-branch", "current_state": "ready", "state_changed_at": "2025-09-10T16:45:52Z", "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:45:53Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" } ], "annotations": { "br-long-feather-adpbgzlx": { "object": { "type": "console/branch", "id": "br-long-feather-adpbgzlx" }, "value": { "environment": "production" }, "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:14:58Z" }, "br-super-wildflower-adniii9u": { "object": { "type": "console/branch", "id": "br-super-wildflower-adniii9u" }, "value": { "environment": "development" }, "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:15:04Z" } }, "pagination": { "sort_by": "created_at", "sort_order": "ASC" } } ``` ### Retrieve branch details 1. Action: Retrieves detailed information about a specific branch, including its parent, creation timestamp, and state. 2. Endpoint: `GET /projects/{project_id}/branches/{branch_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. 
- `branch_id` (string, required): The unique identifier of the branch. Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "branch": { "id": "br-super-wildflower-adniii9u", "project_id": "hidden-river-50598307", "parent_id": "br-long-feather-adpbgzlx", "parent_lsn": "0/1A33BC8", "parent_timestamp": "2025-09-10T12:15:03Z", "name": "development", "current_state": "ready", "state_changed_at": "2025-09-10T12:15:04Z", "logical_size": 30842880, "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 78, "compute_time_seconds": 78, "active_time_seconds": 312, "written_data_bytes": 310120, "data_transfer_bytes": 0, "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:35:33Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" }, "annotation": { "object": { "type": "console/branch", "id": "br-super-wildflower-adniii9u" }, "value": { "environment": "development" }, "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:15:04Z" } } ``` ### Update branch 1. Action: Updates the properties of a specified branch, such as its name, protection status, or expiration time. 2. Endpoint: `PATCH /projects/{project_id}/branches/{branch_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch to update. 4. Body Parameters: `branch` (object, required): The container for the branch attributes to update. - `name` (string, optional): A new name for the branch (max 256 characters). - `protected` (boolean, optional): Set to `true` to protect the branch or `false` to unprotect it. - `expires_at` (string or null, optional): Set a new RFC 3339 expiration timestamp or `null` to remove the expiration. Example: Change branch name: ```bash curl -X 'PATCH' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-damp-glitter-adqd4hk5' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "branch": { "name": "updated-branch-name" } }' ``` Example response: ```json { "branch": { "id": "br-damp-glitter-adqd4hk5", "project_id": "hidden-river-50598307", "parent_id": "br-super-wildflower-adniii9u", "parent_lsn": "0/1A7F730", "parent_timestamp": "2025-09-10T12:15:05Z", "name": "updated-branch-name", "current_state": "ready", "state_changed_at": "2025-09-10T16:45:52Z", "logical_size": 30842880, "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 68, "compute_time_seconds": 68, "active_time_seconds": 268, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:55:30Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" }, "operations": [] } ``` ### Delete branch 1. Action: Deletes the specified branch from a project. This action will also place all associated compute endpoints into an idle state, breaking any active client connections. 2. Endpoint: `DELETE /projects/{project_id}/branches/{branch_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch to delete. 4. 
Constraints: - You cannot delete a project's root or default branch. - You cannot delete a branch that has child branches. You must delete all child branches first. Example Request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "branch": { "id": "br-damp-glitter-adqd4hk5", "project_id": "hidden-river-50598307", "parent_id": "br-super-wildflower-adniii9u", "parent_lsn": "0/1A7F730", "parent_timestamp": "2025-09-10T12:15:05Z", "name": "updated-branch-name", "current_state": "ready", "pending_state": "storage_deleted", "state_changed_at": "2025-09-10T16:45:52Z", "logical_size": 30842880, "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 68, "compute_time_seconds": 68, "active_time_seconds": 268, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-09-10T16:45:52Z", "updated_at": "2025-09-10T16:59:35Z", "created_by": { "name": "", "image": "" }, "init_source": "parent-data" }, "operations": [ { "id": "a1d314dc-2da2-421d-8b9a-6dc9fb5bb440", "project_id": "hidden-river-50598307", "branch_id": "br-damp-glitter-adqd4hk5", "endpoint_id": "ep-raspy-glade-ad8e3gvy", "action": "suspend_compute", "status": "running", "failures_count": 0, "created_at": "2025-09-10T16:59:35Z", "updated_at": "2025-09-10T16:59:35Z", "total_duration_ms": 0 }, { "id": "668b5854-8951-458c-a567-d265b4cadabe", "project_id": "hidden-river-50598307", "branch_id": "br-damp-glitter-adqd4hk5", "action": "delete_timeline", "status": "scheduling", "failures_count": 0, "created_at": "2025-09-10T16:59:35Z", "updated_at": "2025-09-10T16:59:35Z", "total_duration_ms": 0 } ] } ``` ### List branch endpoints 1. Action: Retrieves a list of all compute endpoints that are associated with a specific branch. 2. Endpoint: `GET /projects/{project_id}/branches/{branch_id}/endpoints` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch whose endpoints you want to list. 4. A branch can have one `read_write` compute endpoint and multiple `read_only` endpoints. This method returns an array of all endpoints currently attached to the specified branch. 
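Since a branch has at most one `read_write` endpoint, a common scripting task is extracting its host to build a connection string; a minimal sketch (assumes `jq`; the IDs are placeholders):

```bash
# Print the host of the branch's read-write endpoint, if any.
curl -s "https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/endpoints" \
  -H "Authorization: Bearer $NEON_API_KEY" |
  jq -r '.endpoints[] | select(.type == "read_write") | .host'
```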
Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/endpoints' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "endpoints": [ { "host": "ep-dry-cloud-admel5xy.c-2.us-east-1.aws.neon.tech", "id": "ep-dry-cloud-admel5xy", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 2, "region_id": "aws-us-east-1", "type": "read_only", "current_state": "active", "settings": { "pg_settings": {} }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "last_active": "2000-01-01T00:00:00Z", "creation_source": "console", "created_at": "2025-09-10T17:32:26Z", "updated_at": "2025-09-10T17:32:26Z", "started_at": "2025-09-10T17:32:26Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm", "compute_release_version": "9509" }, { "host": "ep-ancient-brook-ad5ea04d.c-2.us-east-1.aws.neon.tech", "id": "ep-ancient-brook-ad5ea04d", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 1, "region_id": "aws-us-east-1", "type": "read_write", "current_state": "idle", "settings": { "pg_settings": {} }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "last_active": "2025-09-10T12:15:06Z", "creation_source": "console", "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-10T12:35:33Z", "suspended_at": "2025-09-10T12:20:22Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" } ] } ``` ### Create database 1. Action: Creates a new database within a specified branch. A branch can contain multiple databases. 2. Endpoint: `POST /projects/{project_id}/branches/{branch_id}/databases` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch where the database will be created. 4. Body Parameters: `database` (object, required): The container for the new database's properties. - `name` (string, required): The name for the new database. - `owner_name` (string, required): The name of an existing role that will own the database. Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/databases' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "database": { "name": "my_new_app_db", "owner_name": "app_owner_role" } }' ``` Example Response: ```json { "database": { "id": 9561265, "branch_id": "br-super-wildflower-adniii9u", "name": "my_new_app_db", "owner_name": "app_owner_role", "created_at": "2025-09-10T17:50:07Z", "updated_at": "2025-09-10T17:50:07Z" }, "operations": [ { "id": "282aa443-d0a1-412c-8d09-8817bb8bbcdb", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-09-10T17:50:07Z", "updated_at": "2025-09-10T17:50:07Z", "total_duration_ms": 0 } ] } ``` ### List databases 1. Action: Retrieves a list of all databases within a specified branch. 2. 
Endpoint: `GET /projects/{project_id}/branches/{branch_id}/databases` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/databases' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "databases": [ { "id": 9512268, "branch_id": "br-super-wildflower-adniii9u", "name": "neondb", "owner_name": "neondb_owner", "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:14:58Z" }, { "id": 9561265, "branch_id": "br-super-wildflower-adniii9u", "name": "my_new_app_db", "owner_name": "app_owner_role", "created_at": "2025-09-10T17:50:07Z", "updated_at": "2025-09-10T17:50:07Z" } ] } ``` ### Retrieve database details 1. Action: Retrieves detailed information about a specific database within a branch. 2. Endpoint: `GET /projects/{project_id}/branches/{branch_id}/databases/{database_name}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. - `database_name` (string, required): The name of the database. Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/databases/my_new_app_db' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "database": { "id": 9561265, "branch_id": "br-super-wildflower-adniii9u", "name": "my_new_app_db", "owner_name": "app_owner_role", "created_at": "2025-09-10T17:50:07Z", "updated_at": "2025-09-10T17:50:07Z" } } ``` ### Update database 1. Action: Updates the properties of a specified database, such as its name or owner. 2. Endpoint: `PATCH /projects/{project_id}/branches/{branch_id}/databases/{database_name}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. - `database_name` (string, required): The current name of the database to update. 4. Body Parameters: `database` (object, required): The container for the database attributes to update. - `name` (string, optional): A new name for the database. - `owner_name` (string, optional): The name of a different existing role to become the new owner. Example: Change the owner of a database ```bash curl -X 'PATCH' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/databases/my_new_app_db' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "database": { "owner_name": "neondb_owner" } }' ``` Example Response: ```json { "database": { "id": 9561265, "branch_id": "br-super-wildflower-adniii9u", "name": "my_new_app_db", "owner_name": "neondb_owner", "created_at": "2025-09-10T17:50:07Z", "updated_at": "2025-09-10T17:50:07Z" }, "operations": [ { "id": "f9db8971-2d71-4b3c-84fa-967b99150cb1", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-09-10T18:03:58Z", "updated_at": "2025-09-10T18:03:58Z", "total_duration_ms": 0 } ] } ``` ### Delete database 1. Action: Deletes the specified database from a branch. 
This action is permanent and cannot be undone. 2. Endpoint: `DELETE /projects/{project_id}/branches/{branch_id}/databases/{database_name}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. - `database_name` (string, required): The name of the database to delete. Example Request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/databases/my_new_app_db' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "database": { "id": 9561265, "branch_id": "br-super-wildflower-adniii9u", "name": "my_new_app_db", "owner_name": "neondb_owner", "created_at": "2025-09-10T17:50:07Z", "updated_at": "2025-09-10T17:50:07Z" }, "operations": [ { "id": "f2a5fb2d-688c-4851-905f-781f6a338f2f", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-09-10T18:05:14Z", "updated_at": "2025-09-10T18:05:14Z", "total_duration_ms": 0 } ] } ``` ### Create role 1. Action: Creates a new Postgres role in a specified branch. This action may drop existing connections to the active compute endpoint. 2. Endpoint: `POST /projects/{project_id}/branches/{branch_id}/roles` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch where the role will be created. 4. Body Parameters: `role` (object, required): The container for the new role's properties. - `name` (string, required): The name for the new role. Cannot exceed 63 bytes in length. - `no_login` (boolean, optional): If `true`, creates a role that cannot be used to log in. Defaults to `false`. Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "new_app_user" } }' ``` Example Response: ```json { "role": { "branch_id": "br-super-wildflower-adniii9u", "name": "new_app_user", "password": "npg_BYgz0val8xuR", "protected": false, "created_at": "2025-09-11T05:50:21Z", "updated_at": "2025-09-11T05:50:21Z" }, "operations": [ { "id": "65d049fa-b659-4d2b-8c02-ad1ebeb552fc", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-09-11T05:50:21Z", "updated_at": "2025-09-11T05:50:21Z", "total_duration_ms": 0 } ] } ``` ### List roles 1. Action: Retrieves a list of all Postgres roles from the specified branch. 2. Endpoint: `GET /projects/{project_id}/branches/{branch_id}/roles` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. 
Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "roles": [ { "branch_id": "br-super-wildflower-adniii9u", "name": "neondb_owner", "protected": false, "created_at": "2025-09-10T12:14:58Z", "updated_at": "2025-09-10T12:14:58Z" }, { "branch_id": "br-super-wildflower-adniii9u", "name": "new_app_user", "protected": false, "created_at": "2025-09-11T05:50:21Z", "updated_at": "2025-09-11T05:50:21Z" } ] } ``` ### Retrieve role details 1. Action: Retrieves detailed information about a specific Postgres role within a branch. 2. Endpoint: `GET /projects/{project_id}/branches/{branch_id}/roles/{role_name}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. - `role_name` (string, required): The name of the role. Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/roles/new_app_user' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "role": { "branch_id": "br-super-wildflower-adniii9u", "name": "new_app_user", "protected": false, "created_at": "2025-09-11T05:50:21Z", "updated_at": "2025-09-11T05:50:21Z" } } ``` ### Delete role 1. Action: Deletes the specified Postgres role from the branch. This action is permanent. 2. Endpoint: `DELETE /projects/{project_id}/branches/{branch_id}/roles/{role_name}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `branch_id` (string, required): The unique identifier of the branch. - `role_name` (string, required): The name of the role to delete. Example Request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/branches/br-super-wildflower-adniii9u/roles/new_app_user' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "role": { "branch_id": "br-super-wildflower-adniii9u", "name": "new_app_user", "protected": false, "created_at": "2025-09-11T05:50:21Z", "updated_at": "2025-09-11T05:50:21Z" }, "operations": [ { "id": "0e910f98-dcd2-445f-aaf4-729476a30492", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-09-11T05:58:00Z", "updated_at": "2025-09-11T05:58:00Z", "total_duration_ms": 0 } ] } ``` ```` ## Neon API Rules: Manage Compute Endpoints Save the following content to a file named `neon-api-endpoints.mdc` in your AI tool's rules directory. ````md --- description: Use these rules to manage compute endpoints associated with branches in a project. alwaysApply: false --- ## Overview This section provides rules for managing compute endpoints associated with branches in a project. Compute endpoints are Neon compute instances that allow you to connect to and interact with your databases. ## Manage compute endpoints ### Create compute endpoint 1. Action: Creates a new compute endpoint (a Neon compute instance) and associates it with a specified branch. 2. Endpoint: `POST /projects/{project_id}/endpoints` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. 4. 
Body Parameters: `endpoint` (object, required): The container for the new endpoint's properties.
   - `branch_id` (string, required): The ID of the branch to associate the endpoint with.
   - `type` (string, required): The endpoint type. A branch can have only one `read_write` endpoint but multiple `read_only` endpoints. Allowed values: `read_write`, `read_only`.
   - `region_id` (string, optional): The region where the endpoint will be created. Must match the project's region.
   - `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum `0.25`.
   - `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum `0.25`.
   - `provisioner` (string, optional): The compute provisioner. Specify `k8s-neonvm` to enable Autoscaling. Allowed values: `k8s-pod`, `k8s-neonvm`.
   - `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before suspending the compute. Ranges from -1 (never suspend) to 604800 (1 week).
   - `disabled` (boolean, optional): If `true`, restricts connections to the endpoint.

Example Request:

```bash
curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-your-branch-id",
    "type": "read_only"
  }
}'
```

Example Response:

```json
{
  "endpoint": {
    "host": "ep-proud-mud-adwmnxz4.c-2.us-east-1.aws.neon.tech",
    "id": "ep-proud-mud-adwmnxz4",
    "project_id": "hidden-river-50598307",
    "branch_id": "br-super-wildflower-adniii9u",
    "autoscaling_limit_min_cu": 0.25,
    "autoscaling_limit_max_cu": 2,
    "region_id": "aws-us-east-1",
    "type": "read_only",
    "current_state": "init",
    "pending_state": "active",
    "settings": {},
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "creation_source": "console",
    "created_at": "2025-09-11T06:25:12Z",
    "updated_at": "2025-09-11T06:25:12Z",
    "proxy_host": "c-2.us-east-1.aws.neon.tech",
    "suspend_timeout_seconds": 0,
    "provisioner": "k8s-neonvm"
  },
  "operations": [
    {
      "id": "4d10642f-5212-4517-ad60-afd28c9096e2",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-super-wildflower-adniii9u",
      "endpoint_id": "ep-proud-mud-adwmnxz4",
      "action": "start_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-09-11T06:25:12Z",
      "updated_at": "2025-09-11T06:25:12Z",
      "total_duration_ms": 0
    }
  ]
}
```
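The curl example above sets only the required fields; the optional autoscaling and suspend settings ride along in the same `endpoint` object. A minimal TypeScript sketch of the same request (hypothetical `branch_id` and limit values; assumes `NEON_API_KEY` is set and a runtime with global `fetch`):

```typescript
// createEndpoint.ts: sketch of POST /projects/{project_id}/endpoints
async function createReadReplica(projectId: string) {
  const res = await fetch(`https://console.neon.tech/api/v2/projects/${projectId}/endpoints`, {
    method: 'POST',
    headers: {
      Accept: 'application/json',
      Authorization: `Bearer ${process.env.NEON_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      endpoint: {
        branch_id: 'br-your-branch-id', // hypothetical branch ID
        type: 'read_only',
        autoscaling_limit_min_cu: 0.25, // optional: minimum compute size
        autoscaling_limit_max_cu: 2, // optional: maximum compute size
        suspend_timeout_seconds: 300, // optional: suspend after 5 minutes of inactivity
        provisioner: 'k8s-neonvm', // optional: required for autoscaling
      },
    }),
  });
  if (!res.ok) throw new Error(`Neon API error: ${res.status}`);
  const { endpoint, operations } = await res.json();
  console.log(endpoint.id, operations[0]?.status); // e.g. "ep-..." "running"
  return endpoint;
}
```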
### List compute endpoints

1. Action: Retrieves a list of all compute endpoints for the specified project.
2. Endpoint: `GET /projects/{project_id}/endpoints`
3. Path Parameters:
   - `project_id` (string, required): The unique identifier of the project.

Example Request:

```bash
curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Example Response:

```json
{
  "endpoints": [
    {
      "host": "ep-round-morning-adtpn2oc.c-2.us-east-1.aws.neon.tech",
      "id": "ep-round-morning-adtpn2oc",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-long-feather-adpbgzlx",
      "autoscaling_limit_min_cu": 0.25,
      "autoscaling_limit_max_cu": 2,
      "region_id": "aws-us-east-1",
      "type": "read_write",
      "current_state": "active",
      "settings": { "pg_settings": {} },
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2025-09-11T06:28:33Z",
      "creation_source": "console",
      "created_at": "2025-09-10T12:14:58Z",
      "updated_at": "2025-09-11T06:28:34Z",
      "started_at": "2025-09-11T06:28:23Z",
      "proxy_host": "c-2.us-east-1.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-neonvm",
      "compute_release_version": "9509"
    },
    {
      "host": "ep-ancient-brook-ad5ea04d.c-2.us-east-1.aws.neon.tech",
      "id": "ep-ancient-brook-ad5ea04d",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-super-wildflower-adniii9u",
      "autoscaling_limit_min_cu": 0.25,
      "autoscaling_limit_max_cu": 1,
      "region_id": "aws-us-east-1",
      "type": "read_write",
      "current_state": "active",
      "settings": { "pg_settings": {} },
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2025-09-11T06:28:26Z",
      "creation_source": "console",
      "created_at": "2025-09-10T12:15:04Z",
      "updated_at": "2025-09-11T06:28:34Z",
      "started_at": "2025-09-11T06:28:20Z",
      "proxy_host": "c-2.us-east-1.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-neonvm",
      "compute_release_version": "9509"
    },
    {
      "host": "ep-proud-mud-adwmnxz4.c-2.us-east-1.aws.neon.tech",
      "id": "ep-proud-mud-adwmnxz4",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-super-wildflower-adniii9u",
      "autoscaling_limit_min_cu": 0.25,
      "autoscaling_limit_max_cu": 2,
      "region_id": "aws-us-east-1",
      "type": "read_only",
      "current_state": "idle",
      "settings": { "pg_settings": {} },
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2000-01-01T00:00:00Z",
      "creation_source": "console",
      "created_at": "2025-09-11T06:25:12Z",
      "updated_at": "2025-09-11T06:30:26Z",
      "suspended_at": "2025-09-11T06:30:26Z",
      "proxy_host": "c-2.us-east-1.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-neonvm"
    }
  ]
}
```

### Retrieve compute endpoint details

1. Action: Retrieves detailed information about a specific compute endpoint, including its configuration (e.g., autoscaling limits), current state (`active` or `idle`), and associated branch ID.
2. Endpoint: `GET /projects/{project_id}/endpoints/{endpoint_id}`
3. Path Parameters:
   - `project_id` (string, required): The unique identifier of the project.
   - `endpoint_id` (string, required): The unique identifier of the compute endpoint.
Example Request: ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints/ep-proud-mud-adwmnxz4' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "endpoint": { "host": "ep-proud-mud-adwmnxz4.c-2.us-east-1.aws.neon.tech", "id": "ep-proud-mud-adwmnxz4", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 2, "region_id": "aws-us-east-1", "type": "read_only", "current_state": "idle", "settings": { "pg_settings": {} }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "last_active": "2000-01-01T00:00:00Z", "creation_source": "console", "created_at": "2025-09-11T06:25:12Z", "updated_at": "2025-09-11T06:30:26Z", "suspended_at": "2025-09-11T06:30:26Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" } } ``` ### Update compute endpoint 1. Action: Updates the configuration of a specified compute endpoint. 2. Endpoint: `PATCH /projects/{project_id}/endpoints/{endpoint_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `endpoint_id` (string, required): The unique identifier of the compute endpoint. 4. Body Parameters: `endpoint` (object, required): The container for the endpoint attributes to update. - `autoscaling_limit_min_cu` (number, optional): A new minimum number of Compute Units (CU). - `autoscaling_limit_max_cu` (number, optional): A new maximum number of Compute Units (CU). - `suspend_timeout_seconds` (integer, optional): A new inactivity period in seconds before suspension. - `disabled` (boolean, optional): Set to `true` to disable connections or `false` to enable them. - `provisioner` (string, optional): Change the compute provisioner. Example: Update autoscaling limits ```bash curl -X 'PATCH' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints/ep-proud-mud-adwmnxz4' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "endpoint": { "autoscaling_limit_min_cu": 0.5, "autoscaling_limit_max_cu": 1 } }' ``` Example Response: ```json { "endpoint": { "host": "ep-proud-mud-adwmnxz4.c-2.us-east-1.aws.neon.tech", "id": "ep-proud-mud-adwmnxz4", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "autoscaling_limit_min_cu": 0.5, "autoscaling_limit_max_cu": 1, "region_id": "aws-us-east-1", "type": "read_only", "current_state": "idle", "settings": { "pg_settings": {} }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "last_active": "2000-01-01T00:00:00Z", "creation_source": "console", "created_at": "2025-09-11T06:25:12Z", "updated_at": "2025-09-11T06:37:48Z", "suspended_at": "2025-09-11T06:30:26Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" }, "operations": [] } ``` ### Delete compute endpoint 1. Action: Deletes the specified compute endpoint. This action drops any existing network connections to the endpoint. 2. Endpoint: `DELETE /projects/{project_id}/endpoints/{endpoint_id}` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `endpoint_id` (string, required): The unique identifier of the compute endpoint to delete. 
Example Request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints/ep-proud-mud-adwmnxz4' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "endpoint": { "host": "ep-proud-mud-adwmnxz4.c-2.us-east-1.aws.neon.tech", "id": "ep-proud-mud-adwmnxz4", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "autoscaling_limit_min_cu": 0.5, "autoscaling_limit_max_cu": 1, "region_id": "aws-us-east-1", "type": "read_only", "current_state": "idle", "settings": { "pg_settings": {} }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "last_active": "2000-01-01T00:00:00Z", "creation_source": "console", "created_at": "2025-09-11T06:25:12Z", "updated_at": "2025-09-11T06:41:22Z", "suspended_at": "2025-09-11T06:30:26Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" }, "operations": [] } ``` ### Start compute endpoint 1. Action: Manually starts a compute endpoint that is currently in an `idle` state. The endpoint is ready for connections once the start operation completes successfully. 2. Endpoint: `POST /projects/{project_id}/endpoints/{endpoint_id}/start` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `endpoint_id` (string, required): The unique identifier of the compute endpoint. Example Request: ```bash curl -X 'POST' \ 'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints/ep-ancient-brook-ad5ea04d/start' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Example Response: ```json { "endpoint": { "host": "ep-ancient-brook-ad5ea04d.c-2.us-east-1.aws.neon.tech", "id": "ep-ancient-brook-ad5ea04d", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 1, "region_id": "aws-us-east-1", "type": "read_write", "current_state": "idle", "pending_state": "active", "settings": { "pg_settings": {} }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "last_active": "2025-09-11T06:28:26Z", "creation_source": "console", "created_at": "2025-09-10T12:15:04Z", "updated_at": "2025-09-11T06:51:25Z", "suspended_at": "2025-09-11T06:34:31Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" }, "operations": [ { "id": "d4324b7e-0d73-467b-bc61-2f743a0c204b", "project_id": "hidden-river-50598307", "branch_id": "br-super-wildflower-adniii9u", "endpoint_id": "ep-ancient-brook-ad5ea04d", "action": "start_compute", "status": "running", "failures_count": 0, "created_at": "2025-09-11T07:51:18Z", "updated_at": "2025-09-11T07:51:18Z", "total_duration_ms": 0 } ] } ``` ### Suspend compute endpoint 1. Action: Manually suspends an `active` compute endpoint, forcing it into an `idle` state. This will immediately drop any active connections to the endpoint. 2. Endpoint: `POST /projects/{project_id}/endpoints/{endpoint_id}/suspend` 3. Path Parameters: - `project_id` (string, required): The unique identifier of the project. - `endpoint_id` (string, required): The unique identifier of the compute endpoint. 
Example Request:

```bash
curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints/ep-ancient-brook-ad5ea04d/suspend' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Example Response:

```json
{
  "endpoint": {
    "host": "ep-ancient-brook-ad5ea04d.c-2.us-east-1.aws.neon.tech",
    "id": "ep-ancient-brook-ad5ea04d",
    "project_id": "hidden-river-50598307",
    "branch_id": "br-super-wildflower-adniii9u",
    "autoscaling_limit_min_cu": 0.25,
    "autoscaling_limit_max_cu": 1,
    "region_id": "aws-us-east-1",
    "type": "read_write",
    "current_state": "active",
    "pending_state": "idle",
    "settings": { "pg_settings": {} },
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "last_active": "2025-09-11T07:51:19Z",
    "creation_source": "console",
    "created_at": "2025-09-10T12:15:04Z",
    "updated_at": "2025-09-11T07:51:30Z",
    "started_at": "2025-09-11T07:51:18Z",
    "proxy_host": "c-2.us-east-1.aws.neon.tech",
    "suspend_timeout_seconds": 0,
    "provisioner": "k8s-neonvm"
  },
  "operations": [
    {
      "id": "a100287d-203f-4b89-9c08-292ff70dfd8c",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-super-wildflower-adniii9u",
      "endpoint_id": "ep-ancient-brook-ad5ea04d",
      "action": "suspend_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-09-11T07:52:50Z",
      "updated_at": "2025-09-11T07:52:50Z",
      "total_duration_ms": 0
    }
  ]
}
```

### Restart compute endpoint

1. Action: Restarts the specified compute endpoint. This involves an immediate suspend operation followed by a start operation. This is useful for applying configuration changes or refreshing the compute instance. All active connections will be dropped.
2. Endpoint: `POST /projects/{project_id}/endpoints/{endpoint_id}/restart`
3. Path Parameters:
   - `project_id` (string, required): The unique identifier of the project.
   - `endpoint_id` (string, required): The unique identifier of the compute endpoint.

Example Request:

```bash
curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/hidden-river-50598307/endpoints/ep-ancient-brook-ad5ea04d/restart' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Example Response:

```json
{
  "endpoint": {
    "host": "ep-ancient-brook-ad5ea04d.c-2.us-east-1.aws.neon.tech",
    "id": "ep-ancient-brook-ad5ea04d",
    "project_id": "hidden-river-50598307",
    "branch_id": "br-super-wildflower-adniii9u",
    "autoscaling_limit_min_cu": 0.25,
    "autoscaling_limit_max_cu": 1,
    "region_id": "aws-us-east-1",
    "type": "read_write",
    "current_state": "active",
    "pending_state": "active",
    "settings": { "pg_settings": {} },
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "last_active": "2025-09-11T07:51:19Z",
    "creation_source": "console",
    "created_at": "2025-09-10T12:15:04Z",
    "updated_at": "2025-09-11T07:53:07Z",
    "started_at": "2025-09-11T07:53:07Z",
    "proxy_host": "c-2.us-east-1.aws.neon.tech",
    "suspend_timeout_seconds": 0,
    "provisioner": "k8s-neonvm"
  },
  "operations": [
    {
      "id": "e0cdbc42-1f8a-4368-a19b-ab04628c6a89",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-super-wildflower-adniii9u",
      "endpoint_id": "ep-ancient-brook-ad5ea04d",
      "action": "suspend_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-09-11T07:54:00Z",
      "updated_at": "2025-09-11T07:54:00Z",
      "total_duration_ms": 0
    },
    {
      "id": "b5f7d061-e8d3-4bcc-aa86-d00f5ee5f7b6",
      "project_id": "hidden-river-50598307",
      "branch_id": "br-super-wildflower-adniii9u",
      "endpoint_id": "ep-ancient-brook-ad5ea04d",
      "action": "start_compute",
      "status": "scheduling",
      "failures_count": 0,
      "created_at": "2025-09-11T07:54:00Z",
      "updated_at": "2025-09-11T07:54:00Z",
      "total_duration_ms": 0
    }
  ]
}
```
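The start, suspend, and restart endpoints are asynchronous: the response returns immediately with `operations` entries still in a `running` or `scheduling` state. Before connecting to the endpoint or chaining dependent API calls, poll each operation until it reaches a terminal status. A minimal TypeScript sketch (a hypothetical helper, using the API's `GET /projects/{project_id}/operations/{operation_id}` endpoint and assuming `NEON_API_KEY` is set):

```typescript
// pollOperation.ts: wait for a Neon operation to reach a terminal status
const API = 'https://console.neon.tech/api/v2';

export async function pollOperation(projectId: string, operationId: string): Promise<void> {
  for (let attempt = 0; attempt < 120; attempt++) {
    const res = await fetch(`${API}/projects/${projectId}/operations/${operationId}`, {
      headers: {
        Accept: 'application/json',
        Authorization: `Bearer ${process.env.NEON_API_KEY}`,
      },
    });
    if (!res.ok) throw new Error(`Neon API error: ${res.status}`);
    const { operation } = await res.json();
    if (operation.status === 'finished') return; // terminal success
    if (['failed', 'error', 'cancelled', 'skipped'].includes(operation.status)) {
      throw new Error(`Operation ${operationId} ended with status: ${operation.status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, 1000)); // back off before re-polling
  }
  throw new Error(`Timed out waiting for operation ${operationId}`);
}
```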
````

## Neon API Rules: Manage Organizations

Save the following content to a file named `neon-api-organizations.mdc` in your AI tool's rules directory.

````md
---
description: Use these rules to manage organizations, their members, invitations, and organization API keys.
alwaysApply: false
---

## Overview

This section provides rules for managing organizations, their members, invitations, and organization API keys. Organizations allow multiple users to collaborate on projects and share resources within Neon.

## Manage organizations

### Retrieve organization details

1. Action: Retrieves detailed information about a specific organization.
2. Endpoint: `GET /organizations/{org_id}`
3. Path Parameters:
   - `org_id` (string, required): The unique identifier of the organization.

Example Request:

```bash
curl 'https://console.neon.tech/api/v2/organizations/{org_id}' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

### List organization members

1. Action: Retrieves a list of all members belonging to the specified organization.
2. Endpoint: `GET /organizations/{org_id}/members`
3. Path Parameters:
   - `org_id` (string, required): The unique identifier of the organization.

Example Request:

```bash
curl 'https://console.neon.tech/api/v2/organizations/{org_id}/members' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

### Retrieve organization member details

1. Action: Retrieves information about a specific member of an organization.
2. Endpoint: `GET /organizations/{org_id}/members/{member_id}`
3. Path Parameters:
   - `org_id` (string, required): The unique identifier of the organization.
   - `member_id` (UUID, required): The unique identifier of the organization member.
Example Request: ```bash curl 'https://console.neon.tech/api/v2/organizations/{org_id}/members/{member_id}' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` ### Update role for organization member 1. Action: Updates the role of a specified member within an organization. 2. Prerequisite: This action can only be performed by an organization `admin`. 3. Endpoint: `PATCH /organizations/{org_id}/members/{member_id}` 4. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. - `member_id` (UUID, required): The unique identifier of the organization member. 5. Body Parameters: - `role` (string, required): The new role for the member. Allowed values: `admin`, `member`. Example: Change a member's role to admin ```bash curl -X 'PATCH' \ 'https://console.neon.tech/api/v2/organizations/{org_id}/members/{member_id}' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{"role": "admin"}' ``` ### Remove member from organization 1. Action: Removes a specified member from an organization. 2. Prerequisites: - This action can only be performed by an organization `admin`. - An admin cannot be removed if they are the only admin left in the organization. 3. Endpoint: `DELETE /organizations/{org_id}/members/{member_id}` 4. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. - `member_id` (UUID, required): The unique identifier of the organization member to remove. Example Request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/organizations/{org_id}/members/{member_id}' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` ### Create organization invitations 1. Action: Creates and sends one or more email invitations for users to join a specific organization. 2. Endpoint: `POST /organizations/{org_id}/invitations` 3. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. 4. Body Parameters: `invitations` (array of objects, required): A list of invitations to create. - `email` (string, required): The email address of the user to invite. - `role` (string, required): The role the invited user will have. Allowed values: `admin`, `member`. Example: Invite two users with different roles ```bash curl -X 'POST' \ 'https://console.neon.tech/api/v2/organizations/{org_id}/invitations' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "invitations": [ { "email": "developer@example.com", "role": "member" }, { "email": "manager@example.com", "role": "admin" } ] }' ``` ### List organization invitations 1. Action: Retrieves information about outstanding invitations for the specified organization. 2. Endpoint: `GET /organizations/{org_id}/invitations` 3. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. Example Request: ```bash curl 'https://console.neon.tech/api/v2/organizations/{org_id}/invitations' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` ### Create organization API key 1. Action: Creates a new API key for the specified organization. The key can be scoped to the entire organization or limited to a single project within it. 2. Endpoint: `POST /organizations/{org_id}/api_keys` 3. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. 4. 
Body Parameters: - `key_name` (string, required): A user-specified name for the API key (max 64 characters). - `project_id` (string, optional): If provided, the API key's access will be restricted to only this project. 5. Authorization: Use a Personal API Key of an organization `admin` to create organization API keys. Example: Create a project-scoped API key ```bash curl -X 'POST' \ 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $PERSONAL_API_KEY_OF_ADMIN" \ -H 'Content-Type: application/json' \ -d '{ "key_name": "ci-pipeline-key-for-project-x", "project_id": "project-id-123" }' ``` ### List organization API keys 1. Action: Retrieves a list of all API keys created for the specified organization. 2. Endpoint: `GET /organizations/{org_id}/api_keys` 3. Note: The response includes metadata about the keys (like `id` and `name`) but does not include the secret key tokens themselves. Tokens are only visible upon creation. 4. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. Example Request: ```bash curl 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` ### Revoke organization API key 1. Action: Permanently revokes the specified organization API key. 2. Endpoint: `DELETE /organizations/{org_id}/api_keys/{key_id}` 3. Path Parameters: - `org_id` (string, required): The unique identifier of the organization. - `key_id` (integer, required): The unique identifier of the API key to revoke. You can obtain this ID by listing the organization's API keys. Example Request: ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys/{key_id}' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` ```` --- # Source: https://neon.com/llms/ai-ai-rules-neon-auth.txt # AI Rules: Neon Auth > The "AI Rules: Neon Auth" document outlines the authentication rules and procedures for integrating AI capabilities within the Neon platform, detailing the necessary configurations and security protocols for seamless AI service access. ## Source - [AI Rules: Neon Auth HTML](https://neon.com/docs/ai/ai-rules-neon-auth): The original HTML version of this documentation **Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions. Related docs: - [Neon Auth](https://neon.com/docs/guides/neon-auth) Repository: - [READ ME](https://github.com/neondatabase-labs/ai-rules) - [neon-auth.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-auth.mdc) ## How to use You can use these rules in two ways: ## Option 1: Copy from this page With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/neon-auth.mdc` and they'll be automatically applied when working with matching files (`*.ts`, `*.tsx`). For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands). ## Option 2: Clone from repository If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules). 
Once added to your project, AI tools will automatically use these rules when working with Neon Auth code. You can also reference them explicitly in prompts.

## Rules

````md
---
description: Use these rules to relate your database data with your Auth users information
globs: *.tsx, *.ts
alwaysApply: false
---

# Neon Auth guidelines

## The Problem Neon Auth Solves

Neon Auth integrates user authentication directly with your Neon Postgres database. Its primary purpose is to **eliminate the complexity of synchronizing user data** between your authentication provider and your application's database.

- **Before Neon Auth:** Developers need to build and maintain custom sync logic, webhooks, and separate user tables to handle user creation, updates, and deletions. This is error-prone and adds overhead.
- **With Neon Auth:** User data is automatically populated and updated in near real-time within a dedicated `neon_auth.users_sync` table in your database. This allows you to treat user profiles as regular database rows, ready for immediate use in SQL joins and application logic.

## The Two Halves of Neon Auth

Think of Neon Auth as a unified system with two main components:

1. **The Authentication Layer (SDK):** This is for managing user sessions, sign-ins, sign-ups, and accessing user information in your application code (client and server components). It is powered by the Stack Auth SDK (`@stackframe/stack`).
2. **The Database Layer (Data Sync):** This is the `neon_auth.users_sync` table within your Neon database. It serves as a near real-time, read-only replica of your user data, ready to be joined with your application's tables.

## Stack Auth Setup Guidelines

### Initial Setup

Ask the human developer to complete the following steps:

- Enable Neon Auth: In the Neon project console, navigate to the Auth page and click Enable Neon Auth.
- Get Credentials: Go to the Configuration tab and copy the environment variables.

Steps you can perform after that:

- Run the installation wizard with: `npx @stackframe/init-stack@latest --no-browser`
- Update the API keys in your `.env.local` file with the values from the Neon console:
  - `NEXT_PUBLIC_STACK_PROJECT_ID`
  - `NEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY`
  - `STACK_SECRET_SERVER_KEY`
- Key files created/updated include:
  - `app/handler/[...stack]/page.tsx` (default auth pages)
  - `app/layout.tsx` (wrapped with StackProvider and StackTheme)
  - `app/loading.tsx` (provides a Suspense fallback)
  - `stack/server.tsx` (initializes your Stack server app)
  - `stack/client.tsx` (initializes your Stack client app)

### UI Components

- Use pre-built components from `@stackframe/stack` like `<SignIn />`, `<SignUp />`, and `<UserButton />` to quickly set up auth UI.
- You can also compose smaller pieces like `<OAuthButtonGroup />`, `<MagicLinkSignIn />`, and `<CredentialSignIn />` for custom flows.
- Example:

```tsx
import { SignIn } from '@stackframe/stack';
export default function Page() {
  return <SignIn />;
}
```

### User Management

- In Client Components, use the `useUser()` hook to retrieve the current user (it returns `null` when not signed in).
- Update user details using `user.update({...})` and sign out via `user.signOut()` (see the sketch below).
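A minimal client-side sketch of those two calls (a hypothetical component, assuming the setup above; the `displayName` value is illustrative):

```tsx
"use client";
import { useUser } from "@stackframe/stack";

export function AccountControls() {
  const user = useUser(); // null when not signed in
  if (!user) return null;

  return (
    <div>
      {/* Updates the synced profile through the SDK, never via SQL */}
      <button onClick={() => user.update({ displayName: "New Name" })}>
        Rename
      </button>
      <button onClick={() => user.signOut()}>Sign out</button>
    </div>
  );
}
```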
### Client Component Integration

- Client Components rely on hooks like `useUser()` and `useStackApp()`.
- Example:

```tsx
"use client";
import { useUser } from "@stackframe/stack";

export function MyComponent() {
  const user = useUser();
  return <div>{user ? `Hello, ${user.displayName}` : "Not logged in"}</div>;
}
```

### Server Component Integration

- For Server Components, use `stackServerApp.getUser()` from the `stack/server.tsx` file.
- Example:

```tsx
import { stackServerApp } from "@/stack/server";

export default async function ServerComponent() {
  const user = await stackServerApp.getUser();
  return <div>{user ? `Hello, ${user.displayName}` : "Not logged in"}</div>;
}
```
### Page Protection

- Protect pages by redirecting to the sign-in page:
  - Using `useUser({ or: "redirect" })` in Client Components.
  - Using `await stackServerApp.getUser({ or: "redirect" })` in Server Components.
  - Implementing middleware that checks for a user and redirects to `/handler/sign-in` if not found.
- Example middleware:

```tsx
import { NextRequest, NextResponse } from 'next/server';
import { stackServerApp } from '@/stack/server';

export async function middleware(request: NextRequest) {
  const user = await stackServerApp.getUser();
  if (!user) {
    return NextResponse.redirect(new URL('/handler/sign-in', request.url));
  }
  return NextResponse.next();
}

export const config = { matcher: '/protected/:path*' };
```

## Stack Auth SDK Reference

The Stack Auth SDK provides several types and methods:

```tsx
type StackClientApp = {
  new(options): StackClientApp;
  getUser([options]): Promise<CurrentUser | null>;
  useUser([options]): User;
  getProject(): Promise<Project>;
  useProject(): Project;
  signInWithOAuth(provider): void;
  signInWithCredential([options]): Promise<...>;
  signUpWithCredential([options]): Promise<...>;
  sendForgotPasswordEmail(email): Promise<...>;
  sendMagicLinkEmail(email): Promise<...>;
};

type StackServerApp =
  & StackClientApp
  & {
    new(options): StackServerApp;
    getUser([id][, options]): Promise<ServerUser | null>;
    useUser([id][, options]): ServerUser;
    listUsers([options]): Promise<ServerUser[]>;
    useUsers([options]): ServerUser[];
    createUser([options]): Promise<ServerUser>;
    getTeam(id): Promise<ServerTeam | null>;
    useTeam(id): ServerTeam;
    listTeams(): Promise<ServerTeam[]>;
    useTeams(): ServerTeam[];
    createTeam([options]): Promise<ServerTeam>;
  };

type CurrentUser = {
  id: string;
  displayName: string | null;
  primaryEmail: string | null;
  primaryEmailVerified: boolean;
  profileImageUrl: string | null;
  signedUpAt: Date;
  hasPassword: boolean;
  clientMetadata: Json;
  clientReadOnlyMetadata: Json;
  selectedTeam: Team | null;
  update(data): Promise<void>;
  updatePassword(data): Promise<void>;
  getAuthHeaders(): Promise<Record<string, string>>;
  getAuthJson(): Promise<{ accessToken: string | null }>;
  signOut([options]): Promise<void>;
  delete(): Promise<void>;
  getTeam(id): Promise<Team | null>;
  useTeam(id): Team | null;
  listTeams(): Promise<Team[]>;
  useTeams(): Team[];
  setSelectedTeam(team): Promise<void>;
  createTeam(data): Promise<Team>;
  leaveTeam(team): Promise<void>;
  getTeamProfile(team): Promise<EditableTeamMemberProfile>;
  useTeamProfile(team): EditableTeamMemberProfile;
  hasPermission(scope, permissionId): Promise<boolean>;
  getPermission(scope, permissionId[, options]): Promise<TeamPermission | null>;
  usePermission(scope, permissionId[, options]): TeamPermission | null;
  listPermissions(scope[, options]): Promise<TeamPermission[]>;
  usePermissions(scope[, options]): TeamPermission[];
  listContactChannels(): Promise<ContactChannel[]>;
  useContactChannels(): ContactChannel[];
};
```

## Stack Auth Best Practices

- Use the appropriate methods based on component type:
  - Use hook-based methods (`useXyz`) in Client Components
  - Use promise-based methods (`getXyz`) in Server Components
- Always protect sensitive routes using the provided mechanisms
- Use pre-built UI components whenever possible to ensure proper auth flow handling

## Neon Auth Database Integration

### Database Schema

Neon Auth creates and manages a schema in your database that stores user information:

- **Schema Name**: `neon_auth`
- **Primary Table**: `users_sync`
- **Table Structure**:
  - `raw_json` (JSONB, NOT NULL): Complete user data in JSON format
  - `id` (TEXT, NOT NULL, PRIMARY KEY): Unique user identifier
  - `name` (TEXT, NULLABLE): User's display name
  - `email` (TEXT, NULLABLE): User's email address
  - `created_at` (TIMESTAMP WITH TIME ZONE, NULLABLE): When the user was created
  - `updated_at` (TIMESTAMP WITH TIME ZONE, NULLABLE): When the user was last updated
  - `deleted_at` (TIMESTAMP WITH TIME ZONE, NULLABLE): When the user was deleted (if applicable)
- **Indexes**:
  - `users_sync_deleted_at_idx` on `deleted_at`: For quickly identifying deleted users

> NOTE: The table is automatically created and managed by Neon Auth. Do not manually create or modify it. This is provided for your reference only.

### Database Usage

#### Querying Active Users

The `deleted_at` column is used for soft deletes. Always include `WHERE deleted_at IS NULL` in your queries to ensure you only work with active user accounts.

```sql
SELECT * FROM neon_auth.users_sync WHERE deleted_at IS NULL;
```

#### Relating User Data with Application Tables

To join user data with your application tables:

```sql
SELECT t.*, u.id AS user_id, u.name AS user_name, u.email AS user_email
FROM public.todos t
LEFT JOIN neon_auth.users_sync u ON t.owner = u.id
WHERE u.deleted_at IS NULL
ORDER BY t.id;
```

## Integration Flow

1. User authentication happens via Stack Auth UI components
2. User data is automatically synced to the `neon_auth.users_sync` table
3. Your application code accesses user information either through:
   - Stack Auth hooks/methods (in React components)
   - SQL queries to the `neon_auth.users_sync` table (for read-only data operations, as in the sketch below)
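A minimal sketch of that last step from server-side application code, reusing the join above with the Neon serverless driver (assumes `DATABASE_URL` is set and a `public.todos` table with an `owner` column exists):

```typescript
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

// Read-only join against the synced user table; never INSERT/UPDATE/DELETE here
export async function getTodosWithOwners() {
  return sql`
    SELECT t.*, u.name AS user_name, u.email AS user_email
    FROM public.todos t
    LEFT JOIN neon_auth.users_sync u ON t.owner = u.id
    WHERE u.deleted_at IS NULL
    ORDER BY t.id
  `;
}
```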

## Best Practices for Integration

- **The `users_sync` Table is a Read-Only Replica**: User data is managed by the Neon Auth service. **NEVER** `INSERT`, `UPDATE`, or `DELETE` rows directly in the `neon_auth.users_sync` table. All user modifications must happen through the Authentication Layer SDK (e.g., `user.update({...})`, `user.delete()`). Direct database modifications will be overwritten and can break the sync process.
- **Use Foreign Keys Correctly**: You **SHOULD** create foreign key constraints from your application tables *to* the `neon_auth.users_sync(id)` column. This maintains referential integrity. Do **NOT** attempt to add foreign keys *from* the `users_sync` table to your own tables.

```sql
-- CORRECT: Your table references the Neon Auth table.
CREATE TABLE posts (
  id SERIAL PRIMARY KEY,
  content TEXT,
  author_id TEXT NOT NULL REFERENCES neon_auth.users_sync(id) ON DELETE CASCADE
);

-- INCORRECT: Do not try to alter the Neon Auth table. This will break the entire Neon Auth system.
-- ALTER TABLE neon_auth.users_sync ADD CONSTRAINT ...
```

## Example: Custom Profile Page with Database Integration

### Frontend Component

```tsx
'use client';
import { useUser, useStackApp, UserButton } from '@stackframe/stack';

export default function ProfilePage() {
  const user = useUser({ or: "redirect" });
  const app = useStackApp();
  return (
    <div>
      <UserButton />
      <h1>Welcome, {user.displayName || "User"}</h1>
      <p>Email: {user.primaryEmail}</p>
    </div>
  );
}
```

### Database Query for User's Content

```sql
-- Get all todos for the currently logged in user
SELECT t.*
FROM public.todos t
LEFT JOIN neon_auth.users_sync u ON t.owner = u.id
WHERE u.id = $current_user_id
  AND u.deleted_at IS NULL
ORDER BY t.created_at DESC;
```
````

---

# Source: https://neon.com/llms/ai-ai-rules-neon-drizzle.txt

# AI Rules: Neon with Drizzle

> The document "AI Rules: Neon with Drizzle" outlines the integration of AI-driven rules within the Neon platform using Drizzle, detailing the setup and configuration processes for implementing automated decision-making workflows.

## Source

- [AI Rules: Neon with Drizzle HTML](https://neon.com/docs/ai/ai-rules-neon-drizzle): The original HTML version of this documentation

**Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions.

Related docs:

- [Get started with Drizzle and Neon](https://orm.drizzle.team/docs/get-started/neon-new)

Repository:

- [README](https://github.com/neondatabase-labs/ai-rules)
- [neon-drizzle.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-drizzle.mdc)

## How to use

You can use these rules in two ways:

## Option 1: Copy from this page

With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/neon-drizzle.mdc` and they'll be automatically applied when working with matching files (`*.ts`, `*.tsx`).

For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands).

## Option 2: Clone from repository

If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules).

Once added to your project, AI tools will automatically use these rules when working with Neon with Drizzle code. You can also reference them explicitly in prompts.

## Rules

````md
---
description: Use these rules when integrating Neon (serverless Postgres) with Drizzle ORM
globs: *.ts, *.tsx
alwaysApply: false
---

# Neon and Drizzle Integration Guidelines

## Overview

This guide covers the specific integration patterns, configurations, and optimizations for using **Drizzle ORM** with **Neon** Postgres. Follow these guidelines to ensure efficient, secure, and robust database operations in serverless and traditional environments.

## Dependencies

For Neon with Drizzle ORM integration, include these specific dependencies. The `ws` package is required for persistent WebSocket connections in Node.js environments older than v22.

```bash
npm install drizzle-orm @neondatabase/serverless ws
npm install -D drizzle-kit dotenv @types/ws
```

## Neon Connection String

Always use the Neon connection string format and store it in an environment file (`.env`, `.env.local`).

```text
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Connection Setup: Choosing the Right Driver Adapter

Neon's serverless driver offers two connection methods: HTTP and WebSocket. Drizzle has a specific adapter for each.

### 1. HTTP Adapter (Recommended for Serverless/Edge)

This method is ideal for short-lived, stateless environments like Vercel Edge Functions or AWS Lambda. It uses `fetch` for each query, resulting in very low latency for single operations.
- Use the `neon` client from `@neondatabase/serverless`.
- Use the `drizzle` adapter from `drizzle-orm/neon-http`.

```typescript
// src/db.ts
import { drizzle } from "drizzle-orm/neon-http";
import { neon } from "@neondatabase/serverless";
import { config } from "dotenv";

config({ path: ".env" });

if (!process.env.DATABASE_URL) {
  throw new Error('DATABASE_URL is not defined');
}

const sql = neon(process.env.DATABASE_URL);
export const db = drizzle(sql);
```

### 2. WebSocket Adapter (for `node-postgres` compatibility)

This method is suitable for long-running applications (e.g., a standard Node.js server) or when you need support for interactive transactions. It maintains a persistent WebSocket connection.

- Use the `Pool` client from `@neondatabase/serverless`.
- Use the `drizzle` adapter from `drizzle-orm/neon-serverless`.
- Configure the WebSocket constructor for Node.js environments older than v22.

```typescript
// src/db.ts
import { drizzle } from 'drizzle-orm/neon-serverless';
import { Pool, neonConfig } from '@neondatabase/serverless';
import { config } from "dotenv";
import ws from 'ws';

config({ path: ".env" });

if (!process.env.DATABASE_URL) {
  throw new Error('DATABASE_URL is not defined');
}

// Required for Node.js < v22
neonConfig.webSocketConstructor = ws;

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool);
```

## Drizzle Config for Neon

Configure `drizzle.config.ts` to manage your schema and migrations. Neon is fully Postgres-compatible, so the dialect is `postgresql`.

```typescript
// drizzle.config.ts
import { config } from 'dotenv';
import { defineConfig } from "drizzle-kit";

config({ path: '.env.local' }); // Use .env.local for local dev

export default defineConfig({
  schema: "./src/schema.ts",
  out: "./drizzle", // Or your preferred migrations folder
  dialect: "postgresql",
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  }
});
```

## Migrations with Drizzle Kit

`drizzle-kit` is used to generate and apply schema changes to your Neon database.

### 1. Generate Migrations

After changing your schema in `src/schema.ts`, generate a new migration file.

```bash
npx drizzle-kit generate
```

This command reads your `drizzle.config.ts`, compares your schema to the database state, and creates SQL files in your output directory (`./drizzle`).

### 2. Apply Migrations

You can apply migrations via the command line or programmatically.

**Command Line:**

```bash
npx drizzle-kit migrate
```
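**Programmatically:** a minimal sketch, assuming the HTTP driver setup from `src/db.ts` above and migrations generated into `./drizzle`:

```typescript
// migrate.ts: apply pending migrations over the Neon HTTP driver
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';
import { migrate } from 'drizzle-orm/neon-http/migrator';
import { config } from 'dotenv';

config({ path: '.env' });

const sql = neon(process.env.DATABASE_URL!);
const db = drizzle(sql);

async function main() {
  // Runs any SQL files in ./drizzle that have not been applied yet
  await migrate(db, { migrationsFolder: './drizzle' });
  console.log('Migrations applied');
}

main();
```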
## Schema Considerations for Neon

### Standard Postgres Schema

Define your schema using Postgres-specific types from `drizzle-orm/pg-core`.

```typescript
// src/schema.ts
import { pgTable, serial, text, integer, timestamp } from 'drizzle-orm/pg-core';

export const usersTable = pgTable('users', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  email: text('email').notNull().unique(),
  role: text('role').default('user').notNull(),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});

// Export types for type safety
export type User = typeof usersTable.$inferSelect;
export type NewUser = typeof usersTable.$inferInsert;

// Example posts table for relationship demonstrations
export const postsTable = pgTable('posts', {
  id: serial('id').primaryKey(),
  title: text('title').notNull(),
  content: text('content').notNull(),
  userId: integer('user_id').notNull().references(() => usersTable.id, { onDelete: 'cascade' }),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});

export type Post = typeof postsTable.$inferSelect;
export type NewPost = typeof postsTable.$inferInsert;
```

### Integrating the Neon Auth `users_sync` Table

Neon Auth automatically synchronizes user data to a `neon_auth.users_sync` table. Drizzle provides a dedicated helper to integrate this seamlessly.

- **Use the `usersSync` helper:** Import `usersSync` from `drizzle-orm/neon` to represent the table without defining its columns manually.
- **Create foreign key relationships:** Link your application tables to `usersSync.id` to enforce data integrity.

```typescript
// src/schema.ts
import { pgTable, text, bigint, boolean, timestamp } from 'drizzle-orm/pg-core';
// Import the dedicated Neon Auth helper from Drizzle
import { usersSync } from 'drizzle-orm/neon';
import { eq } from 'drizzle-orm';

// Example: A `todos` table where each todo belongs to a Neon Auth user
export const todos = pgTable('todos', {
  id: bigint('id', { mode: 'bigint' }).primaryKey().generatedByDefaultAsIdentity(),
  task: text('task').notNull(),
  isComplete: boolean('is_complete').default(false).notNull(),
  insertedAt: timestamp('inserted_at').defaultNow().notNull(),
  // Create a foreign key relationship to the users_sync table
  ownerId: text('owner_id')
    .notNull()
    .references(() => usersSync.id, { onDelete: 'cascade' }),
});

// This allows for direct SQL joins with user data
async function getTodosWithUserEmails(db) {
  return db
    .select({ task: todos.task, ownerEmail: usersSync.email })
    .from(todos)
    .leftJoin(usersSync, eq(todos.ownerId, usersSync.id));
}
```

## Neon-Specific Query Optimizations

### Efficient Queries for Serverless

Optimize for Neon's serverless environment:

- Keep connections short-lived
- Use prepared statements for repeated queries
- Batch operations when possible

```typescript
// Example of optimized query for Neon
import { eq, sql } from 'drizzle-orm';
import { db } from '../db';
import { usersTable, NewUser } from '../schema';

export async function batchInsertUsers(users: NewUser[]) {
  // More efficient than multiple individual inserts on Neon
  return db.insert(usersTable).values(users).returning();
}

// For complex queries, use prepared statements with named placeholders
export const getUsersByRolePrepared = db
  .select()
  .from(usersTable)
  .where(eq(usersTable.role, sql.placeholder('role')))
  .prepare('get_users_by_role');

// Usage: await getUsersByRolePrepared.execute({ role: 'admin' })
```

### Transaction Handling with Neon

Neon supports transactions through Drizzle:

```typescript
import { db } from '../db';
import { usersTable, postsTable, NewUser, NewPost } from '../schema';

export async function createUserWithPosts(user: NewUser, posts: NewPost[]) {
  return await db.transaction(async (tx) => {
    const [newUser] = await tx.insert(usersTable).values(user).returning();
    if (posts.length > 0) {
      await tx.insert(postsTable).values(
        posts.map(post => ({ ...post, userId: newUser.id }))
      );
    }
    return newUser;
  });
}
```
## Working with Neon Branches

A key feature of Neon is database branching. You can create isolated copies of your database for development, testing, or preview environments. Manage connections to these branches using environment variables.

Here is a common pattern for setting up your database client to connect to different branches based on the environment:

```typescript
// Using different Neon branches with environment variables
import { drizzle } from "drizzle-orm/neon-http";
import { neon } from "@neondatabase/serverless";

// For multi-branch setup
const getBranchUrl = () => {
  const env = process.env.NODE_ENV;
  if (env === 'development') {
    return process.env.DEV_DATABASE_URL;
  } else if (env === 'test') {
    return process.env.TEST_DATABASE_URL;
  }
  return process.env.DATABASE_URL;
};

const sql = neon(getBranchUrl()!);
export const db = drizzle({ client: sql });
```

## Neon-Specific Error Handling

Handle Neon-specific connection issues:

```typescript
import { eq } from 'drizzle-orm';
import { db } from '../db';
import { usersTable } from '../schema';

export async function safeNeonOperation<T>(operation: () => Promise<T>): Promise<T> {
  try {
    return await operation();
  } catch (error: any) {
    // Handle Neon-specific error codes
    if (error.message?.includes('connection pool timeout')) {
      console.error('Neon connection pool timeout');
      // Handle appropriately
    }
    // Re-throw for other handling
    throw error;
  }
}

// Usage
export async function getUserSafely(id: number) {
  return safeNeonOperation(() =>
    db.select().from(usersTable).where(eq(usersTable.id, id))
  );
}
```

## Best Practices for Neon with Drizzle

1. **Connection Management**
   - Keep connection times short for serverless functions
   - Use connection pooling for high traffic applications
2. **Neon Features**
   - Utilize Neon branching for development and testing
   - Consider Neon's auto-scaling for database design
3. **Query Optimization**
   - Batch operations when possible
   - Use prepared statements for repeated queries
   - Optimize complex joins to minimize data transfer
4. **Schema Design**
   - Leverage Postgres-specific features supported by Neon
   - Use appropriate indexes for your query patterns
   - Consider Neon's performance characteristics for large tables
````

---

# Source: https://neon.com/llms/ai-ai-rules-neon-python-sdk.txt

# AI Rules: Neon Python SDK

> The document details the AI Rules for the Neon Python SDK, outlining how to implement and manage AI-driven functionalities within Neon applications using Python.

## Source

- [AI Rules: Neon Python SDK HTML](https://neon.com/docs/ai/ai-rules-neon-python-sdk): The original HTML version of this documentation

**Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions.
Related docs: - [Get started with Neon Python SDK](https://neon.com/docs/reference/python-sdk) - [Neon API Reference](https://neon.com/docs/reference/api-reference) Repository: - [Github Repo](https://github.com/neondatabase/neon-api-python) - [Python SDK Docs](https://neon-api-python.readthedocs.io/en/latest/) - [neon-python-sdk.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-python-sdk.mdc) ## How to use You can use these rules in two ways: ## Option 1: Copy from this page With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/neon-python-sdk.mdc` and they'll be automatically applied when working with matching files (`*.ts`, `*.tsx`). For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands). ## Option 2: Clone from repository If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules). Once added to your project, AI tools will automatically use these rules when working with Neon Python SDK code. You can also reference them explicitly in prompts. ## Rules ````md --- description: Use these rules to manage your Neon projects, branches, databases, and other resources programmatically using the Neon Python SDK. globs: *.py alwaysApply: false --- This file provides comprehensive rules and best practices for interacting with the Neon API using the `neon-api` Python SDK. Following these guidelines will enable an AI agent like you to build robust, efficient, and error-tolerant integrations with Neon. The SDK is a Pythonic wrapper around the Neon REST API and provides methods for managing all Neon resources, including projects, branches, endpoints, roles, and databases. ### Neon Core Concepts To effectively use the Neon Python SDK, it's essential to understand the hierarchy and purpose of its core resources. The following table provides a high-level overview of each concept. | Concept | Description | Analogy/Purpose | Key Relationship | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | | Organization | The highest-level container, managing billing, users, and multiple projects. | A GitHub Organization or a company's cloud account. | Contains one or more Projects. | | Project | The primary container that contains all related database resources for a single application or service. | A Git repository or a top-level folder for an application. | Lives within an Organization (or a personal account). Contains Branches. | | Branch | A lightweight, copy-on-write clone of a database's state at a specific point in time. | A `git branch`. Used for isolated development, testing, staging, or previews without duplicating storage costs. | Belongs to a Project. Contains its own set of Databases and Roles, cloned from its parent. | | Compute Endpoint | The actual running PostgreSQL instance that you connect to. It provides the CPU and RAM for processing queries. | The "server" or "engine" for your database. It can be started, suspended (scaled to zero), and resized. | Is attached to a single Branch. 
| Database | A logical container for your data (tables, schemas, views) within a branch. It follows standard PostgreSQL conventions. | A single database within a PostgreSQL server instance. | Exists within a Branch. A branch can have multiple databases. |
| Role | A PostgreSQL role used for authentication (logging in) and authorization (permissions to access data). | A database user account with a username and password. | Belongs to a Branch. Roles from a parent branch are copied to child branches upon creation. |
| API Key | A secret token used to authenticate requests to the Neon API. Keys have different scopes (Personal, Organization, Project-scoped). | A password for programmatic access, allowing you to manage all other Neon resources. | Authenticates actions on Organizations, Projects, Branches, etc. |
| Operation | An asynchronous action performed by the Neon control plane, such as creating a branch or starting a compute. | A background job or task. Its status can be polled to know when an action is complete. | Associated with a Project and often a specific Branch or Endpoint. Essential for scripting API calls. |

### Installation

To begin, install the SDK package into your project using pip:

```bash
pip install neon-api
```

### Authentication and Client Initialization

All interactions with the Neon API require an API key. Store your key securely as an environment variable (e.g., `NEON_API_KEY`). Initialize the API client in your code. The SDK recommends the `from_environ()` class method, which reads the key from the `NEON_API_KEY` environment variable so it never appears in your code.

```python
import os
from neon_api import NeonAPI

# Best practice: Load API key from environment variables
api_key = os.getenv("NEON_API_KEY")
if not api_key:
    raise ValueError("NEON_API_KEY environment variable is not set.")

neon = NeonAPI(api_key=api_key)

# Equivalent shortcut that reads NEON_API_KEY itself:
# neon = NeonAPI.from_environ()
```

## API Keys

Manage programmatic access to the Neon API.

### List API keys

Description: Retrieves a list of all API keys associated with your Neon account. The response includes metadata about each key but does not include the secret key token itself.

Method Signature: `neon.api_keys()`

Parameters: None.

Example Usage:

```python
api_keys = neon.api_keys()
print(f"API Keys: {api_keys}")
# Example output: [ApiKeysListResponseItem(id=1234, name='api-key-name', created_at='xx', created_by=ApiKeyCreatorData(id='xx', name='user_name', image=''), last_used_from_addr='', last_used_at=None), ... other API keys]
```

Key Points & Best Practices:
- Use this method to get the API key ID required for revoking a key.

### Create API key

Description: Creates a new API key with a specified name. The response includes the `id` and the secret `key` token.

Method Signature: `neon.api_key_create(json)`

Parameters:
- `key_name` (string, required, passed as keyword argument): A descriptive name for the API key.

Example Usage:

```python
new_key_info = neon.api_key_create(key_name='my-python-script-key')
print(f"ID: {new_key_info.id}")  # Can be used for revoking the key later
print(f"Key (store securely!): {new_key_info.key}")  # Example: "napi_xxxx"
```

Key Points & Best Practices:
- Store the key securely: the `key` token is only returned once, upon creation. Store it immediately in a secure location.
- Use descriptive names for keys to easily identify their purpose.

### Revoke API key

Description: Revokes an existing API key, permanently disabling it. This action cannot be undone.
Method Signature: `neon.api_key_revoke(api_key_id: str)`

Parameters:
- `api_key_id` (string, required): The unique identifier of the API key to revoke.

Example Usage:

```python
revoked_key_info = neon.api_key_revoke(1234)
```

Key Points & Best Practices:
- You must know the `api_key_id` to revoke a key. Use `api_keys()` if you don't have it.

## Operations

An operation is an action performed by the Neon Control Plane. It is crucial to monitor the status of long-running operations; a polling sketch follows the method reference below.

### List operations

Description: Retrieves a list of operations for a specified project.

Method Signature: `neon.operations(project_id: str, *, cursor: str = None, limit: int = None)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `limit` (int, optional): The number of operations to return.
- `cursor` (string, optional): The pagination cursor.

Example Usage:

```python
project_ops = neon.operations(project_id='your-project-id')
```

The response is an `OperationsResponse` object with the following schema:

```python
class OperationsResponse:
    operations: list[Operation]

class Operation:
    id: str
    project_id: str
    action: OperationAction
    status: OperationStatus
    failures_count: int
    created_at: str
    updated_at: str
    total_duration_ms: int
    branch_id: Optional[str] = None
    endpoint_id: Optional[str] = None
    error: Optional[str] = None
    retry_at: Optional[str] = None

class OperationAction(Enum):
    create_compute = 'create_compute'
    create_timeline = 'create_timeline'
    start_compute = 'start_compute'
    suspend_compute = 'suspend_compute'
    apply_config = 'apply_config'
    check_availability = 'check_availability'
    delete_timeline = 'delete_timeline'
    create_branch = 'create_branch'
    tenant_ignore = 'tenant_ignore'
    tenant_attach = 'tenant_attach'
    tenant_detach = 'tenant_detach'
    tenant_reattach = 'tenant_reattach'
    replace_safekeeper = 'replace_safekeeper'
    disable_maintenance = 'disable_maintenance'
    apply_storage_config = 'apply_storage_config'
    prepare_secondary_pageserver = 'prepare_secondary_pageserver'
    switch_pageserver = 'switch_pageserver'
    detach_parent_branch = 'detach_parent_branch'
    timeline_archive = 'timeline_archive'
    timeline_unarchive = 'timeline_unarchive'
    start_reserved_compute = 'start_reserved_compute'
    sync_dbs_and_roles_from_compute = 'sync_dbs_and_roles_from_compute'

class OperationStatus(Enum):
    scheduling = 'scheduling'
    running = 'running'
    finished = 'finished'
    failed = 'failed'
    error = 'error'
    cancelling = 'cancelling'
    cancelled = 'cancelled'
    skipped = 'skipped'
```

### Retrieve operation details

Description: Retrieves the status and details of a single operation by its ID.

Method Signature: `neon.operation(project_id: str, operation_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `operation_id` (string, required): The ID of the operation.

Example Usage:

```python
op_details = neon.operation(project_id='your-project-id', operation_id='op-id-123')
```

The response is an `OperationResponse` object with the following schema:

```python
class OperationResponse:
    operation: Operation
```
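Because control-plane actions are asynchronous, a common pattern is to poll `neon.operations()` until nothing is pending before issuing a dependent call. Below is a minimal sketch of that pattern using only the methods documented above; the `wait_for_operations` helper name, its timing constants, and the status normalization are illustrative assumptions, not part of the SDK.

```python
import time

def wait_for_operations(neon, project_id: str, timeout_s: int = 120) -> None:
    """Block until the project's recent operations have settled (illustrative helper)."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        ops = neon.operations(project_id=project_id, limit=10).operations
        # Normalize enum or plain-string statuses to a comparable string
        pending = [
            op for op in ops
            if getattr(op.status, 'value', op.status) in ('scheduling', 'running', 'cancelling')
        ]
        if not pending:
            return
        time.sleep(2)  # modest backoff between polls
    raise TimeoutError(f"Operations still pending after {timeout_s}s")
```

Call a helper like this after mutating calls such as `branch_create` or `endpoint_create` whenever the next step depends on their result.

## Projects

Manage your Neon projects.

### List projects

Description: Retrieves a list of all projects for your account or organization.

Method Signature: `neon.projects(*, shared: bool = False, cursor: str = None, limit: int = None)`

Parameters:
- `shared` (bool, optional): If `True`, retrieves projects shared with you. Defaults to `False`.
- `limit` (int, optional): The number of projects to return.
- `cursor` (string, optional): The pagination cursor.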
Example Usage:

```python
all_projects = neon.projects()
```

The response is a `ProjectsResponse` object with the following schema:

```python
class ProjectsResponse:
    projects: list[ProjectListItem]
    unavailable_project_ids: Optional[list[str]] = None

class ProjectListItem:
    id: str
    platform_id: str
    region_id: str
    name: str
    provisioner: str
    pg_version: int
    proxy_host: str
    branch_logical_size_limit: int
    branch_logical_size_limit_bytes: int
    store_passwords: bool
    active_time: int
    cpu_used_sec: int
    creation_source: str
    created_at: str
    updated_at: str
    owner_id: str
    default_endpoint_settings: Optional[DefaultEndpointSettings] = None
    settings: Optional[ProjectSettingsData] = None
    maintenance_starts_at: Optional[str] = None
    synthetic_storage_size: Optional[int] = None
    quota_reset_at: Optional[str] = None
    compute_last_active_at: Optional[str] = None
    org_id: Optional[str] = None

class DefaultEndpointSettings:
    pg_settings: Optional[dict[str, str]] = None
    pgbouncer_settings: Optional[dict[str, str]] = None
    autoscaling_limit_min_cu: Optional[float] = None
    autoscaling_limit_max_cu: Optional[float] = None
    suspend_timeout_seconds: Optional[int] = None

class ProjectSettingsData:
    quota: Optional[ProjectQuota] = None
    allowed_ips: Optional[AllowedIps] = None
    enable_logical_replication: Optional[bool] = None
    maintenance_window: Optional[MaintenanceWindow] = None
    block_public_connections: Optional[bool] = None
    block_vpc_connections: Optional[bool] = None

class ProjectQuota:
    active_time_seconds: Optional[int] = None
    compute_time_seconds: Optional[int] = None
    written_data_bytes: Optional[int] = None
    data_transfer_bytes: Optional[int] = None
    logical_size_bytes: Optional[int] = None

class AllowedIps:
    ips: Optional[list[str]] = None
    protected_branches_only: Optional[bool] = None

class MaintenanceWindow:
    weekdays: list[int]
    start_time: str
    end_time: str
```
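Since the SDK returns typed objects rather than raw dictionaries, fields are read as attributes. A quick sketch of consuming the list, using field names taken from the `ProjectListItem` schema above:

```python
all_projects = neon.projects(limit=50)
for project in all_projects.projects:
    print(f"{project.id}  {project.name}  (Postgres {project.pg_version}, {project.region_id})")
```

### Create project

Description: Creates a new Neon project.

Method Signature: `neon.project_create(json)`

Parameters:
- `project` (dict, required): A dictionary containing project settings.
- `name` (string, optional): A name for the project.
- `pg_version` (int, optional): Postgres version (e.g., 17).
- `region_id` (string, optional): Region ID (e.g., `aws-us-east-1`).
- Additional nested parameters shown in `ProjectCreateRequest` below.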
```python
class ProjectCreateRequest:
    project: Project1

class Project1:
    settings: Optional[ProjectSettingsData] = None
    name: Optional[str] = None
    branch: Optional[Branch] = None
    autoscaling_limit_min_cu: Optional[float] = None
    autoscaling_limit_max_cu: Optional[float] = None
    provisioner: Optional[str] = None
    region_id: Optional[str] = None
    default_endpoint_settings: Optional[DefaultEndpointSettings] = None
    pg_version: Optional[int] = None
    store_passwords: Optional[bool] = None
    history_retention_seconds: Optional[int] = None
    org_id: Optional[str] = None

# the nested schemas remain the same as defined earlier
```

Example Usage:

```python
new_project_response = neon.project_create(
    project={
        'name': 'my-new-python-project',
        'pg_version': 17
    }
)
```

The `new_project_response` object is a `ProjectResponse` object with the following schema:

```python
class ProjectResponse:
    project: Project

class Project:
    data_storage_bytes_hour: int
    data_transfer_bytes: int
    written_data_bytes: int
    compute_time_seconds: int
    active_time_seconds: int
    cpu_used_sec: int
    id: str
    platform_id: str
    region_id: str
    name: str
    provisioner: str
    pg_version: int
    proxy_host: str
    branch_logical_size_limit: int
    branch_logical_size_limit_bytes: int
    store_passwords: bool
    creation_source: str
    history_retention_seconds: int
    created_at: str
    updated_at: str
    consumption_period_start: str
    consumption_period_end: str
    owner_id: str
    default_endpoint_settings: Optional[DefaultEndpointSettings] = None
    settings: Optional[ProjectSettingsData] = None
    maintenance_starts_at: Optional[str] = None
    synthetic_storage_size: Optional[int] = None
    quota_reset_at: Optional[str] = None
    owner: Optional[ProjectOwnerData] = None
    compute_last_active_at: Optional[str] = None
    org_id: Optional[str] = None
```

### Retrieve project details

Description: Fetches detailed information for a single project by its ID.

Method Signature: `neon.project(project_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.

Example Usage:

```python
project_details = neon.project(project_id='your-project-id')
print(f"Project Details: {project_details}")
```

### Update project

Description: Updates the settings of an existing project.

Method Signature: `neon.project_update(project_id: str, json)`

Parameters (passed as keyword arguments):
- `project_id` (string, required): The ID of the project.
- `project` (dict, required): A dictionary containing the settings to update (e.g., `{'name': 'new-name'}`).

All available parameters for the dictionary `project` are as follows:
- `project`:
- `name` (string, optional): A new descriptive name for the project.
- `history_retention_seconds` (integer, optional): The duration in seconds (0 to 2,592,000) to retain project history.
- `default_endpoint_settings` (object, optional): New default settings for compute endpoints created in this project.
- `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum `0.25`.
- `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum `0.25`.
- `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before a compute is suspended. Ranges from -1 (never suspend) to 604800 (1 week). A value of `0` uses the default of 300 seconds (5 minutes).
- `settings` (object, optional): Project-wide settings to update.
- `quota` (object, optional): Per-project consumption quotas.
- `active_time_seconds` (integer, optional): Wall-clock time allowance for active computes.
- `compute_time_seconds` (integer, optional): CPU seconds allowance. - `written_data_bytes` (integer, optional): Data written allowance. - `data_transfer_bytes` (integer, optional): Data transferred allowance. - `logical_size_bytes` (integer, optional): Logical data size limit per branch. - `allowed_ips` (object, optional): Modifies the IP Allowlist. - `ips` (array of strings, optional): The new list of allowed IP addresses or CIDR ranges. - `protected_branches_only` (boolean, optional): If `true`, the IP allowlist applies only to protected branches. - `enable_logical_replication` (boolean, optional): Sets `wal_level=logical`. This is irreversible. - `maintenance_window` (object, optional): The time period for scheduled maintenance. - `weekdays` (array of integers, required if `maintenance_window` is set): Days of the week (1=Monday, 7=Sunday). - `start_time` (string, required if `maintenance_window` is set): Start time in "HH:MM" UTC format. - `end_time` (string, required if `maintenance_window` is set): End time in "HH:MM" UTC format. - `block_public_connections` (boolean, optional): If `true`, disallows connections from the public internet. - `block_vpc_connections` (boolean, optional): If `true`, disallows connections from VPC endpoints. - `audit_log_level` (string, optional): Sets the audit log level. Allowed values: `base`, `extended`, `full`. - `hipaa` (boolean, optional): Toggles HIPAA compliance settings. - `preload_libraries` (object, optional): Libraries to preload into compute instances. - `use_defaults` (boolean, optional): Toggles the use of default libraries. - `enabled_libraries` (array of strings, optional): A list of specific libraries to enable. Example Usage: ```python neon.project_update( project_id='your-project-id', project={ 'name': 'my-renamed-python-project', 'default_endpoint_settings': { 'autoscaling_limit_min_cu': 1, 'autoscaling_limit_max_cu': 1, } } ) ``` ### Delete project Description: Permanently deletes a project and all its resources. This action is irreversible. Method Signature: `neon.project_delete(project_id: str)` Parameters: - `project_id` (string, required): The ID of the project. Example Usage: ```python neon.project_delete(project_id='project-to-delete') ``` ### Retrieve connection URI Description: Gets a connection string for a specific database and role. Method Signature: `neon.connection_uri(project_id: str, database_name: str, role_name: str)` Parameters: - `project_id` (string, required): The ID of the project. - `database_name` (string, required): The name of the database. - `role_name` (string, required): The name of the role. Example Usage: ```python uri_info = neon.connection_uri( project_id='your-project-id', database_name='neondb', role_name='neondb_owner' ) print(f"Connection URI: {uri_info.uri}") ``` ## Branches Manage branches within a project. ### Create branch Description: Creates a new branch. Method Signature: `neon.branch_create(project_id: str, json)` Parameters (passed as keyword arguments): - `project_id` (string, required): The ID of the project. - Other optional fields include: - `branch` (object, optional): Specifies the properties of the new branch. - `name` (string, optional): A name for the branch (max 256 characters). If omitted, a name is auto-generated. - `parent_id` (string, optional): The ID of the parent branch. If omitted, the project's default branch is used. - `parent_lsn` (string, optional): A Log Sequence Number (LSN) from the parent branch to create the new branch from a specific point-in-time. 
- `parent_timestamp` (string, optional): An ISO 8601 timestamp (e.g., `2025-08-26T12:00:00Z`) to create the branch from a specific point-in-time.
- `protected` (boolean, optional): If `true`, the branch is created as a protected branch.
- `init_source` (string, optional): `parent-data` (default) copies schema and data. `schema-only` creates a root branch with only the schema from the specified parent.
- `expires_at` (string, optional): An RFC 3339 timestamp for when the branch should be automatically deleted (e.g., `2025-06-09T18:02:16Z`).
- `endpoints` (array of objects, optional): A list of compute endpoints to create and attach to the new branch.
- `type` (string, required): The endpoint type. Allowed values: `read_write`, `read_only`.
- `autoscaling_limit_min_cu` (number, optional): Minimum Compute Units (CU). Minimum `0.25`.
- `autoscaling_limit_max_cu` (number, optional): Maximum Compute Units (CU). Minimum `0.25`.
- `provisioner` (string, optional): Specify `k8s-neonvm` to enable Autoscaling. Allowed values: `k8s-pod`, `k8s-neonvm`.
- `suspend_timeout_seconds` (integer, optional): Inactivity period in seconds before suspension. Ranges from -1 (never) to 604800 (1 week).

Example Usage:

```python
new_branch_response = neon.branch_create(
    project_id='your-project-id',
    branch={'name': 'py-feature-branch'},
    endpoints=[
        {'type': 'read_write', 'autoscaling_limit_max_cu': 1}
    ]
)
```

The response is a `Branch1` object:

```python
class Branch1:
    id: str
    project_id: str
    name: str
    current_state: str
    state_changed_at: str
    creation_source: str
    default: bool
    protected: bool
    cpu_used_sec: int
    compute_time_seconds: int
    active_time_seconds: int
    written_data_bytes: int
    data_transfer_bytes: int
    created_at: str
    updated_at: str
    parent_id: Optional[str] = None
    parent_lsn: Optional[str] = None
    parent_timestamp: Optional[str] = None
    pending_state: Optional[str] = None
    logical_size: Optional[int] = None
    primary: Optional[bool] = None
    last_reset_at: Optional[str] = None
    created_by: Optional[CreatedBy] = None

class CreatedBy:
    name: Optional[str] = None
    image: Optional[str] = None
```

### List branches

Description: Retrieves a list of branches for a project.

Method Signature: `neon.branches(project_id: str, *, cursor: str = None, limit: int = None)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `cursor` (string, optional): Pagination cursor.
- `limit` (int, optional): Number of branches to return.

Example Usage:

```python
project_branches = neon.branches(project_id='your-project-id')
```

The response is a `BranchesResponse` object containing a list of `Branch1` objects:

```python
class BranchesResponse:
    branches: list[Branch1]
```

### Retrieve branch details

Description: Fetches detailed information for a single branch.

Method Signature: `neon.branch(project_id: str, branch_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.

Example Usage:

```python
branch_details = neon.branch(project_id='your-project-id', branch_id='br-your-branch-id')
```

The response is a `BranchResponse` object:

```python
class BranchResponse:
    branch: Branch1
```
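Branch creation is asynchronous: a new branch can report a pending state while the control plane finishes its operations. The sketch below polls `neon.branch()` until `current_state` settles; the `'ready'` value, helper name, and timing constants are assumptions for illustration, not documented SDK guarantees.

```python
import time

def wait_for_branch_ready(neon, project_id: str, branch_id: str, timeout_s: int = 60) -> None:
    """Poll branch details until the branch reports a settled state (illustrative helper)."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        branch = neon.branch(project_id=project_id, branch_id=branch_id).branch
        if branch.current_state == 'ready':  # assumed settled value; e.g., 'init' while provisioning
            return
        time.sleep(2)
    raise TimeoutError(f"Branch {branch_id} still not ready after {timeout_s}s")
```

For example, call `wait_for_branch_ready(neon, 'your-project-id', new_branch_response.id)` after the `branch_create` call above before connecting to the new branch.

### Update branch

Description: Updates the properties of a specified branch.

Method Signature: `neon.branch_update(project_id: str, branch_id: str, json)`

Parameters (passed as keyword arguments):
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.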
- `branch` (dict, required): A dictionary with properties to update (e.g., `{'name': 'new-name'}`).

Example Usage:

```python
updated_branch = neon.branch_update(
    project_id='your-project-id',
    branch_id='br-your-branch-id',
    branch={'name': 'updated-py-branch'}
)
print(f"Updated Branch: {updated_branch}")
```

### Delete branch

Description: Permanently deletes a branch.

Method Signature: `neon.branch_delete(project_id: str, branch_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch to delete.

Example Usage:

```python
neon.branch_delete(project_id='your-project-id', branch_id='br-branch-to-delete')
```

### List branch endpoints

Description: Retrieves a list of all compute endpoints in a project. The method does not accept a `branch_id`, so filter the response to get the endpoints attached to a specific branch.

Method Signature: `neon.endpoints(project_id: str)`

The response is an `EndpointsResponse` object:

```python
class EndpointsResponse:
    endpoints: list[Endpoint]

class Endpoint:
    host: str
    id: str
    project_id: str
    branch_id: str
    autoscaling_limit_min_cu: float
    autoscaling_limit_max_cu: float
    region_id: str
    type: EndpointType
    current_state: EndpointState
    settings: EndpointSettingsData
    pooler_enabled: bool
    pooler_mode: EndpointPoolerMode
    disabled: bool
    passwordless_access: bool
    creation_source: str
    created_at: str
    updated_at: str
    proxy_host: str
    suspend_timeout_seconds: int
    provisioner: str
    pending_state: Optional[EndpointState] = None
    last_active: Optional[str] = None
    compute_release_version: Optional[str] = None

class EndpointType(Enum):
    read_only = 'read_only'
    read_write = 'read_write'

class EndpointState(Enum):
    init = 'init'
    active = 'active'
    idle = 'idle'

class EndpointSettingsData:
    pg_settings: Optional[dict[str, str]] = None
    pgbouncer_settings: Optional[dict[str, str]] = None

class EndpointPoolerMode(Enum):
    transaction = 'transaction'
```

You can filter the result by `branch_id` in your code, e.g., `[e for e in response.endpoints if e.branch_id == 'br-your-branch-id']`.

### Create database

Description: Creates a new database within a branch.

Method Signature: `neon.database_create(project_id: str, branch_id: str, json)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `database` (dict, required): A dictionary with `{'name': 'db-name', 'owner_name': 'role-name'}`.

Example Usage:

```python
new_db_info = neon.database_create(
    project_id='your-project-id',
    branch_id='br-your-branch-id',
    database={'name': 'my-app-db', 'owner_name': 'neondb_owner'}
)
```

The response is a `DatabaseResponse` object:

```python
class DatabaseResponse:
    database: Database

class Database:
    id: int
    branch_id: str
    name: str
    owner_name: str
    created_at: str
    updated_at: str
```

### List databases

Description: Retrieves a list of all databases within a branch.

Method Signature: `neon.databases(project_id: str, branch_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.

Example Usage:

```python
branch_databases = neon.databases(project_id='your-project-id', branch_id='br-your-branch-id')
```

The response is a `DatabasesResponse` object with the following schema:

```python
class DatabasesResponse:
    databases: list[Database]
```

### Retrieve database details

Description: Retrieves details for a specific database. _Note: The Python SDK parameter is named `database_id`, but you should provide the `database_name`._

Method Signature: `neon.database(project_id: str, branch_id: str, database_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `database_id` (string, required): The name of the database.

Example Usage:

```python
db_details = neon.database(
    project_id='your-project-id',
    branch_id='br-your-branch-id',
    database_id='my-app-db'  # Use the database name here
)
print(f"Database Details: {db_details}")
```

The response is a `DatabaseResponse` object.

### Update database

Description: Updates the properties of a database.

Method Signature: `neon.database_update(project_id: str, branch_id: str, database_id: str, json)`

Parameters (passed as keyword arguments):
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `database_id` (string, required): The current name of the database.
- `database` (dict, required): A dictionary with properties to update (e.g., `{'name': 'new-name'}`).

Example Usage:

```python
updated_db = neon.database_update(
    project_id='your-project-id',
    branch_id='br-your-branch-id',
    database_id='my-app-db',  # Current database name
    database={'name': 'my-renamed-db'}
)
print(f"Updated Database: {updated_db}")
```

### Delete database

Description: Deletes a database from a branch.

Method Signature: `neon.database_delete(project_id: str, branch_id: str, database_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `database_id` (string, required): The name of the database to delete.

Example Usage:

```python
neon.database_delete(project_id='your-project-id', branch_id='br-your-branch-id', database_id='my-renamed-db')
```

### List roles

Description: Retrieves a list of all Postgres roles from a branch.

Method Signature: `neon.roles(project_id: str, branch_id: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.

Example Usage:

```python
branch_roles = neon.roles(project_id='your-project-id', branch_id='br-your-branch-id')
```

The response is a `RolesResponse` object:

```python
class RolesResponse:
    roles: list[Role]

class Role:
    branch_id: str
    name: str
    created_at: str
    updated_at: str
    password: Optional[str] = None
    protected: Optional[bool] = None
```

### Create role

Description: Creates a new Postgres role.

Method Signature: `neon.role_create(project_id: str, branch_id: str, role_name: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `role_name` (string, required): The name of the role to create.

Example Usage:

```python
new_role_info = neon.role_create(
    project_id='your-project-id',
    branch_id='br-your-branch-id',
    role_name='py_app_user'
)
print(f"Role created: {new_role_info.role}")
```

### Retrieve role details

Description: Retrieves details for a specific role.

Method Signature: `neon.role(project_id: str, branch_id: str, role_name: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `role_name` (string, required): The name of the role to retrieve.

Example Usage:

```python
role_details = neon.role(
    project_id='your-project-id',
    branch_id='br-your-branch-id',
    role_name='py_app_user'
)
# Example response: RoleResponse(role=Role(branch_id='br-your-branch-id', name='py_app_user', created_at='xx', updated_at='xx', password=None, protected=False))
```

### Delete role

Description: Deletes a Postgres role.
Method Signature: `neon.role_delete(project_id: str, branch_id: str, role_name: str)`

Parameters:
- `project_id` (string, required): The ID of the project.
- `branch_id` (string, required): The ID of the branch.
- `role_name` (string, required): The name of the role to delete.

Example Usage:

```python
neon.role_delete(project_id='your-project-id', branch_id='br-your-branch-id', role_name='py_app_user')
```

## Endpoints

Manage compute endpoints.

### Create compute endpoint

Description: Creates a new compute endpoint.

Method Signature: `neon.endpoint_create(project_id: str, json)`

Parameters (passed as keyword arguments):
- `project_id` (string, required): The ID of the project.
- `endpoint` (dict, required):
- `branch_id` (string, required): The ID of the branch to associate the endpoint with.
- `type` (string, required): `read_write` or `read_only`.
- `autoscaling_limit_min_cu?` (number): Minimum Compute Units.
- `autoscaling_limit_max_cu?` (number): Maximum Compute Units.
- `suspend_timeout_seconds?` (integer): Inactivity seconds before suspension.

Example Usage:

```python
new_endpoint_info = neon.endpoint_create(
    project_id='your-project-id',
    endpoint={
        'branch_id': 'br-your-branch-id',
        'type': 'read_only'
    }
)
```

### Retrieve compute endpoint details

Description: Fetches details for a single compute endpoint.

Method Signature: `neon.endpoint(project_id: str, endpoint_id: str)`

Example Usage:

```python
endpoint_details = neon.endpoint(project_id='your-project-id', endpoint_id='ep-your-endpoint-id')
```

The response is an `EndpointResponse` object:

```python
class EndpointResponse:
    endpoint: Endpoint
```

### Update compute endpoint

Description: Updates the configuration of an endpoint.

Method Signature: `neon.endpoint_update(project_id: str, endpoint_id: str, json)`

Parameters (passed as keyword arguments):
- `project_id` (string, required): The ID of the project.
- `endpoint_id` (string, required): The ID of the endpoint.
- `endpoint` (dict, required):
- `autoscaling_limit_min_cu?` (number): New minimum Compute Units.
- `autoscaling_limit_max_cu?` (number): New maximum Compute Units.
- `suspend_timeout_seconds?` (integer): New suspension timeout.
- `disabled?` (boolean): Set to `true` to disable connections or `false` to enable them.

Example Usage:

```python
updated_endpoint = neon.endpoint_update(
    project_id='your-project-id',
    endpoint_id='ep-your-endpoint-id',
    endpoint={'autoscaling_limit_max_cu': 2}
)
print(f"Updated endpoint: {updated_endpoint}")
```

### Delete compute endpoint

Description: Deletes a compute endpoint.

Method Signature: `neon.endpoint_delete(project_id: str, endpoint_id: str)`

Example Usage:

```python
neon.endpoint_delete(project_id='your-project-id', endpoint_id='ep-to-delete')
```

### Start compute endpoint

Description: Manually starts an `idle` compute endpoint.

Method Signature: `neon.endpoint_start(project_id: str, endpoint_id: str)`

Example Usage:

```python
neon.endpoint_start(project_id='your-project-id', endpoint_id='ep-your-endpoint-id')
```

### Suspend compute endpoint

Description: Manually suspends an `active` compute endpoint.
Method Signature: `neon.endpoint_suspend(project_id: str, endpoint_id: str)` Example Usage: ```python neon.endpoint_suspend(project_id='your-project-id', endpoint_id='ep-your-endpoint-id') ``` ```` --- # Source: https://neon.com/llms/ai-ai-rules-neon-serverless.txt # AI Rules Neon Serverless Driver > The document outlines the AI Rules for the Neon Serverless Driver, detailing how AI-driven optimizations are implemented to enhance database performance and efficiency within Neon's serverless architecture. ## Source - [AI Rules Neon Serverless Driver HTML](https://neon.com/docs/ai/ai-rules-neon-serverless): The original HTML version of this documentation **Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions. Related docs: - [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver) Repository: - [README](https://github.com/neondatabase-labs/ai-rules#readme) - [neon-serverless.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-serverless.mdc) ## How to use You can use these rules in two ways: ## Option 1: Copy from this page With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/neon-serverless.mdc` and they'll be automatically applied when working with matching files (`*.ts`, `*.tsx`). For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands). ## Option 2: Clone from repository If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules). Once added to your project, AI tools will automatically use these rules when working with Neon Serverless code. You can also reference them explicitly in prompts. ## Rules ````md --- description: Use these rules to query your Neon database using the Neon Serverless driver globs: *.tsx, *.ts alwaysApply: false --- # Neon Serverless Driver Guidelines ## Overview This guide provides specific patterns and best practices for connecting to Neon databases in serverless environments using the `@neondatabase/serverless` driver. The driver connects over **HTTP** for fast, single queries or **WebSockets** for `node-postgres` compatibility and interactive transactions. Follow these guidelines to ensure efficient connections and optimal performance. ## Installation Install the Neon Serverless driver with the correct package name: ```bash # Using npm npm install @neondatabase/serverless # Using JSR bunx jsr add @neon/serverless ``` **Note:** The driver version 1.0.0 and higher requires **Node.js v19 or later**. For projects that depend on `pg` but want to use Neon's WebSocket-based connection pool: ```json "dependencies": { "pg": "npm:@neondatabase/serverless@^0.10.4" }, "overrides": { "pg": "npm:@neondatabase/serverless@^0.10.4" } ``` Avoid incorrect package names like `neon-serverless` or `pg-neon`. ## Connection String Always use environment variables for database connection strings to avoid exposing credentials. ```typescript // For HTTP queries import { neon } from '@neondatabase/serverless'; const sql = neon(process.env.DATABASE_URL!); // For WebSocket connections import { Pool } from '@neondatabase/serverless'; const pool = new Pool({ connectionString: process.env.DATABASE_URL! 
});
```

Never hardcode credentials in your code:

```typescript
// AVOID: Hardcoded credentials
const sql = neon('postgres://username:password@host.neon.tech/neondb');
```

## Querying with the `neon` function (HTTP)

The `neon()` function is ideal for simple, "one-shot" queries in serverless and edge environments as it uses HTTP `fetch` and is the fastest method for single queries.

### Parameterized Queries

Use tagged template literals for safe parameter interpolation. This is the primary defense against SQL injection.

```typescript
const [post] = await sql`SELECT * FROM posts WHERE id = ${postId}`;
```

For manually constructed queries, use the `.query()` method with a parameter array:

```typescript
const [post] = await sql.query('SELECT * FROM posts WHERE id = $1', [postId]);
```

**Do not** concatenate user input directly into SQL strings:

```typescript
// AVOID: SQL Injection Risk
const [post] = await sql('SELECT * FROM posts WHERE id = ' + postId);
```

### Configuration Options

You can configure the `neon()` function to change the result format.

```typescript
// Return rows as arrays instead of objects
const sqlArrayMode = neon(process.env.DATABASE_URL!, { arrayMode: true });
const rows = await sqlArrayMode`SELECT id, title FROM posts`;
// rows -> [[1, "First Post"], [2, "Second Post"]]

// Get full results including row count and field metadata
const sqlFull = neon(process.env.DATABASE_URL!, { fullResults: true });
const result = await sqlFull`SELECT * FROM posts LIMIT 1`;
/*
result -> {
  rows: [{ id: 1, title: 'First Post', ... }],
  fields: [...],
  rowCount: 1,
  ...
}
*/
```

## Querying with `Pool` and `Client` (WebSockets)

Use the `Pool` and `Client` classes for `node-postgres` compatibility, interactive transactions, or session support. This method uses WebSockets.

### WebSocket Configuration

In Node.js versions 21 and earlier, a WebSocket implementation must be provided.

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless';
import ws from 'ws';

// This is only required for Node.js < v22
neonConfig.webSocketConstructor = ws;

const pool = new Pool({ connectionString: process.env.DATABASE_URL! });
// ... use pool
```

### Serverless Lifecycle Management

When using a `Pool` in a serverless function, the connection must be created, used, and closed within the same invocation.

```typescript
// Example for Vercel Edge Functions
export default async (req: Request, ctx: ExecutionContext) => {
  // Create pool inside the request handler
  const pool = new Pool({ connectionString: process.env.DATABASE_URL! });

  try {
    const { rows } = await pool.query('SELECT * FROM users');
    return new Response(JSON.stringify(rows));
  } catch (err) {
    console.error(err);
    return new Response('Database error', { status: 500 });
  } finally {
    // End the pool connection before the function execution completes
    ctx.waitUntil(pool.end());
  }
}
```

Avoid creating a global `Pool` instance outside the handler, as it may not be closed properly, leading to exhausted connections.

## Handling Transactions

### HTTP Transactions (`sql.transaction()`)

For running multiple queries in a single, non-interactive transaction over HTTP, use `sql.transaction()`. This is efficient and recommended for atomicity without the overhead of a persistent WebSocket.
```typescript
const [newUser, newProfile] = await sql.transaction([
  sql`INSERT INTO users(name) VALUES(${name}) RETURNING id`,
  // Note: queries in a batched transaction cannot read each other's results,
  // so `userId` must already be known before the batch is sent.
  sql`INSERT INTO profiles(user_id, bio) VALUES(${userId}, ${bio})`
], {
  // Optional transaction settings
  isolationLevel: 'ReadCommitted',
  readOnly: false
});
```

### Interactive Transactions (`Client`)

For complex transactions that require conditional logic, use a `Client` from a `Pool`.

```typescript
const pool = new Pool({ connectionString: process.env.DATABASE_URL! });
const client = await pool.connect();

try {
  await client.query('BEGIN');
  const { rows: [{ id }] } = await client.query(
    'INSERT INTO users(name) VALUES($1) RETURNING id',
    [name]
  );
  await client.query(
    'INSERT INTO profiles(user_id, bio) VALUES($1, $2)',
    [id, bio]
  );
  await client.query('COMMIT');
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release();
  await pool.end(); // also close pool if no longer needed
}
```

Always include proper error handling and rollback mechanisms.

## Environment-Specific Optimizations

Apply environment-specific optimizations for best performance:

```javascript
// For Vercel Edge Functions, specify nearest region
export const config = {
  runtime: 'edge',
  regions: ['iad1'], // Region nearest to your Neon DB
};

// For Cloudflare Workers, consider using Hyperdrive instead
// https://neon.tech/blog/hyperdrive-neon-faq
```

## Library Integration (ORMs)

Integrate with popular ORMs by providing the appropriate driver interface.

### Drizzle ORM

Drizzle supports both HTTP and WebSocket clients. Choose the one that fits your needs:

- **With `neon()` (HTTP):** Use `drizzle-orm/neon-http`. Best for serverless/edge.
- **With `Pool` (WebSocket):** Use `drizzle-orm/neon-serverless`.

```typescript
import { neon, neonConfig, Pool } from '@neondatabase/serverless';
import { drizzle as drizzleWs } from 'drizzle-orm/neon-serverless';
import { drizzle as drizzleHttp } from 'drizzle-orm/neon-http';
import ws from 'ws';

const connectionString = process.env.DATABASE_URL!;

neonConfig.webSocketConstructor = ws; // Only required for Node.js < v22

const sql = neon(connectionString);
const pool = new Pool({ connectionString });

export const drizzleClientHttp = drizzleHttp({ client: sql });
export const drizzleClientWs = drizzleWs({ client: pool });
```

### Prisma

Prisma supports both HTTP and WebSocket clients. Choose the one that fits your needs:

```typescript
import { neonConfig } from '@neondatabase/serverless';
import { PrismaNeon, PrismaNeonHTTP } from '@prisma/adapter-neon';
import { PrismaClient } from '@prisma/client';
import ws from 'ws';

const connectionString = process.env.DATABASE_URL;

neonConfig.webSocketConstructor = ws;

const adapterHttp = new PrismaNeonHTTP(connectionString!, {});
export const prismaClientHttp = new PrismaClient({ adapter: adapterHttp });

const adapterWs = new PrismaNeon({ connectionString });
export const prismaClientWs = new PrismaClient({ adapter: adapterWs });
```

### Kysely

Use the `PostgresDialect` with a `Pool` instance.

```typescript
import { Pool } from '@neondatabase/serverless';
import { Kysely, PostgresDialect } from 'kysely';

const dialect = new PostgresDialect({
  pool: new Pool({ connectionString: process.env.DATABASE_URL })
});

const db = new Kysely({
  dialect,
  // schema definitions...
});
```

**NOTE:** Do not pass the `neon()` function to ORMs that expect a `node-postgres` compatible `Pool`. Use the appropriate adapter or dialect with a `new Pool()`.
## Error Handling Implement proper error handling for database operations: ```javascript // Pool error handling const pool = new Pool({ connectionString: process.env.DATABASE_URL }); pool.on('error', (err) => { console.error('Unexpected error on idle client', err); process.exit(-1); }); // Query error handling try { const [post] = await sql`SELECT * FROM posts WHERE id = ${postId}`; if (!post) { return new Response('Not found', { status: 404 }); } } catch (err) { console.error('Database query failed:', err); return new Response('Server error', { status: 500 }); } ``` ```` --- # Source: https://neon.com/llms/ai-ai-rules-neon-toolkit.txt # AI Rules: The @neondatabase/toolkit > The document "AI Rules: The @neondatabase/toolkit" outlines guidelines and best practices for utilizing AI tools within the Neon database environment, ensuring efficient and effective integration of AI functionalities. ## Source - [AI Rules: The @neondatabase/toolkit HTML](https://neon.com/docs/ai/ai-rules-neon-toolkit): The original HTML version of this documentation **Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions. Related docs: - [Get started with the @neondatabase/toolkit](https://neon.com/docs/reference/neondatabase-toolkit) - [Neon TypeScript SDK Reference](https://neon.com/docs/reference/typescript-sdk) - [Neon API Reference](https://neon.com/docs/reference/api-reference) Repository: - [`@neondatabase/toolkit` on npm](https://www.npmjs.com/package/@neondatabase/toolkit) - [`@neon/toolkit` on JSR](https://jsr.io/@neon/toolkit) - [neon-toolkit.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-toolkit.mdc) ## How to use You can use these rules in two ways: ## Option 1: Copy from this page With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/neon-toolkit.mdc` and they'll be automatically applied when working with matching files (`*.ts`, `*.tsx`). For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands). ## Option 2: Clone from repository If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules). Once added to your project, AI tools will automatically use these rules when working with @neondatabase/toolkit code. You can also reference them explicitly in prompts. ## Rules ````md --- description: Use these rules to programmatically create, query, and destroy ephemeral Neon Postgres databases. Ideal for quick prototyping, testing, and temporary workflows. globs: *.ts, *.tsx alwaysApply: false --- ## Neon Toolkit integration guidelines This file provides comprehensive rules and best practices for interacting with Neon using the `@neondatabase/toolkit`. This toolkit is a terse client designed specifically for scenarios where a temporary, fully-functional Postgres database is needed quickly. As an AI agent, this is your primary tool for tasks requiring database interaction without manual setup. The toolkit bundles the `@neondatabase/api-client` and the `@neondatabase/serverless` driver to streamline the entire lifecycle of a database: creation, querying, and deletion. 
### Neon Core Concepts To use the toolkit effectively, you must understand Neon's resource hierarchy. While the toolkit simplifies this, knowing the concepts is crucial if you need to use the underlying `apiClient`. | Concept | Description | Analogy/Purpose | Key Relationship | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | | Organization | The highest-level container, managing billing, users, and multiple projects. | A GitHub Organization or a company's cloud account. | Contains one or more Projects. | | Project | The primary container that contains all related database resources for a single application or service. | A Git repository or a top-level folder for an application. | Lives within an Organization (or a personal account). Contains Branches. | | Branch | A lightweight, copy-on-write clone of a database's state at a specific point in time. | A `git branch`. Used for isolated development, testing, staging, or previews without duplicating storage costs. | Belongs to a Project. Contains its own set of Databases and Roles, cloned from its parent. | | Compute Endpoint | The actual running PostgreSQL instance that you connect to. It provides the CPU and RAM for processing queries. | The "server" or "engine" for your database. It can be started, suspended (scaled to zero), and resized. | Is attached to a single Branch. Your connection string points to a Compute Endpoint's hostname. | | Database | A logical container for your data (tables, schemas, views) within a branch. It follows standard PostgreSQL conventions. | A single database within a PostgreSQL server instance. | Exists within a Branch. A branch can have multiple databases. | | Role | A PostgreSQL role used for authentication (logging in) and authorization (permissions to access data). | A database user account with a username and password. | Belongs to a Branch. Roles from a parent branch are copied to child branches upon creation. | | API Key | A secret token used to authenticate requests to the Neon API. Keys have different scopes (Personal, Organization, Project-scoped). | A password for programmatic access, allowing you to manage all other Neon resources. | Authenticates actions on Organizations, Projects, Branches, etc. | | Operation | An asynchronous action performed by the Neon control plane, such as creating a branch or starting a compute. | A background job or task. Its status can be polled to know when an action is complete. | Associated with a Project and often a specific Branch or Endpoint. Essential for scripting API calls. | ### Installation To begin, install the toolkit package into the user's project: ```bash # Using npm npm install @neondatabase/toolkit # Using JSR with Deno deno add jsr:@neon/toolkit ``` ### Authentication and client initialization All interactions require a Neon API key. This key must be provided by the user, typically as an environment variable (`NEON_API_KEY`). Initialize the toolkit in your code. This is the entry point for all toolkit operations. 
```typescript
import { NeonToolkit } from '@neondatabase/toolkit';

// Best practice: Load API key from environment variables
const apiKey = process.env.NEON_API_KEY;
if (!apiKey) {
  throw new Error('NEON_API_KEY environment variable is not set.');
}

const toolkit = new NeonToolkit(apiKey);
```

### The core toolkit workflow

The toolkit is designed around a simple, three-step lifecycle: **Create -> Query -> Delete**.

#### 1. Create a project

Description: Creates a new, fully-configured Neon project. This single asynchronous call handles project creation, default branch setup, and returns an object containing everything needed for the next steps, including the database connection string.

Method Signature: `toolkit.createProject(projectOptions?: ProjectCreateRequest['project']): Promise<ToolkitProject>`

Parameters:
- `projectOptions` (object, optional): An object to customize the new project.
- `name` (string): A descriptive name for the project.
- `pg_version` (number): The major Postgres version (e.g., `16`).
- `region_id` (string): The cloud region for the project (e.g., `aws-us-east-1`).

Returns: A `Promise` that resolves to a `ToolkitProject` object. This object contains:
- `project`: Details of the created project.
- `connectionURIs`: An array of connection strings. Use `connectionURIs[0].connection_uri`.
- `roles`, `databases`, `branches`, `endpoints`: Information about the default resources created.

Example Usage:

```typescript
// Create a project with default settings
const project = await toolkit.createProject();
console.log(`Project created. Connection URI: ${project.connectionURIs[0].connection_uri}`);

// Create a customized project
const customizedProject = await toolkit.createProject({
  name: 'ai-agent-database',
  pg_version: 16,
});
console.log(`Project "${customizedProject.project.name}" created.`);
```

#### 2. Execute SQL queries

Description: Runs SQL queries against the created project's database. This method uses the Neon Serverless Driver, which automatically handles the connection using the provided `ToolkitProject` object.

Method Signature: `toolkit.sql(project: ToolkitProject, query: string): Promise<any>`

Parameters:
- `project` (`ToolkitProject`, required): The project object returned by `toolkit.createProject()`.
- `query` (string, required): The SQL string to execute.

Returns: A `Promise` that resolves to the query result, typically an array of row objects for `SELECT` statements.

Example usage:

```typescript
// `project` is the object from the previous step

// DDL Statement (schema modification)
await toolkit.sql(
  project,
  `CREATE TABLE IF NOT EXISTS tasks (id SERIAL PRIMARY KEY, description TEXT, completed BOOLEAN DEFAULT FALSE);`
);

// DML Statement (data insertion)
await toolkit.sql(project, `INSERT INTO tasks (description) VALUES ('Analyze user feedback');`);

// DQL Statement (data retrieval)
const tasks = await toolkit.sql(project, `SELECT * FROM tasks WHERE completed = FALSE;`);
console.log('Incomplete tasks:', tasks);
// Output: [ { id: 1, description: 'Analyze user feedback', completed: false } ]
```

#### 3. Delete the project

Description: Permanently deletes the Neon project and all of its associated resources (data, branches, endpoints). This is the crucial cleanup step for ephemeral workflows. **This action is irreversible.**

Method Signature: `toolkit.deleteProject(project: ToolkitProject): Promise<void>`

Parameters:
- `project` (`ToolkitProject`, required): The project object returned by `toolkit.createProject()`.
Example Usage: ```typescript // `project` is the object from the create step await toolkit.deleteProject(project); console.log('Project has been successfully deleted.'); ``` ### Complete lifecycle example Always structure your logic to ensure the `deleteProject` call is made, even if errors occur during the SQL execution phase. Using a `try...finally` block is a robust pattern for this. ```typescript import { NeonToolkit } from '@neondatabase/toolkit'; async function runTemporaryDatabaseTask() { const apiKey = process.env.NEON_API_KEY; if (!apiKey) { throw new Error('NEON_API_KEY is not set.'); } const toolkit = new NeonToolkit(apiKey); let project; try { // 1. Create console.log('Creating temporary project...'); project = await toolkit.createProject({ name: 'ephemeral-task-runner' }); console.log(`Project created with ID: ${project.project.id}`); // 2. Query console.log('Setting up schema and inserting data...'); await toolkit.sql( project, `CREATE TABLE logs (message TEXT, timestamp TIMESTAMPTZ DEFAULT NOW());` ); await toolkit.sql(project, `INSERT INTO logs (message) VALUES ('Task started');`); const logs = await toolkit.sql(project, `SELECT message FROM logs;`); console.log('Retrieved logs:', logs); } catch (error) { console.error('An error occurred during the database task:', error); } finally { // 3. Delete if (project) { console.log('Cleaning up and deleting project...'); await toolkit.deleteProject(project); console.log('Project deleted.'); } } } runTemporaryDatabaseTask(); ``` ### Accessing the underlying API Client For advanced operations beyond the toolkit's scope (e.g., creating a new branch, managing roles, listing all projects), you can access the full Neon TypeScript SDK instance via the `apiClient` property. Use this when the user asks for an operation that the toolkit does not directly expose. ```typescript const apiClient = toolkit.apiClient; // Now you can use the full power of the Neon API // Example: List all projects in the user's account const { data } = await apiClient.listProjects({}); console.log( 'All projects in your account:', data.projects.map((p) => p.name) ); ``` ```` --- # Source: https://neon.com/llms/ai-ai-rules-neon-typescript-sdk.txt # AI Rules: Neon TypeScript SDK > The document outlines the AI Rules for the Neon TypeScript SDK, detailing how to implement and manage AI-driven functionalities within Neon's database environment using TypeScript. ## Source - [AI Rules: Neon TypeScript SDK HTML](https://neon.com/docs/ai/ai-rules-neon-typescript-sdk): The original HTML version of this documentation **Note** AI Rules are in Beta: AI Rules are currently in beta. We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions. Related docs: - [Get started with Neon Typescript SDK](https://neon.com/docs/reference/typescript-sdk) - [Neon API Reference](https://neon.com/docs/reference/api-reference) Repository: - [`@neondatabase/api-client` on npm](https://www.npmjs.com/package/@neondatabase/api-client) - [neon-typescript-sdk.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-typescript-sdk.mdc) ## How to use You can use these rules in two ways: ## Option 1: Copy from this page With Cursor, save the [rules](https://docs.cursor.com/context/rules-for-ai#project-rules-recommended) to `.cursor/rules/neon-typescript-sdk.mdc` and they'll be automatically applied when working with matching files (`*.ts`, `*.tsx`). 
For other AI tools, you can include these rules as context when chatting with your AI assistant - check your tool's documentation for the specific method (like using "Include file" or context commands). ## Option 2: Clone from repository If you prefer, you can clone or download the rules directly from our [AI Rules repository](https://github.com/neondatabase-labs/ai-rules). Once added to your project, AI tools will automatically use these rules when working with Neon TypeScript SDK code. You can also reference them explicitly in prompts. ## Rules ````md --- description: Use these rules to manage your Neon projects, branches, databases, and other resources programmatically using the Neon TypeScript SDK. globs: *.ts, *.tsx alwaysApply: false --- ## Neon TypeScript SDK integration guidelines This file provides comprehensive rules and best practices for interacting with the Neon API using the `@neondatabase/api-client` TypeScript SDK. Following these guidelines will enable an AI Agent like you to build robust, efficient, and error-tolerant integrations with Neon. The SDK is a wrapper around the Neon REST API and provides typed methods for managing all Neon resources, including projects, branches, endpoints, roles, and databases. ### Neon Core Concepts To effectively use the Neon Typescript SDK, it's essential to understand the hierarchy and purpose of its core resources. The following table provides a high-level overview of each concept. | Concept | Description | Analogy/Purpose | Key Relationship | | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | | Organization | The highest-level container, managing billing, users, and multiple projects. | A GitHub Organization or a company's cloud account. | Contains one or more Projects. | | Project | The primary container that contains all related database resources for a single application or service. | A Git repository or a top-level folder for an application. | Lives within an Organization (or a personal account). Contains Branches. | | Branch | A lightweight, copy-on-write clone of a database's state at a specific point in time. | A `git branch`. Used for isolated development, testing, staging, or previews without duplicating storage costs. | Belongs to a Project. Contains its own set of Databases and Roles, cloned from its parent. | | Compute Endpoint| The actual running PostgreSQL instance that you connect to. It provides the CPU and RAM for processing queries. | The "server" or "engine" for your database. It can be started, suspended (scaled to zero), and resized. | Is attached to a single Branch. Your connection string points to a Compute Endpoint's hostname. | | Database | A logical container for your data (tables, schemas, views) within a branch. It follows standard PostgreSQL conventions. | A single database within a PostgreSQL server instance. | Exists within a Branch. A branch can have multiple databases. | | Role | A PostgreSQL role used for authentication (logging in) and authorization (permissions to access data). | A database user account with a username and password. | Belongs to a Branch. Roles from a parent branch are copied to child branches upon creation. 
| | API Key | A secret token used to authenticate requests to the Neon API. Keys have different scopes (Personal, Organization, Project-scoped). | A password for programmatic access, allowing you to manage all other Neon resources. | Authenticates actions on Organizations, Projects, Branches, etc. | | Operation | An asynchronous action performed by the Neon control plane, such as creating a branch or starting a compute. | A background job or task. Its status can be polled to know when an action is complete. | Associated with a Project and often a specific Branch or Endpoint. Essential for scripting API calls. | ### Installation To begin, install the SDK package into your project: ```bash npm install @neondatabase/api-client ``` ### Understanding API Key Types When performing actions via the API, you must select the correct type of API key based on the required scope and permissions. There are three types: 1. Personal API Key - Scope: Accesses all projects that the user who created the key is a member of. - Permissions: The key has the same permissions as its owner. If the user's access is revoked from an organization, the key loses access too. - Best For: Individual use, scripting, and tasks tied to a specific user's permissions. - Created By: Any user. 2. Organization API Key - Scope: Accesses all projects and resources within an entire organization. - Permissions: Has admin-level access across the organization, independent of any single user. It remains valid even if the creator leaves the organization. - Best For: CI/CD pipelines, organization-wide automation, and service accounts that need broad access. - Created By: Organization administrators only. 3. Project-scoped API Key - Scope: Access is strictly limited to a single, specified project. - Permissions: Cannot perform organization-level actions (like creating new projects) or delete the project it is scoped to. This is the most secure and limited key type. - Best For: Project-specific integrations, third-party services, or automation that should be isolated to one project. - Created By: Any organization member. ### Authentication and Client Initialization All interactions with the Neon API require an API key. Store your key securely as an environment variable (e.g., `NEON_API_KEY`). Initialize the API client in your code. This client instance will be used for all subsequent API calls. ```typescript import { createApiClient } from '@neondatabase/api-client'; // Best practice: Load API key from environment variables const apiKey = process.env.NEON_API_KEY; if (!apiKey) { throw new Error('NEON_API_KEY environment variable is not set.'); } const apiClient = createApiClient({ apiKey }); ``` ## API Keys Manage programmatic access to the Neon API. ### List API keys Description: Retrieves a list of all API keys associated with your Neon account. The response includes metadata about each key but does not include the secret key token itself for security reasons. Method Signature: `apiClient.listApiKeys()` Parameters: None. Example Usage: ```typescript const response = await apiClient.listApiKeys(); console.log('API Keys:', response.data); // Example output: [{id: 1234, name: "my-api-key", created_at: "xx", created_by: { id: "xx", name: "USER_NAME"},last_used_at: "xx",last_used_from_addr: "IP_ADDRESS"}] ``` Key Points & Best Practices: - Use this method to get the `key_id` required for revoking a key. ### Create API key Description: Creates a new API key with a specified name. The response includes the `id` and the secret `key` token. 
Method Signature: `apiClient.createApiKey(data: ApiKeyCreateRequest)` Parameters: - `data` (`ApiKeyCreateRequest`): - `key_name` (string, required): A descriptive name for the API key. Example Usage: ```typescript const response = await apiClient.createApiKey({ key_name: 'my-automation-script-key' }); console.log('ID:', response.data.id); // can be used for revoking the key later console.log('Key (store securely!):', response.data.key); // Example: "napi_xxxx" ``` Key Points & Best Practices: - Store the Key Securely: The `key` token is only returned once upon creation. Store it immediately in a secure location like a secret manager or an `.env` file. You cannot retrieve it later. - Use descriptive names for keys to easily identify their purpose. ### Revoke API key Description: Revokes an existing API key, permanently disabling it. This action cannot be undone. Method Signature: `apiClient.revokeApiKey(keyId: number)` Parameters: - `keyId` (number, required): The unique identifier of the API key to revoke. Example Usage: ```typescript const response = await apiClient.revokeApiKey(1234); console.log(`API key with ID ${response.data.id} has been revoked.`); ``` Key Points & Best Practices: - Revoke keys that are no longer in use or may have been compromised. - You must know the `keyId` to revoke a key. Use `listApiKeys` if you don't have it. ## Operations An operation is an action performed by the Neon Control Plane (e.g., `create_branch`, `start_compute`). When using the SDK programmatically, it is crucial to monitor the status of long-running operations to ensure one has completed before starting another that depends on it. Operations older than 6 months may be deleted from Neon's systems. ### List operations Description: Retrieves a list of operations for a specified project. Method Signature: `apiClient.listProjectOperations(params: ListProjectOperationsParams)` Parameters: - `params` (`ListProjectOperationsParams`): - `projectId` (string, required): The ID of the project. - `limit?` (number): The number of operations to return. - `cursor?` (string): The pagination cursor. Example Usage: ```typescript const projectId = 'your-project-id'; const response = await apiClient.listProjectOperations({ projectId }); console.log(`Operations for project ${projectId}:`, response.data.operations); // Example output: [{ id: "xx", project_id: "your-project-id", branch_id: "xxx", endpoint_id: "xxx", action: "start_compute", status: "finished", failures_count: 0, created_at: "xxx", updated_at: "2025-09-15T02:15:35Z", total_duration_ms: 239,}, ...] ``` ### Retrieve operation details Description: Retrieves the status and details of a single operation by its ID. Method Signature: `apiClient.getProjectOperation(projectId: string, operationId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `operationId` (string, required): The ID of the operation. Example Usage: ```typescript const response = await apiClient.getProjectOperation('your-project-id', 'your-operation-id'); ``` ## Projects Manage your Neon projects. ### List projects Description: Retrieves a list of all projects for your account or organization. Method Signature: `apiClient.listProjects(query: ListProjectsParams)` Parameters: - `limit?` (integer): Specifies the number of projects to return. (Min: 1, Max: 400, Default: 10) - `cursor?` (string): Used for pagination. Provide the `cursor` value from a previous response to fetch the next set of projects.
- `search?` (string): Filters projects by a partial match on the project `name` or `id`. - `org_id?` (string): Filters projects by a specific organization ID. Example Usage: ```typescript const response = await apiClient.listProjects({}); console.log('Projects:', response.data.projects); // Example response: [{ id: "your-project-id", platform_id: "aws", region_id: "aws-us-east-2", name: "project-name", provisioner: "k8s-neonvm", default_endpoint_settings: { autoscaling_limit_min_cu: 0.25, autoscaling_limit_max_cu: 2, suspend_timeout_seconds: 0 }, settings: { allowed_ips: { ips: [], protected_branches_only: false }, enable_logical_replication: false, maintenance_window: { weekdays: [ 3 ], start_time: "06:00", end_time: "07:00" }, block_public_connections: false, block_vpc_connections: false, hipaa: false }, pg_version: 17, proxy_host: "us-east-2.aws.neon.tech", branch_logical_size_limit: 512, branch_logical_size_limit_bytes: 536870912, store_passwords: true, active_time: 0, cpu_used_sec: 0, creation_source: "console", created_at: "xx", updated_at: "xx", synthetic_storage_size: 31277056, quota_reset_at: "xx", owner_id: "owner-id", compute_last_active_at: "2025-08-20T06:50:15Z", org_id: "org-id", history_retention_seconds: 86400 }, ...] ``` ### Create project Description: Creates a new Neon project with a specified name, Postgres version, and region. Method Signature: `apiClient.createProject(data: ProjectCreateRequest)` Parameters: - `data` (`ProjectCreateRequest`): - `project` (object, required): The main container for all project settings. - `name` (string, optional): A descriptive name for the project (1-256 characters). If omitted, the project name will be identical to its generated ID. - `pg_version` (integer, optional): The major Postgres version. Defaults to `17`. Supported versions: 14, 15, 16, 17, 18. - `region_id` (string, optional): The identifier for the region where the project will be created (e.g., `aws-us-east-1`). - `org_id` (string, optional): The ID of an organization to which the project will belong. Required if using an Organization API key. - `store_passwords` (boolean, optional): Whether to store role passwords in Neon. Storing passwords is required for features like the SQL Editor and integrations. - `history_retention_seconds` (integer, optional): The duration in seconds (0 to 2,592,000) to retain project history for features like Point-in-Time Restore. Defaults to 86400 (1 day). - `provisioner` (string, optional): The compute provisioner. Specify `k8s-neonvm` to enable Autoscaling. Allowed values: `k8s-pod`, `k8s-neonvm`. - `default_endpoint_settings` (object, optional): Default settings for new compute endpoints created in this project. - `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum value is `0.25`. - `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum value is `0.25`. - `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before a compute is suspended. Ranges from -1 (never suspend) to 604800 (1 week). A value of `0` uses the default of 300 seconds (5 minutes). - `settings` (object, optional): Project-wide settings. - `quota` (object, optional): Per-project consumption quotas. A zero or empty value means "unlimited". - `active_time_seconds` (integer, optional): Wall-clock time allowance for active computes. - `compute_time_seconds` (integer, optional): CPU seconds allowance. - `written_data_bytes` (integer, optional): Data written allowance. 
- `data_transfer_bytes` (integer, optional): Data transferred allowance. - `logical_size_bytes` (integer, optional): Logical data size limit per branch. - `allowed_ips` (object, optional): Configures the IP Allowlist. - `ips` (array of strings, optional): A list of allowed IP addresses or CIDR ranges. - `protected_branches_only` (boolean, optional): If `true`, the IP allowlist applies only to protected branches. - `enable_logical_replication` (boolean, optional): Sets `wal_level=logical`. - `maintenance_window` (object, optional): The time period for scheduled maintenance. - `weekdays` (array of integers, required if `maintenance_window` is set): Days of the week (1=Monday, 7=Sunday). - `start_time` (string, required if `maintenance_window` is set): Start time in "HH:MM" UTC format. - `end_time` (string, required if `maintenance_window` is set): End time in "HH:MM" UTC format. - `branch` (object, optional): Configuration for the project's default branch. - `name` (string, optional): The name for the default branch. Defaults to `main`. - `role_name` (string, optional): The name for the default role. Defaults to `{database_name}_owner`. - `database_name` (string, optional): The name for the default database. Defaults to `neondb`. Example Usage: ```typescript const response = await apiClient.createProject({ project: { name: 'my-new-project', pg_version: 17, region_id: 'aws-us-east-2' }, }); console.log('Project created:', response.data.project); console.log('Connection URI:', response.data.connection_uris[0]?.connection_uri); // Example: "postgresql://neondb_owner:xxxx@ep-muddy-brook-aevd5iky.c-2.us-east-2.aws.neon.tech/neondb?sslmode=require" ``` ### Retrieve project details Description: Fetches detailed information for a single project by its ID. Method Signature: `apiClient.getProject(projectId: string)` Parameters: - `projectId` (string, required): The ID of the project. Example Usage: ```typescript const response = await apiClient.getProject('your-project-id'); console.log('Project Details:', response.data.project); // Example response: { data_storage_bytes_hour: 6706234656, data_transfer_bytes: 1482607, written_data_bytes: 38603544, compute_time_seconds: 9567, active_time_seconds: 35236, cpu_used_sec: 9567, id: "your-project-id", platform_id: "azure", region_id: "azure-westus3", name: "your-project-name", provisioner: "k8s-neonvm", default_endpoint_settings: { autoscaling_limit_min_cu: 0.25, autoscaling_limit_max_cu: 2, suspend_timeout_seconds: 0 }, settings: { allowed_ips: { ips: [], protected_branches_only: false }, enable_logical_replication: false, maintenance_window: { weekdays: [ 4 ], start_time: "06:00", end_time: "07:00" }, block_public_connections: false, block_vpc_connections: false, hipaa: false }, pg_version: 17, proxy_host: "westus3.azure.neon.tech", branch_logical_size_limit: 512, branch_logical_size_limit_bytes: 536870912, store_passwords: true, creation_source: "console", history_retention_seconds: 86400, created_at: "xx", updated_at: "xx", synthetic_storage_size: 34690488, consumption_period_start: "xx", consumption_period_end: "xx", owner_id: "owner-id", owner: { email: "owner@email.com", name: "owner_name", branches_limit: 20, subscription_type: "free_v3"}, compute_last_active_at: "2025-09-16T03:40:57Z", org_id: "org-id" } ``` ### Update project Description: Updates the settings of an existing project, such as its name. Method Signature: `apiClient.updateProject(projectId: string, data: ProjectUpdateRequest)` Parameters: - `projectId` (string, required): The ID of the project.
- `data` (`ProjectUpdateRequest`): - `project` (object, required): The container for the project settings to update. - `name` (string, optional): A new descriptive name for the project. - `history_retention_seconds` (integer, optional): The duration in seconds (0 to 2,592,000) to retain project history. - `default_endpoint_settings` (object, optional): New default settings for compute endpoints created in this project. - `autoscaling_limit_min_cu` (number, optional): The minimum number of Compute Units (CU). Minimum `0.25`. - `autoscaling_limit_max_cu` (number, optional): The maximum number of Compute Units (CU). Minimum `0.25`. - `suspend_timeout_seconds` (integer, optional): Duration of inactivity in seconds before a compute is suspended. Ranges from -1 (never suspend) to 604800 (1 week). A value of `0` uses the default of 300 seconds (5 minutes). - `settings` (object, optional): Project-wide settings to update. - `quota` (object, optional): Per-project consumption quotas. - `active_time_seconds` (integer, optional): Wall-clock time allowance for active computes. - `compute_time_seconds` (integer, optional): CPU seconds allowance. - `written_data_bytes` (integer, optional): Data written allowance. - `data_transfer_bytes` (integer, optional): Data transferred allowance. - `logical_size_bytes` (integer, optional): Logical data size limit per branch. - `allowed_ips` (object, optional): Modifies the IP Allowlist. - `ips` (array of strings, optional): The new list of allowed IP addresses or CIDR ranges. - `protected_branches_only` (boolean, optional): If `true`, the IP allowlist applies only to protected branches. - `enable_logical_replication` (boolean, optional): Sets `wal_level=logical`. This is irreversible. - `maintenance_window` (object, optional): The time period for scheduled maintenance. - `weekdays` (array of integers, required if `maintenance_window` is set): Days of the week (1=Monday, 7=Sunday). - `start_time` (string, required if `maintenance_window` is set): Start time in "HH:MM" UTC format. - `end_time` (string, required if `maintenance_window` is set): End time in "HH:MM" UTC format. - `block_public_connections` (boolean, optional): If `true`, disallows connections from the public internet. - `block_vpc_connections` (boolean, optional): If `true`, disallows connections from VPC endpoints. - `audit_log_level` (string, optional): Sets the audit log level. Allowed values: `base`, `extended`, `full`. - `hipaa` (boolean, optional): Toggles HIPAA compliance settings. - `preload_libraries` (object, optional): Libraries to preload into compute instances. - `use_defaults` (boolean, optional): Toggles the use of default libraries. - `enabled_libraries` (array of strings, optional): A list of specific libraries to enable. Example Usage: ```typescript // Example: Update a project's name await apiClient.updateProject('your-project-id', { project: { name: 'newNameForProject' } }); ``` ### Delete project Description: Permanently deletes a project and all its associated resources (branches, databases, roles). This action is irreversible. Method Signature: `apiClient.deleteProject(projectId: string)` Parameters: - `projectId` (string, required): The ID of the project. Example Usage: ```typescript await apiClient.deleteProject('projectid-to-delete'); ``` ### Retrieve connection URI Description: Gets a complete connection string for a specific database and role within a branch in a project.
Method Signature: `apiClient.getConnectionUri(params: GetConnectionUriParams)` Parameters: - `params` (`GetConnectionUriParams`): - `projectId` (string, required) - `branch_id?` (string): Defaults to the project's primary branch. - `database_name` (string, required) - `role_name` (string, required) - `pooled?` (boolean): If `true`, returns the pooled connection string. Example Usage: ```typescript const response = await apiClient.getConnectionUri({ projectId: 'your-project-id', database_name: 'dbName', role_name: 'roleName', pooled: true }); console.log('Pooled Connection URI:', response.data.uri); // Example: "postgresql://neondb_owner:xxx@ep-xx-pooler.westus3.azure.neon.tech/neondb?channel_binding=require&sslmode=require" ``` ## Branches Manage branches within a project. Branches in Neon are copy-on-write clones, allowing for isolated development, testing, and production environments without duplicating data. ### Create branch Description: Creates a new branch from a parent branch. You can optionally create a compute endpoint at the same time and specify a point-in-time from the parent's history to branch from. Method Signature: `apiClient.createProjectBranch(projectId: string, data?: BranchCreateRequest)` Parameters: - `projectId` (string, required): The ID of the project where the branch will be created. - `data` (`BranchCreateRequest`, optional): - `branch` (object, optional): Specifies the properties of the new branch. - `name` (string, optional): A name for the branch (max 256 characters). If omitted, a name is auto-generated. - `parent_id` (string, optional): The ID of the parent branch. If omitted, the project's default branch is used. - `parent_lsn` (string, optional): A Log Sequence Number (LSN) from the parent branch to create the new branch from a specific point-in-time. - `parent_timestamp` (string, optional): An ISO 8601 timestamp (e.g., `2025-08-26T12:00:00Z`) to create the branch from a specific point-in-time. - `protected` (boolean, optional): If `true`, the branch is created as a protected branch. - `init_source` (string, optional): `parent-data` (default) copies schema and data. `schema-only` creates a root branch with only the schema from the specified parent. - `expires_at` (string, optional): An RFC 3339 timestamp for when the branch should be automatically deleted (e.g., `2025-06-09T18:02:16Z`). - `endpoints` (array of objects, optional): A list of compute endpoints to create and attach to the new branch. - `type` (string, required): The endpoint type. Allowed values: `read_write`, `read_only`. - `autoscaling_limit_min_cu` (number, optional): Minimum Compute Units (CU). Minimum `0.25`. - `autoscaling_limit_max_cu` (number, optional): Maximum Compute Units (CU). Minimum `0.25`. - `provisioner` (string, optional): Specify `k8s-neonvm` to enable Autoscaling. Allowed values: `k8s-pod`, `k8s-neonvm`. - `suspend_timeout_seconds` (integer, optional): Inactivity period in seconds before suspension. Ranges from -1 (never) to 604800 (1 week). 
Example Usage: ```typescript import { EndpointType } from '@neondatabase/api-client'; const response = await apiClient.createProjectBranch('your-project-id', { branch: { name: 'feature-branch-x' }, endpoints: [{ type: EndpointType.ReadWrite, autoscaling_limit_max_cu: 1 }], }); console.log('Branch created:', response.data.branch); // Example response: {"id":"your-branch-id","project_id":"your-project-id","parent_id":"parent-branch-id","parent_lsn":"0/1BB6D40","name":"feature-branch-x","current_state":"init","pending_state":"ready","state_changed_at":"xx","creation_source":"console","primary":false,"default":false,"protected":false,"cpu_used_sec":0,"compute_time_seconds":0,"active_time_seconds":0,"written_data_bytes":0,"data_transfer_bytes":0,"created_at":"xx","updated_at":"xx","created_by":{"name":"user_name","image":""},"init_source":"parent-data"} console.log('Endpoint created:', response.data.endpoints[0]); // Example response: {"host":"ep-xxx.ap-southeast-1.aws.neon.tech","id":"ep-xxx","project_id":"your-project-id","branch_id":"your-branch-id","autoscaling_limit_min_cu":0.25,"autoscaling_limit_max_cu":1,"region_id":"aws-ap-southeast-1","type":"read_write","current_state":"init","pending_state":"active","settings":{},"pooler_enabled":false,"pooler_mode":"transaction","disabled":false,"passwordless_access":true,"creation_source":"console","created_at":"xx","updated_at":"xx","proxy_host":"ap-southeast-1.aws.neon.tech","suspend_timeout_seconds":0,"provisioner":"k8s-neonvm"} // The `response.data` object also contains `operations`, `roles`, `databases` and `connection_uris` ``` ### List branches Description: Retrieves a list of branches for the specified project. Supports filtering, sorting, and pagination. Method Signature: `apiClient.listProjectBranches(params: ListProjectBranchesParams)` Parameters: - `params` (`ListProjectBranchesParams`): - `projectId` (string, required): The ID of the project. - `search?` (string): Filters branches by a partial match on name or ID. - `sort_by?` (string): Field to sort by. Allowed: `name`, `created_at`, `updated_at`. Default: `updated_at`. - `sort_order?` (string): Sort order. Allowed: `asc`, `desc`. Default: `desc`. - `limit?` (integer): Number of branches to return (1-10000). - `cursor?` (string): Pagination cursor from a previous response. Example Usage: ```typescript const response = await apiClient.listProjectBranches({ projectId: 'your-project-id' }); // Example response: {"branches":[{"id":"branch-id","project_id":"project-id","parent_id":"parent-branch-id","parent_lsn":"0/1BB6D40","parent_timestamp":"xx","name":"feature-branch-x","current_state":"ready","state_changed_at":"xx","logical_size":30842880,"creation_source":"console","primary":false,"default":false,"protected":false,"cpu_used_sec":0,"compute_time_seconds":0,"active_time_seconds":0,"written_data_bytes":0,"data_transfer_bytes":0,"created_at":"xx","updated_at":"xx","created_by":{"name":"user_name","image":""},"init_source":"parent-data"}, ...other branches details]} ``` ### Retrieve branch details Description: Fetches detailed information for a single branch by its ID. Method Signature: `apiClient.getProjectBranch(projectId: string, branchId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. Example Usage: ```typescript const response = await apiClient.getProjectBranch('your-project-id', 'br-your-branch-id'); // Example response: { branch: { ... 
branch details } } ``` ### Update branch Description: Updates the properties of a specified branch, such as its name or protection status. Method Signature: `apiClient.updateProjectBranch(projectId: string, branchId: string, data: BranchUpdateRequest)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch to update. - `data` (`BranchUpdateRequest`): - `branch` (object, required): - `name?` (string): A new name for the branch. - `protected?` (boolean): `true` to protect the branch, `false` to unprotect. - `expires_at?` (string | null): A new expiration timestamp for the branch, or `null` to remove expiration. Example Usage: ```typescript const response = await apiClient.updateProjectBranch('your-project-id', 'br-your-branch-id', { branch: { name: 'updated-feature-branch' }, }); ``` ### Delete branch Description: Permanently deletes a branch. This action will idle any associated compute endpoints. Method Signature: `apiClient.deleteProjectBranch(projectId: string, branchId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch to delete. Key Points & Best Practices: - You cannot delete a project's default branch. - You cannot delete a branch that has child branches. Delete children first. Example Usage: ```typescript await apiClient.deleteProjectBranch('your-project-id', 'br-branch-to-delete'); ``` ### List branch endpoints Description: Retrieves a list of all compute endpoints associated with a specific branch. Method Signature: `apiClient.listProjectBranchEndpoints(projectId: string, branchId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. Example Usage: ```typescript const response = await apiClient.listProjectBranchEndpoints('your-project-id', 'br-your-branch-id'); // Example response: { endpoints: [... endpoint details] } ``` ### List databases Description: Retrieves a list of all databases within a specified branch. Method Signature: `apiClient.listProjectBranchDatabases(projectId: string, branchId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. Example Usage: ```typescript const response = await apiClient.listProjectBranchDatabases('your-project-id', 'br-your-branch-id'); // Example response: { databases: [{ id: 39700786, branch_id: "br-your-branch-id", name: "neondb", owner_name: "neondb_owner", created_at: "xx", updated_at: "xx" }, ...other databases if they exist] } ``` ### Create database Description: Creates a new database within a specified branch. Method Signature: `apiClient.createProjectBranchDatabase(projectId: string, branchId: string, data: DatabaseCreateRequest)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `data` (`DatabaseCreateRequest`): - `database` (object, required): - `name` (string, required): The name for the new database. - `owner_name` (string, required): The name of an existing role that will own the database. Example Usage: ```typescript await apiClient.createProjectBranchDatabase('your-project-id', 'br-your-branch-id', { database: { name: 'my-app-db', owner_name: 'neondb_owner' }, }); ``` ### Retrieve database details Description: Retrieves detailed information about a specific database within a branch.
Method Signature: `apiClient.getProjectBranchDatabase(projectId: string, branchId: string, databaseName: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `databaseName` (string, required): The name of the database. Example Usage: ```typescript const response = await apiClient.getProjectBranchDatabase('your-project-id', 'br-your-branch-id', 'my-app-db'); ``` ### Update database Description: Updates the properties of a specified database, such as its name or owner. Method Signature: `apiClient.updateProjectBranchDatabase(projectId: string, branchId: string, databaseName: string, data: DatabaseUpdateRequest)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `databaseName` (string, required): The current name of the database to update. - `data` (`DatabaseUpdateRequest`): - `database` (object, required): - `name?` (string): A new name for the database. - `owner_name?` (string): The name of a different existing role to become the new owner. Example Usage: ```typescript const response = await apiClient.updateProjectBranchDatabase('your-project-id', 'br-your-branch-id', 'my-app-db', { database: { name: 'my-renamed-app-db' }, }); ``` ### Delete database Description: Deletes the specified database from a branch. This action is permanent. Method Signature: `apiClient.deleteProjectBranchDatabase(projectId: string, branchId: string, databaseName: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `databaseName` (string, required): The name of the database. Example Usage: ```typescript await apiClient.deleteProjectBranchDatabase('your-project-id', 'br-your-branch-id', 'my-renamed-app-db'); ``` ### List roles Description: Retrieves a list of all Postgres roles from the specified branch. Method Signature: `apiClient.listProjectBranchRoles(projectId: string, branchId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. Example Usage: ```typescript const response = await apiClient.listProjectBranchRoles('your-project-id', 'br-your-branch-id'); // Example response: { roles: [{ branch_id: "br-your-branch-id", name: "neondb_owner", protected: false, created_at: "xx", updated_at: "xx"}, ... other roles if they exist] } ``` ### Create role Description: Creates a new Postgres role in a specified branch. The response includes the role's generated password. Method Signature: `apiClient.createProjectBranchRole(projectId: string, branchId: string, data: RoleCreateRequest)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `data` (`RoleCreateRequest`): - `role` (object, required): - `name` (string, required): The name for the new role (max 63 bytes). - `no_login?` (boolean): If `true`, creates a role that cannot log in. Example Usage: ```typescript const response = await apiClient.createProjectBranchRole('your-project-id', 'br-your-branch-id', { role: { name: 'demo_user' }, }); console.log('Role created:', response.data.role.name); console.log('Password (store securely!):', response.data.role.password); ``` ### Retrieve role details Description: Retrieves detailed information about a specific Postgres role. 
Method Signature: `apiClient.getProjectBranchRole(projectId: string, branchId: string, roleName: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `roleName` (string, required): The name of the role to retrieve details for. Example Usage: ```typescript const response = await apiClient.getProjectBranchRole('your-project-id', 'br-your-branch-id', 'demo_user'); // Example response: { branch_id: "br-your-branch-id", name: "demo_user", protected: false, created_at: "xx", updated_at: "xx" } ``` ### Delete role Description: Deletes the specified Postgres role from the branch. Method Signature: `apiClient.deleteProjectBranchRole(projectId: string, branchId: string, roleName: string)` Parameters: - `projectId` (string, required): The ID of the project. - `branchId` (string, required): The ID of the branch. - `roleName` (string, required): The name of the role to delete. Example Usage: ```typescript await apiClient.deleteProjectBranchRole('your-project-id', 'br-your-branch-id', 'demo_user'); ``` ## Endpoints Manage compute endpoints, which are the running Postgres instances attached to your branches. ### Create compute endpoint Description: Creates a new compute endpoint and associates it with a branch. Method Signature: `apiClient.createProjectEndpoint(projectId: string, data: EndpointCreateRequest)` Parameters: - `projectId` (string, required): The ID of the project. - `data` (`EndpointCreateRequest`): - `endpoint` (object, required): - `branch_id` (string, required): The ID of the branch to associate the endpoint with. - `type` (string, required): `read_write` or `read_only`. - `autoscaling_limit_min_cu?` (number): Minimum Compute Units. - `autoscaling_limit_max_cu?` (number): Maximum Compute Units. - `suspend_timeout_seconds?` (integer): Inactivity seconds before suspension. Example Usage: ```typescript import { EndpointType } from '@neondatabase/api-client'; const response = await apiClient.createProjectEndpoint('your-project-id', { endpoint: { branch_id: 'br-your-branch-id', type: EndpointType.ReadOnly }, }); // Example response: {"endpoint":{"host":"ep-xxx.neon.tech","id":"ep-endpoint-id","project_id":"your-project-id","branch_id":"br-your-branch-id","autoscaling_limit_min_cu":0.25,"autoscaling_limit_max_cu":2,"region_id":"aws-ap-southeast-1","type":"read_only","current_state":"init","pending_state":"active","settings":{},"pooler_enabled":false,"pooler_mode":"transaction","disabled":false,"passwordless_access":true,"creation_source":"console","created_at":"xx","updated_at":"xx","proxy_host":"ap-southeast-1.aws.neon.tech","suspend_timeout_seconds":0,"provisioner":"k8s-neonvm"}} ``` ### List compute endpoints Description: Retrieves a list of all compute endpoints for a project (includes endpoints for all branches). Method Signature: `apiClient.listProjectEndpoints(projectId: string)` Parameters: - `projectId` (string, required): The ID of the project. Example Usage: ```typescript const response = await apiClient.listProjectEndpoints('your-project-id'); // Example response: {"endpoints": [... all endpoint details]} ``` ### Retrieve compute endpoint details Description: Fetches detailed information for a single compute endpoint. Method Signature: `apiClient.getProjectEndpoint(projectId: string, endpointId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `endpointId` (string, required): The ID of the compute endpoint to retrieve details for.
Example Usage: ```typescript const response = await apiClient.getProjectEndpoint('your-project-id', 'ep-your-endpoint-id'); ``` ### Update compute endpoint Description: Updates the configuration of a specified compute endpoint. Method Signature: `apiClient.updateProjectEndpoint(projectId: string, endpointId: string, data: EndpointUpdateRequest)` Parameters: - `projectId` (string, required): The ID of the project. - `endpointId` (string, required): The ID of the endpoint to update. - `data` (`EndpointUpdateRequest`): - `endpoint` (object, required): - `autoscaling_limit_min_cu?` (number): New minimum Compute Units. - `autoscaling_limit_max_cu?` (number): New maximum Compute Units. - `suspend_timeout_seconds?` (integer): New suspension timeout. - `disabled?` (boolean): Set to `true` to disable connections or `false` to enable them. Example Usage: ```typescript const response = await apiClient.updateProjectEndpoint('your-project-id', 'ep-your-endpoint-id', { endpoint: { autoscaling_limit_max_cu: 2 }, }); ``` ### Delete compute endpoint Description: Deletes a compute endpoint. This will drop all active connections. Method Signature: `apiClient.deleteProjectEndpoint(projectId: string, endpointId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `endpointId` (string, required): The ID of the endpoint to delete. Example Usage: ```typescript await apiClient.deleteProjectEndpoint('your-project-id', 'ep-endpoint-to-delete'); ``` ### Start compute endpoint Description: Manually starts an `idle` compute endpoint. Method Signature: `apiClient.startProjectEndpoint(projectId: string, endpointId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `endpointId` (string, required): The ID of the endpoint to start. Example Usage: ```typescript const response = await apiClient.startProjectEndpoint('your-project-id', 'ep-your-endpoint-id'); ``` ### Suspend compute endpoint Description: Manually suspends an `active` compute endpoint. Method Signature: `apiClient.suspendProjectEndpoint(projectId: string, endpointId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `endpointId` (string, required): The ID of the endpoint to suspend. Example Usage: ```typescript await apiClient.suspendProjectEndpoint('your-project-id', 'ep-your-endpoint-id'); ``` ### Restart compute endpoint Description: Restarts a compute endpoint by suspending and then starting it. Throws an error if the endpoint is not active (that is, already suspended). Method Signature: `apiClient.restartProjectEndpoint(projectId: string, endpointId: string)` Parameters: - `projectId` (string, required): The ID of the project. - `endpointId` (string, required): The ID of the endpoint to restart. Example Usage: ```typescript await apiClient.restartProjectEndpoint('your-project-id', 'ep-your-endpoint-id'); ``` ## Organizations Manage organizations, members, and organization-scoped API keys. ### Retrieve organization details Description: Retrieves detailed information about a specific organization. Method Signature: `apiClient.getOrganization(orgId: string)` Parameters: - `orgId` (string, required): The organization ID. Example Usage: ```typescript const response = await apiClient.getOrganization('org-your-org-id'); ``` ### List organization API keys Description: Retrieves a list of all API keys for a specified organization.
Method Signature: `apiClient.listOrgApiKeys(orgId: string)` Parameters: - `orgId` (string, required): The organization ID. Example Usage: ```typescript const response = await apiClient.listOrgApiKeys('org-your-org-id'); ``` ### Create organization API key Description: Creates a new API key for an organization. Can be scoped to the entire org or a single project. Method Signature: `apiClient.createOrgApiKey(orgId: string, data: OrgApiKeyCreateRequest)` Parameters: - `orgId` (string, required): The organization ID. - `data` (`OrgApiKeyCreateRequest`): - `key_name` (string, required): A name for the key. - `project_id?` (string): If provided, restricts the key's access to this project. Example Usage: ```typescript const response = await apiClient.createOrgApiKey('org-your-org-id', { key_name: 'ci-key-for-project-abc', project_id: 'project-abc-id', }); ``` ### Revoke organization API key Description: Permanently revokes an organization API key. Method Signature: `apiClient.revokeOrgApiKey(orgId: string, keyId: number)` Parameters: - `orgId` (string, required): The organization ID. - `keyId` (number, required): The ID of the API key to revoke. Example Usage: ```typescript await apiClient.revokeOrgApiKey('org-your-org-id', 12345); ``` ### Retrieve organization members' details Description: Retrieves a list of all members in an organization. Method Signature: `apiClient.getOrganizationMembers(orgId: string)` Parameters: - `orgId` (string, required): The organization ID. Example Usage: ```typescript const response = await apiClient.getOrganizationMembers('org-your-org-id'); ``` ### Retrieve organization member details Description: Retrieves information about a single member of an organization. Method Signature: `apiClient.getOrganizationMember(orgId: string, memberId: string)` Parameters: - `orgId` (string, required): The organization ID. - `memberId` (string, required): The ID of the member to retrieve details for. Example Usage: ```typescript const response = await apiClient.getOrganizationMember('org-your-org-id', 'member-uuid'); ``` ### Update role for organization member Description: Updates the role of a member within an organization. Only admins can perform this action. Method Signature: `apiClient.updateOrganizationMember(orgId: string, memberId: string, data: OrganizationMemberUpdateRequest)` Parameters: - `orgId` (string, required): The organization ID. - `memberId` (string, required): The ID of the member to update. - `data` (`OrganizationMemberUpdateRequest`): - `role` (string, required): The new role. Allowed: `admin`, `member`. Example Usage: ```typescript import { MemberRole } from '@neondatabase/api-client'; await apiClient.updateOrganizationMember('org-your-org-id', 'member-uuid', { role: MemberRole.Admin }); ``` ### Remove member from organization Description: Removes a member from an organization. Only admins can perform this action. Method Signature: `apiClient.removeOrganizationMember(orgId: string, memberId: string)` Parameters: - `orgId` (string, required): The organization ID. - `memberId` (string, required): The ID of the member to remove. Example Usage: ```typescript await apiClient.removeOrganizationMember('org-your-org-id', 'member-uuid-to-remove'); ``` ### Retrieve organization invitation details Description: Retrieves a list of outstanding invitations for an organization.
Method Signature: `apiClient.getOrganizationInvitations(orgId: string)` Parameters: - `orgId` (string, required): The organization ID Example Usage: ```typescript const response = await apiClient.getOrganizationInvitations('org-your-org-id'); ``` ### Create organization invitations Description: Creates and sends email invitations for users to join an organization. Method Signature: `apiClient.createOrganizationInvitations(orgId: string, data: OrganizationInvitesCreateRequest)` Parameters: - `orgId` (string, required): The organization ID - `data` (`OrganizationInvitesCreateRequest`): - `invitations` (array of objects, required): - `email` (string, required): The email address to invite. - `role` (string, required): The role for the invited user. Allowed: `admin`, `member`. Example Usage: ```typescript import { MemberRole } from '@neondatabase/api-client'; await apiClient.createOrganizationInvitations('org-your-org-id', { invitations: [{ email: 'new.dev@example.com', role: MemberRole.Member }], }); ``` ## Error Handling The SDK uses `axios` under the hood and throws `AxiosError` for API failures. Always wrap API calls in `try...catch` blocks to handle potential errors gracefully. Error Structure: - `error.response.status`: The HTTP status code (e.g., `401`, `404`, `429`). - `error.response.data`: The error payload from the Neon API, usually containing a `code` and `message`. Example Error Handling: ```typescript async function safeApiOperation(projectId: string) { try { const response = await apiClient.getProject(projectId); return response.data; } catch (error: any) { if (error.isAxiosError) { const status = error.response?.status; const data = error.response?.data; console.error(`API Error: Status ${status}`); console.error(`Message: ${data?.message}`); switch (status) { case 401: console.error("Authentication error: Check your NEON_API_KEY."); break; case 404: console.error(`Resource not found for project ID: ${projectId}`); break; case 429: console.error("Rate limit exceeded. Please wait before retrying."); break; default: console.error("An unexpected API error occurred."); } } else { console.error("A non-API error occurred:", error.message); } return null; } } ``` Common Status Codes: - `401 Unauthorized`: Your API key is invalid or missing. - `403 Forbidden`: Your API key does not have permission for this action. - `404 Not Found`: The requested resource (project, branch, etc.) does not exist. - `422 Unprocessable Entity`: The request body is invalid. Check your parameters. - `429 Too Many Requests`: You have exceeded the API rate limit. - `500 Internal Server Error`: An error occurred on Neon's side. ```` --- # Source: https://neon.com/llms/ai-ai-rules.txt # AI rules and prompts > The "AI Rules and Prompts" document outlines guidelines and examples for creating effective AI prompts within the Neon platform, ensuring consistent and accurate AI-driven interactions. ## Source - [AI rules and prompts HTML](https://neon.com/docs/ai/ai-rules): The original HTML version of this documentation Boost your productivity with AI context rules for Neon. These rules help AI assistants understand Neon's features, leading to more accurate code suggestions and fewer common mistakes. If you're using **Claude Code**, install the comprehensive Neon plugin that bundles Skills, MCP integration, and all context rules in one package. For other AI tools like **Cursor**, use the individual `.mdc` context rule files. **Note** AI Rules are in Beta: AI Rules are currently in beta. 
We're actively improving them and would love to hear your feedback. Join us on [Discord](https://discord.gg/92vNTzKDGp) to share your experience and suggestions. ## For Claude Code If you're using Claude Code, install the Neon plugin to get Skills, MCP integration, and all the context rules in one package: - [Claude Code plugin for Neon](https://neon.com/docs/ai/ai-claude-code-plugin): Install the Neon Claude Code plugin to give Claude access to Neon's APIs, Postgres workflows, and built-in Skills ## Individual AI rules For other AI tools like Cursor, you can use these individual `.mdc` context rule files. Copy them to your AI tool's custom rules directory — the format is tool-agnostic and works with any AI assistant that supports context rules. - [Neon Auth](https://neon.com/docs/ai/ai-rules-neon-auth): AI rules for implementing authentication with Neon - [Neon Serverless Driver](https://neon.com/docs/ai/ai-rules-neon-serverless): AI rules for serverless database connections - [Neon Drizzle](https://neon.com/docs/ai/ai-rules-neon-drizzle): AI rules for using Drizzle ORM with Neon - [Neon TypeScript SDK](https://neon.com/docs/ai/ai-rules-neon-typescript-sdk): AI rules for using the Neon TypeScript SDK - [Neon Python SDK](https://neon.com/docs/ai/ai-rules-neon-python-sdk): AI rules for using the Neon Python SDK - [Neon API](https://neon.com/docs/ai/ai-rules-neon-api): AI rules for using the Neon API - [Neon Toolkit](https://neon.com/docs/ai/ai-rules-neon-toolkit): AI rules for using the Neon Toolkit ## How it works AI rules are `.mdc` files that specify which types of files they apply to (such as `*.tsx` or `schema.sql`). When you're working with a matching file, your AI tool automatically applies the relevant rules to provide better suggestions. ### Example: AI rules in action Here's a practical example using [Cursor](https://www.cursor.so). A developer has implemented authentication in their server-rendered page and wants to confirm best practices: **Developer query**: _"Using the neon-auth.mdc rule, how do I secure a server-rendered page?"_ The AI confirms that using `stackServerApp.getUser({ or: "redirect" })` is the correct approach for server-side authentication, providing additional context and explanation. ## Add rules to your project All `.mdc` files are available in the [Neon AI Rules toolkit repository](https://github.com/neondatabase-labs/ai-rules). Copy the files you need to your project's `.cursor/rules` folder (or your AI tool's equivalent): ```text .cursor/ rules/ neon-auth.mdc neon-serverless.mdc neon-drizzle.mdc neon-toolkit.mdc neon-typescript-sdk.mdc neon-python-sdk.mdc neon-api-guidelines.mdc neon-api-projects.mdc neon-api-branches.mdc neon-api-endpoints.mdc neon-api-organizations.mdc neon-api-keys.mdc neon-api-operations.mdc ``` Most AI tools will automatically apply these rules when you're working with Neon-related code. You can also reference them explicitly in prompts for more targeted assistance. --- # Source: https://neon.com/llms/ai-ai-scale-with-neon.txt # Scale your AI application with Neon > The document "Scale your AI application with Neon" guides users on optimizing AI applications using Neon's scalable database infrastructure, detailing steps for efficient data management and processing to enhance application performance. 
## Source - [Scale your AI application with Neon HTML](https://neon.com/docs/ai/ai-scale-with-neon): The original HTML version of this documentation You can scale your AI application built on Postgres with `pgvector` in the same way you would any Postgres app: vertically with added CPU, RAM, and storage, or horizontally with read replicas. In Neon, scaling vertically is a matter of selecting the desired compute size. Neon supports compute sizes ranging from 0.25 vCPU with 1 GB RAM up to 56 vCPU with 224 GB RAM. Autoscaling is supported up to 16 vCPU. Larger computes are fixed-size computes (no autoscaling). The `maintenance_work_mem` values shown below are approximate. | Compute Units (CU) | vCPU | RAM | maintenance_work_mem | | :----------------- | :--- | :----- | :------------------- | | 0.25 | 0.25 | 1 GB | 64 MB | | 0.50 | 0.50 | 2 GB | 64 MB | | 1 | 1 | 4 GB | 67 MB | | 2 | 2 | 8 GB | 134 MB | | 3 | 3 | 12 GB | 201 MB | | 4 | 4 | 16 GB | 268 MB | | 5 | 5 | 20 GB | 335 MB | | 6 | 6 | 24 GB | 402 MB | | 7 | 7 | 28 GB | 470 MB | | 8 | 8 | 32 GB | 537 MB | | 9 | 9 | 36 GB | 604 MB | | 10 | 10 | 40 GB | 671 MB | | 11 | 11 | 44 GB | 738 MB | | 12 | 12 | 48 GB | 805 MB | | 13 | 13 | 52 GB | 872 MB | | 14 | 14 | 56 GB | 939 MB | | 15 | 15 | 60 GB | 1007 MB | | 16 | 16 | 64 GB | 1074 MB | | 18 | 18 | 72 GB | 1208 MB | | 20 | 20 | 80 GB | 1342 MB | | 22 | 22 | 88 GB | 1476 MB | | 24 | 24 | 96 GB | 1610 MB | | 26 | 26 | 104 GB | 1744 MB | | 28 | 28 | 112 GB | 1878 MB | | 30 | 30 | 120 GB | 2012 MB | | 32 | 32 | 128 GB | 2146 MB | | 34 | 34 | 136 GB | 2280 MB | | 36 | 36 | 144 GB | 2414 MB | | 38 | 38 | 152 GB | 2548 MB | | 40 | 40 | 160 GB | 2682 MB | | 42 | 42 | 168 GB | 2816 MB | | 44 | 44 | 176 GB | 2950 MB | | 46 | 46 | 184 GB | 3084 MB | | 48 | 48 | 192 GB | 3218 MB | | 50 | 50 | 200 GB | 3352 MB | | 52 | 52 | 208 GB | 3486 MB | | 54 | 54 | 216 GB | 3620 MB | | 56 | 56 | 224 GB | 3754 MB | See [Edit a compute](https://neon.com/docs/manage/computes#edit-a-compute) to configure your compute size. Available compute sizes differ according to your Neon plan. To optimize `pgvector` index build time, you can increase the `maintenance_work_mem` setting for the current session beyond the preconfigured default shown in the table above with a command similar to this: ```sql SET maintenance_work_mem='10 GB'; ``` The recommended `maintenance_work_mem` setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GB is less than 50% of the RAM available for that compute size. ## Autoscaling You can also enable Neon's autoscaling feature for automatic scaling of compute resources (vCPU and RAM). Neon's _Autoscaling_ feature automatically scales up compute on demand in response to application workload and down to zero on inactivity. For example, if your AI application experiences heavy load during certain hours of the day or at different times throughout the week, month, or calendar year, Neon automatically scales compute resources without manual intervention according to the compute size boundaries that you configure. This enables you to handle peak demand while avoiding consuming compute resources during periods of low activity.
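If you prefer to set these autoscaling boundaries programmatically, the Neon TypeScript SDK's `updateProjectEndpoint` method (covered in the Neon TypeScript SDK rules earlier in this document) can do so. A minimal sketch, assuming an existing compute endpoint; the project and endpoint IDs are placeholders:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({ apiKey: process.env.NEON_API_KEY! });

// Allow the compute to scale between 1 and 8 Compute Units (CU)
// in response to workload. The IDs below are illustrative.
await apiClient.updateProjectEndpoint('your-project-id', 'ep-your-endpoint-id', {
  endpoint: { autoscaling_limit_min_cu: 1, autoscaling_limit_max_cu: 8 },
});
```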
Enabling autoscaling is also recommended for initial data loads and memory-intensive index builds to ensure sufficient compute resources for this phase of your AI application setup. To learn more about Neon's autoscaling feature and how to enable it, refer to our [Autoscaling guide](https://neon.com/docs/introduction/autoscaling). ## Storage On the Free plan, you get 0.5 GB of storage plus 0.5 GB of storage per branch. Storage on paid plans is usage based. See [Neon plans](https://neon.com/docs/introduction/plans) for details. ## Read replicas Neon supports read replicas, which are independent read-only computes designed to perform read operations on the same data as your primary read-write compute. Read replicas do not replicate data across database instances. Instead, read requests are directed to the same data source. This architecture enables read replicas to be created instantly and lets you scale out CPU and RAM; because data is read from a single source, there are no additional storage costs. Since vector similarity search is a read-only workload, you can leverage read replicas to offload reads from your primary read-write compute to a dedicated compute when deploying AI applications. After you create a read replica, you simply swap your current Neon connection string for the read replica connection string, which makes deploying a read replica for your AI application straightforward. Neon's read replicas support the same compute sizes outlined above. Read replicas also support autoscaling. To learn more about Neon read replicas, see [read replicas](https://neon.com/docs/introduction/read-replicas) and refer to our [Working with Neon read replicas](https://neon.com/docs/guides/read-replica-guide) guide. --- # Source: https://neon.com/llms/ai-ai-vector-search-optimization.txt # Optimize pgvector search > The document outlines methods for optimizing pgvector search in Neon, focusing on enhancing performance and efficiency in AI vector search operations. ## Source - [Optimize pgvector search HTML](https://neon.com/docs/ai/ai-vector-search-optimization): The original HTML version of this documentation This guide explores how to effectively use `pgvector` for vector similarity searches in your AI applications. We'll address the following key questions: 1. How to profile your vector search queries when using `pgvector`? 2. When to use indexes, and what are the tradeoffs between the available options? 3. Which parameters to tune for best performance? We'll examine sequential scans, HNSW indexing, and IVFFlat indexing, providing benchmarks and practical recommendations for various dataset sizes. This will help you optimize `pgvector` queries in your Neon database for both accuracy and speed. Without indexes, `pgvector` performs a sequential scan on the database and calculates the distance between the query vector and all vectors in the table. This approach performs an exact search and guarantees 100% **recall**, but it can be costly with large datasets. **Note** what is recall?: Recall is a metric used to evaluate the performance of a search algorithm. It measures how effectively the search retrieves relevant items from a dataset. It is defined as the ratio of the number of relevant items retrieved by the search to the total number of relevant items in the dataset. The query below uses `EXPLAIN ANALYZE` to generate an execution plan and display the performance of the similarity search query.
```sql EXPLAIN ANALYZE SELECT * FROM items ORDER BY embedding <-> '[0.011699999682605267,..., 0.008700000122189522]' LIMIT 100; ``` This is what the query plan looks like: ```sql Limit (cost=748.19..748.44 rows=100 width=173) (actual time=39.475..39.487 rows=100 loops=1) -> Sort (cost=748.19..773.19 rows=10000 width=173) (actual time=39.473..39.480 rows=100 loops=1) Sort Key: ((embedding <-> '[0.0117,..., 0.0866]'::vector)) Sort Method: top-N heapsort Memory: 70kB -> Seq Scan on items (cost=0.00..366.00 rows=10000 width=173) (actual time=0.087..37.571 rows=10000 loops=1) Planning Time: 0.213 ms Execution Time: 39.527 ms ``` You can see in the plan that the query performs a sequential scan (`Seq Scan`) on the `items` table, which means that the query compares the query vector against all vectors in the `items` table. In other words, the query does not use an index. To understand how queries perform at scale, we tested sequential scan vector searches with `pgvector` on subsets of the [GIST-960 dataset](http://corpus-texmex.irisa.fr/) with 10k, 50k, 100k, 500k, and 1M rows using a Neon database instance with 4 vCPUs and 16 GB of RAM. The sequential scan search performed reasonably well for tables with 10k rows (~36ms). However, sequential scans start to become costly at 50k rows. So, when should you use sequential scans rather than defining an index? - When your dataset is small and you do not intend to scale it. - When you need 100% recall (accuracy). Adding indexes trades recall for performance. - When you do not expect a high volume of queries per second, which would require indexes for performance. Otherwise, consider adding an index for better performance. ## Indexing with HNSW HNSW is a graph-based approach to indexing multi-dimensional data. It constructs a multi-layered graph, where each layer is a subset of the previous one. During a vector similarity search, the algorithm navigates through the graph from the top layer to the bottom to quickly find the nearest neighbor. An HNSW graph is known for its superior performance in terms of speed and accuracy. **Note**: An HNSW index performs better than IVFFlat (in terms of speed-recall tradeoff) and can be created without any data in the table since there isn't a training step like there is for an IVFFlat index. However, HNSW indexes have slower build times and use more memory. The search process begins at the topmost layer of the HNSW graph. From the starting node, the algorithm navigates to the nearest neighbor in the same layer. The algorithm repeats this step until it can no longer find neighbors more similar to the query vector. Using the found node as an entry point, the algorithm moves down to the next layer in the graph and repeats the process of navigating to the nearest neighbor. The process of navigating to the nearest neighbor and moving down a layer is repeated until the algorithm reaches the bottom layer. In the bottom layer, the algorithm continues navigating to the nearest neighbor until it cannot find any nodes that are more similar to the query vector. The current node is then returned as the most similar node to the query vector. The key idea behind HNSW is that by starting the search at the top layer and moving down through each layer, the algorithm can quickly navigate to the area of the graph that contains the node that is most similar to the query vector. This makes the search process much faster than if it had to search through every node in the graph.
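To see what an indexed search looks like in a query plan, you can build a small HNSW index and re-run `EXPLAIN ANALYZE`. The following is a minimal, self-contained sketch; the `items_demo` table and its 3-dimensional vectors are illustrative stand-ins for a real embeddings table:

```sql
-- Illustrative table; real embedding columns typically have hundreds of dimensions
CREATE TABLE items_demo (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items_demo (embedding) VALUES ('[1,2,3]'), ('[4,5,6]'), ('[2,3,4]');
CREATE INDEX ON items_demo USING hnsw (embedding vector_l2_ops);

-- On a table this small the planner may still prefer a sequential scan,
-- so disable it for the session to confirm the index is usable
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM items_demo ORDER BY embedding <-> '[3,1,2]' LIMIT 2;
```

If the index is used, the plan shows an `Index Scan` over the HNSW index instead of the `Seq Scan` seen above.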
### Tuning the HNSW algorithm

The following options allow you to tune the HNSW algorithm when creating an index:

- `m`: Defines the maximum number of links created for each node during graph construction. A higher value increases accuracy (recall), but it also increases the size of the index in memory and index construction time. Higher values are typically used with high-dimensionality datasets or when a high degree of accuracy is required. The default value is 16. Acceptable values for `m` typically fall between 2 and 100. For many applications, beginning with a range of 12 to 48 is advisable.
- `ef_construction`: Defines the size of the candidate list used to find nearest neighbors during index construction. This value influences the tradeoff between index quality and construction speed. A high `ef_construction` value creates a higher-quality graph, enabling more accurate search results, but it also means that index construction takes longer. The value should be set to at least twice the value of `m`. The default setting is 64. There comes a point where increasing `ef_construction` no longer improves index quality. To evaluate search accuracy, you can start by setting `ef_construction` equal to `ef_search` and incrementally increasing `ef_construction` to achieve the desired result. If accuracy is lower than 0.9, there may be opportunity for improvement by increasing `ef_construction`.

This example demonstrates how to set the parameters:

```sql
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops) WITH (m = 16, ef_construction = 64);
```

HNSW search tuning:

- `ef_search`: Defines the size of the dynamic candidate list for search. The default value is 40. This value influences the trade-off between query accuracy (recall) and speed. A higher value increases accuracy at the cost of speed. The value should be equal to or larger than `k`, which is the number of nearest neighbors you want your search to return (defined by the `LIMIT` clause in your `SELECT` query).

Configure this value using a `SET` statement before executing queries:

```sql
SET hnsw.ef_search = 100;
```

You can also use `SET LOCAL` inside a transaction to set it for a single query:

```sql
BEGIN;
SET LOCAL hnsw.ef_search = 100;
SELECT ...
COMMIT;
```

In summary:

- To prioritize search speed over accuracy, use lower values for `m` and `ef_search`.
- Conversely, to prioritize accuracy over search speed, use higher values for `m` and `ef_search`.
- Using a higher value for `ef_construction` yields more accurate search results at the cost of index build time.

## Indexing with IVFFlat

IVFFlat indexes partition the dataset into clusters ("lists") to optimize for vector search. You can create an IVFFlat index using the query below:

```sql
CREATE INDEX items_embedding_l2_idx ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 1000);
```

IVFFlat in `pgvector` has two parameters:

1. `lists`: Specifies the number of [k-means clusters](https://en.wikipedia.org/wiki/K-means_clustering) (or "lists") to divide the dataset into. Each cluster contains a subset of the data, and each data point belongs to the closest cluster centroid.
2. `probes`: Determines the number of lists to explore during the search for the nearest neighbors. By probing multiple lists, the search algorithm can find the closest points more accurately, balancing between speed and accuracy.

By default, the `probes` parameter is set to `1`. This means that during a search, only one cluster is explored.
This approach is fine if your query vector is close to the centroid. However, if the query vector is located near the edge of the cluster, closer neighbors in adjacent clusters will not be included in the search, which can result in lower recall.

You must specify the number of probes in the same session (connection) as the search query:

```sql
SET ivfflat.probes = 100;
SET enable_seqscan=off;
SELECT * FROM items ORDER BY embedding <-> '[0.011699999682605267,..., 0.008700000122189522]' LIMIT 100;
```

**Note**: In the example above, `enable_seqscan=off` forces Postgres to use index scans.

The output of this query appears as follows:

```sql
Limit  (cost=1971.50..1982.39 rows=100 width=173) (actual time=4.500..5.738 rows=100 loops=1)
  ->  Index Scan using items_embedding_l2_idx on items  (cost=1971.50..3060.50 rows=10000 width=173) (actual time=4.499..5.726 rows=100 loops=1)
        Order By: (embedding <-> '[0.0117, ... ,0.0866]'::vector)
Planning Time: 0.295 ms
Execution Time: 5.867 ms
```

We've experimented with `lists` equal to 1000, 2000, and 4000, and `probes` equal to 1, 2, 10, 50, 100, and 200. Although increasing the number of `probes` yields a substantial gain in recall, you will reach a point of diminishing returns where recall plateaus and execution time increases. Therefore, we encourage experimenting with different values for `probes` and `lists` to achieve optimal search performance for your queries. Good places to start are:

- Use a `lists` value of rows / 1000 for tables with up to 1 million rows, and `sqrt(rows)` for larger datasets.
- Start with a `probes` value of lists / 10 for tables with up to 1 million rows, and `sqrt(lists)` for larger datasets.

## Conclusion

The sequential scan approach of `pgvector` performs well for small datasets but can be costly for larger ones. Use sequential scans if you require 100% accuracy, but expect performance issues at higher volumes of queries per second. You can optimize searches using HNSW or IVFFlat indexes for approximate nearest neighbor (ANN) search; HNSW indexes offer better query performance than IVFFlat at the cost of longer build times and higher memory usage. Be sure to test different index tuning parameter settings to find the right balance between speed and accuracy for your specific use case and dataset.

---

# Source: https://neon.com/llms/ai-connect-mcp-clients-to-neon.txt

# Connect MCP clients to Neon

> The document "Connect MCP Clients to Neon" outlines the steps for configuring and establishing a connection between MCP clients and the Neon database, detailing necessary settings and authentication procedures specific to Neon's environment.

## Source

- [Connect MCP clients to Neon HTML](https://neon.com/docs/ai/connect-mcp-clients-to-neon): The original HTML version of this documentation

The **Neon MCP Server** allows you to connect various [**Model Context Protocol (MCP)**](https://modelcontextprotocol.org) compatible AI tools to your Neon Postgres databases. This guide provides instructions for connecting popular MCP clients to the Neon MCP Server, enabling natural language interaction with your Neon projects.
This guide covers the setup for the following MCP clients: - [Claude Desktop](https://neon.com/docs/ai/connect-mcp-clients-to-neon#claude-desktop) - [Claude Code](https://neon.com/docs/ai/connect-mcp-clients-to-neon#claude-code) - [Cursor](https://neon.com/docs/ai/connect-mcp-clients-to-neon#cursor) - [Windsurf (Codeium)](https://neon.com/docs/ai/connect-mcp-clients-to-neon#windsurf-codeium) - [Cline (VS Code extension)](https://neon.com/docs/ai/connect-mcp-clients-to-neon#cline-vs-code-extension) - [Zed](https://neon.com/docs/ai/connect-mcp-clients-to-neon#zed) - [VS Code (with GitHub Copilot)](https://neon.com/docs/ai/connect-mcp-clients-to-neon#vs-code-with-github-copilot) - [ChatGPT](https://neon.com/docs/ai/connect-mcp-clients-to-neon#chatgpt) By connecting these tools to the Neon MCP Server, you can manage your Neon projects, databases, and schemas using natural language commands within the MCP client interface. **Important** Neon MCP Server Security Considerations: The Neon MCP Server grants powerful database management capabilities through natural language requests. **Always review and authorize actions requested by the LLM before execution.** Ensure that only authorized users and applications have access to the Neon MCP Server. The Neon MCP Server is intended for local development and IDE integrations only. **We do not recommend using the Neon MCP Server in production environments.** It can execute powerful operations that may lead to accidental or unauthorized changes. For more information, see [MCP security guidance →](https://neon.com/docs/ai/neon-mcp-server#mcp-security-guidance). ## Prerequisites - An MCP Client application. - A [Neon account](https://console.neon.tech/signup). - **Node.js (>= v18.0.0) and npm:** Download from [nodejs.org](https://nodejs.org). For Local MCP Server setup, you also need a Neon API key. See [Neon API Keys documentation](https://neon.com/docs/manage/api-keys#creating-api-keys). **Note**: Ensure you are using the latest version of your chosen MCP client as MCP integration may not be available in older versions. If you are using an older version, update your MCP client to the latest version. ## Connect to Neon MCP Server You can connect to Neon MCP Server in two ways: 1. **Remote MCP Server (Preview):** Connect to Neon's managed remote MCP server using OAuth or a Neon API key. 2. **Local MCP Server:** Install and run the Neon MCP server locally, using a Neon API key. ## Claude Desktop Tab: Remote MCP Server 1. Open Claude desktop and navigate to **Settings**. 2. Under the **Developer** tab, click **Edit Config** (On Windows, it's under File -> Settings -> Developer -> Edit Config) to open the configuration file (`claude_desktop_config.json`). 3. Add the "Neon" server entry within the `mcpServers` object: ```json { "mcpServers": { "Neon": { "command": "npx", "args": ["-y", "mcp-remote@latest", "https://mcp.neon.tech/mcp"] } } } ``` > To use SSE instead of streamable HTTP responses, you can specify the `https://mcp.neon.tech/sse` endpoint instead of `https://mcp.neon.tech/mcp`. 4. Save the configuration file and **restart** Claude Desktop. 5. An OAuth window will open in your browser. Follow the prompts to authorize Claude Desktop to access your Neon account. > By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication). 
Tab: Local MCP Server

1. Open your terminal.
2. Run the following command, replacing `YOUR_NEON_API_KEY` with your actual Neon API key:

   ```bash
   npx @neondatabase/mcp-server-neon init YOUR_NEON_API_KEY
   ```

3. Restart Claude Desktop.

For more, see [Get started with Neon MCP server with Claude Desktop](https://neon.com/guides/neon-mcp-server).

## Claude Code

Tab: Remote MCP Server

1. Ensure you have Claude Code installed. Visit [docs.anthropic.com/en/docs/claude-code](https://docs.anthropic.com/en/docs/claude-code) for installation instructions.
2. Open a terminal and add the Neon MCP server with:

   ```sh
   claude mcp add --transport http neon https://mcp.neon.tech/mcp
   ```

3. Start a new session of `claude` to trigger the OAuth authentication flow.
4. You can also trigger authentication with `/mcp` within Claude Code.

If you prefer to authenticate using a Neon API key, provide an `Authorization` header to the `mcp add` command:

```
claude mcp add --transport http neon https://mcp.neon.tech/mcp \
  --header "Authorization: Bearer <YOUR_NEON_API_KEY>"
```

> Replace `<YOUR_NEON_API_KEY>` with your actual Neon API key which you obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section

Tab: Local MCP Server

1. Ensure you have Claude Code installed. Visit [docs.anthropic.com/en/docs/claude-code](https://docs.anthropic.com/en/docs/claude-code) for installation instructions.
2. Open a terminal and add the Neon MCP server with:

   ```sh
   claude mcp add neon -- npx -y @neondatabase/mcp-server-neon start <YOUR_NEON_API_KEY>
   ```

   > Replace `<YOUR_NEON_API_KEY>` with your actual Neon API key which you obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section

3. Start a new Claude Code session with the `claude` command and start using Neon MCP.

## Cursor

Tab: Remote MCP Server

### Quick Install (Recommended)

Click the button below to install the Neon MCP server in Cursor. When prompted, click **Install** within Cursor.

Add Neon MCP server to Cursor

### Manual Setup

1. Open Cursor. Create a `.cursor` directory in your project root if needed.
2. Create or open the `mcp.json` file in the `.cursor` directory.
3. Add the "Neon" server entry within the `mcpServers` object:

   ```json
   {
     "mcpServers": {
       "Neon": {
         "url": "https://mcp.neon.tech/mcp",
         "headers": {}
       }
     }
   }
   ```

   > To use SSE instead of streamable HTTP responses, you can specify the `https://mcp.neon.tech/sse` endpoint instead of `https://mcp.neon.tech/mcp`.

4. Save the configuration file. Cursor may detect the change or require a restart.
5. An OAuth window will open in your browser. Follow the prompts to authorize Cursor to access your Neon account.

> By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication).

Tab: Local MCP Server

1. Open Cursor. Create a `.cursor` directory in your project root if needed.
2. Create or open the `mcp.json` file in the `.cursor` directory.
3. Add the "Neon" server entry within the `mcpServers` object. Replace `<YOUR_NEON_API_KEY>` with your actual Neon API key which you obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section:

   ```json
   {
     "mcpServers": {
       "Neon": {
         "command": "npx",
         "args": ["-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
       }
     }
   }
   ```

4. Save the configuration file. Cursor may detect the change or require a restart.
For more, see [Get started with Cursor and Neon Postgres MCP Server](https://neon.com/guides/cursor-mcp-neon).

## Windsurf (Codeium)

Tab: Remote MCP Server

1. Open Windsurf and navigate to the Cascade assistant sidebar.
2. Click the hammer (MCP) icon, then **Configure**, which opens the "Manage MCPs" configuration file.
3. Click on "View raw config" to open the raw configuration file in Windsurf.
4. Add the "Neon" server entry within the `mcpServers` object:

   ```json
   {
     "mcpServers": {
       "Neon": {
         "command": "npx",
         "args": ["-y", "mcp-remote@latest", "https://mcp.neon.tech/mcp"]
       }
     }
   }
   ```

   > To use SSE instead of streamable HTTP responses, you can specify the `https://mcp.neon.tech/sse` endpoint instead of `https://mcp.neon.tech/mcp`.

5. Save the file.
6. Click the **Refresh** button in the Cascade sidebar next to "available MCP servers".
7. An OAuth window will open in your browser. Follow the prompts to authorize Windsurf to access your Neon account.

> By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication).

Tab: Local MCP Server

1. Open Windsurf and navigate to the Cascade assistant sidebar.
2. Click the hammer (MCP) icon, then **Configure**, which opens the "Manage MCPs" configuration file.
3. Click on "View raw config" to open the raw configuration file in Windsurf.
4. Add the "Neon" server entry within the `mcpServers` object:

   ```json
   {
     "mcpServers": {
       "Neon": {
         "command": "npx",
         "args": ["-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
       }
     }
   }
   ```

   > Replace `<YOUR_NEON_API_KEY>` with your actual Neon API key which you obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section.

5. Save the file.
6. Click the **Refresh** button in the Cascade sidebar next to "available MCP servers".

For more, see [Get started with Windsurf and Neon Postgres MCP Server](https://neon.com/guides/windsurf-mcp-neon).

## Cline (VS Code Extension)

Tab: Remote MCP Server

1. Open Cline in VS Code (Sidebar -> Cline icon).
2. Click **MCP Servers** Icon -> **Installed** -> **Configure MCP Servers** to open the configuration file.
3. Add the "Neon" server entry within the `mcpServers` object:

   ```json
   {
     "mcpServers": {
       "Neon": {
         "command": "npx",
         "args": ["-y", "mcp-remote@latest", "https://mcp.neon.tech/sse"]
       }
     }
   }
   ```

   > For [streamable HTTP responses](https://neon.com/docs/ai/connect-mcp-clients-to-neon#streamable-http-support) instead of SSE, you can specify the `https://mcp.neon.tech/mcp` endpoint instead of `https://mcp.neon.tech/sse`.

4. Save the file. Cline should reload the configuration automatically.
5. An OAuth window will open in your browser. Follow the prompts to authorize Cline to access your Neon account.

> By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication).

Tab: Local MCP Server

1. Open Cline in VS Code (Sidebar -> Cline icon).
2. Click **MCP Servers** Icon -> **Installed** -> **Configure MCP Servers** to open the configuration file.
Add the "Neon" server entry within the `mcpServers` object: ```json { "mcpServers": { "Neon": { "command": "npx", "args": ["-y", "@neondatabase/mcp-server-neon", "start", ""] } } } ``` > Replace `` with your actual Neon API key which you obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section. 4. Save the file. Cline should reload the configuration automatically. For more, see [Get started with Cline and Neon Postgres MCP Server](https://neon.com/guides/cline-mcp-neon). ## Zed **Note**: MCP support in Zed is currently in **preview**. Ensure you're using the Preview version of Zed to add MCP servers (called **Context Servers** in Zed). Download the preview version from [zed.dev/releases/preview](https://zed.dev/releases/preview). Tab: Remote MCP Server 1. Open the Zed Preview application. 2. Click the Assistant (✨) icon in the bottom right corner. 3. Click **Settings** in the top right panel of the Assistant. 4. In the **Context Servers** section, click **+ Add Context Server**. 5. Configure the Neon Server: - Enter **Neon** in the **Name** field. - In the **Command** field, enter: ```bash npx -y mcp-remote https://mcp.neon.tech/sse ``` - Click **Add Server**. > For [streamable HTTP responses](https://neon.com/docs/ai/connect-mcp-clients-to-neon#streamable-http-support) instead of SSE, you can specify the `https://mcp.neon.tech/mcp` endpoint instead of `https://mcp.neon.tech/sse`. 6. An OAuth window will open in your browser. Follow the prompts to authorize Zed to access your Neon account. 7. Check the Context Servers section in Zed settings to ensure the connection is successful. "Neon" should be listed. > By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication). Tab: Local MCP Server 1. Open the Zed Preview application. 2. Click the Assistant (✨) icon in the bottom right corner. 3. Click **Settings** in the top right panel of the Assistant. 4. In the **Context Servers** section, click **+ Add Context Server**. 5. Configure the Neon Server: - Enter **Neon** in the **Name** field. - In the **Command** field, enter the following, replacing `` with your actual Neon API key obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section: ```bash npx -y @neondatabase/mcp-server-neon start ``` - Click **Add Server**. 6. Check the Context Servers section in Zed settings to ensure the connection is successful. "Neon" should be listed. For more details, including workflow examples and troubleshooting, see [Get started with Zed and Neon Postgres MCP Server](https://neon.com/guides/zed-mcp-neon). ## VS Code (with GitHub Copilot) **Note**: To use MCP servers with VS Code, you need [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) and [GitHub Copilot Chat](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat) extensions installed Tab: Remote MCP Server 1. Open VS Code. 2. Create a `.vscode` folder in your project's root directory if it doesn't exist. 3. 
3. Create or open the `mcp.json` file in the `.vscode` directory and add the following configuration into the file (if you have other MCP servers configured, add the "Neon" server entry within the `servers` object):

   ```json
   {
     "servers": {
       "Neon": {
         "url": "https://mcp.neon.tech/mcp",
         "type": "http"
       }
     },
     "inputs": []
   }
   ```

   > To use SSE instead of streamable HTTP responses, you can specify the `https://mcp.neon.tech/sse` endpoint instead of `https://mcp.neon.tech/mcp`.

4. Save the `mcp.json` file.
5. Click **Start** on the MCP server.
6. An OAuth window will open in your browser. Follow the prompts to authorize VS Code (GitHub Copilot) to access your Neon account.
7. Once authorized, you can now open GitHub Copilot Chat in VS Code and [switch to Agent mode](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode). You will see the Neon MCP Server listed among the available tools.

> By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication).

Tab: Local MCP Server

1. Open VS Code.
2. Open your [User Settings (JSON) file](https://code.visualstudio.com/docs/copilot/chat/mcp-servers#_add-an-mcp-server-to-your-user-settings): Use the command palette (`Ctrl+Shift+P` or `Cmd+Shift+P`) and search for "Preferences: Open User Settings (JSON)".
3. Add the Neon MCP server configuration to your `settings.json` file, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key. If the `"mcp.servers"` object doesn't exist, create it:

   ```json
   {
     // ... your other settings ...
     "mcp": {
       "servers": {
         "Neon": {
           "command": "npx",
           "args": ["-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
         }
       }
     }
     // ...
   }
   ```

4. Save the `settings.json` file.
5. Click **Start** on the MCP server.
6. You can now open GitHub Copilot Chat in VS Code and [switch to Agent mode](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode). You will see the Neon MCP Server listed among the available tools.

For detailed instructions on utilizing the Neon MCP server with GitHub Copilot in VS Code, including a step-by-step example on generating an Azure Function REST API, refer to [How to Use Neon MCP Server with GitHub Copilot in VS Code](https://neon.com/guides/neon-mcp-server-github-copilot-vs-code).

## ChatGPT

You can connect ChatGPT to the Neon MCP Server using custom MCP connectors. This integration extends ChatGPT with Neon's database capabilities so you can query, manage, and interact with your Neon projects directly within ChatGPT.

To connect ChatGPT to the Neon MCP Server, you need to first enable ChatGPT's developer mode, then add the Neon MCP Server as a custom connector. This makes the connector available for your account; you'll need to separately enable both developer mode and Neon as a source for any chat where you want to use Neon.

1. **Add MCP server to ChatGPT**

   In your ChatGPT account settings, go to **Settings** → **Connectors** → **Advanced Settings** and enable **Developer mode**. Still on the Connectors tab, you can then **create** a Neon connection from the **Browse connectors** section. Use the following URL:

   ```bash
   https://mcp.neon.tech/mcp
   ```

   Make sure you choose **OAuth** for authentication and check "I trust this application", then complete the authorization flow when prompted.
2. **Enable Neon per chat**

   In each chat where you want to use Neon, click the **+** button and enable Developer Mode for that chat. Under **Add sources**, you can then enable the Neon connector you just created.

Once connected, you can use natural language to manage your Neon databases directly in ChatGPT.

## Other MCP clients

Adapt the instructions above for other clients:

- **Remote MCP server:** Add the following JSON configuration within the `mcpServers` section of your client's MCP configuration file:

  > By default, the Remote MCP Server connects to your personal Neon account. To connect to an organization's account, you must authenticate with an API key. If your client supports it, provide the key in the `Authorization` header. For more information, see [API key-based authentication](https://neon.com/docs/ai/neon-mcp-server#api-key-based-authentication).

  ```json
  "neon": {
    "command": "npx",
    "args": ["-y", "mcp-remote@latest", "https://mcp.neon.tech/mcp"]
  }
  ```

  > MCP supports two remote server transports: the deprecated Server-Sent Events (SSE) and the newer, recommended Streamable HTTP. If your LLM client doesn't support Streamable HTTP yet, you can switch the endpoint from `https://mcp.neon.tech/mcp` to `https://mcp.neon.tech/sse` to use SSE instead.

  Then follow the OAuth flow on first connection.

- **Local MCP server:** Add the following JSON configuration within the `mcpServers` section of your client's MCP configuration file, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key obtained from the [prerequisites](https://neon.com/docs/ai/connect-mcp-clients-to-neon#prerequisites) section:

  Tab: MacOS/Linux

  For **MacOS and Linux**, add the following JSON configuration within the `mcpServers` section of your client's `mcp_config` file, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key:

  ```json
  "neon": {
    "command": "npx",
    "args": ["-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
  }
  ```

  Tab: Windows

  For **Windows**, add the following JSON configuration within the `mcpServers` section of your client's `mcp_config` file, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key:

  ```json
  "neon": {
    "command": "cmd",
    "args": ["/c", "npx", "-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
  }
  ```

  Tab: Windows (WSL)

  For **Windows Subsystem for Linux (WSL)**, add the following JSON configuration within the `mcpServers` section of your client's `mcp_config` file, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key:

  ```json
  "neon": {
    "command": "wsl",
    "args": ["npx", "-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
  }
  ```

**Note**: After successful configuration, you should see the Neon MCP Server listed as active in your MCP client's settings or tool list. You can enter "List my Neon projects" in the MCP client to see your Neon projects and verify the connection.

## Troubleshooting

### Configuration Issues

If your client does not use `JSON` for configuration of MCP servers (such as older versions of Cursor), you can use the following command when prompted:

```bash
# For Remote MCP server
npx -y mcp-remote https://mcp.neon.tech/mcp

# For Local MCP server
npx -y @neondatabase/mcp-server-neon start <YOUR_NEON_API_KEY>
```

### OAuth Authentication Errors

When using the remote MCP server with OAuth authentication, you might encounter the following error:

```
{"code":"invalid_request","error":"invalid redirect uri"}
```

This typically occurs when there are issues with cached OAuth credentials. To resolve this:

1. Remove the MCP authentication cache directory:

   ```bash
   rm -rf ~/.mcp-auth
   ```

2. Restart your MCP client application.
3. The OAuth flow will start fresh, allowing you to authenticate properly.

This error is most common when using the remote MCP server option and can occur after OAuth configuration changes or when cached credentials become invalid.

## Next steps

Once connected, you can start interacting with your Neon Postgres databases using natural language commands within your chosen MCP client. Explore the [Supported Actions (Tools)](https://neon.com/docs/ai/neon-mcp-server#supported-actions-tools) of the Neon MCP Server to understand the available functionalities.

## Resources

- [MCP Protocol](https://modelcontextprotocol.org)
- [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api)
- [Neon API Keys](https://neon.com/docs/manage/api-keys#creating-api-keys)
- [Neon MCP server GitHub](https://github.com/neondatabase/mcp-server-neon)
- [VS Code MCP Server Documentation](https://code.visualstudio.com/docs/copilot/chat/mcp-servers)

---

# Source: https://neon.com/llms/ai-inngest.txt

# Inngest

> The Inngest documentation for Neon outlines how to integrate and utilize Inngest for building serverless workflows, detailing setup, configuration, and execution processes specific to Neon's platform.

## Source

- [Inngest HTML](https://neon.com/docs/ai/inngest): The original HTML version of this documentation

Inngest is a popular framework for building AI RAG and agentic workflows. [Inngest](https://www.inngest.com/?utm_source=neon&utm_medium=inngest-ai-integration) provides automatic retries and caching, along with concurrency and throttling management and AI request offloading. Inngest also integrates with Neon Postgres to trigger workflows based on database changes.

## Build RAG with `step.run()`

Inngest provides a `step.run()` API that allows you to compose your workflows into cacheable, retryable, and concurrency-safe steps. Consider a workflow where a network issue prevents the AI workflow from connecting to the vector store: Inngest retries the failed step and uses the cached results from the previous steps, avoiding an unnecessary additional OpenAI call.

This workflow translates to the following code:

```typescript
import { inngest } from '@/inngest';
import { getToolsForMessage, vectorSearch } from '@/helpers';

export const ragWorkflow = inngest.createFunction(
  { id: 'rag-workflow', concurrency: 10 },
  { event: 'chat.message' },
  async ({ event, step }) => {
    const { message } = event.data;
    const page = await step.run('tools.search', async () => {
      // Calls OpenAI
      return getToolsForMessage(message);
    });
    await step.run('vector-search', async () => {
      // Search in Neon's vector store
      return vectorSearch(page);
    });
    // step 3 and 4...
  }
);
```

Configuring [concurrency](https://www.inngest.com/docs/guides/concurrency?utm_source=neon&utm_medium=inngest-ai-integration) or [throttling](https://www.inngest.com/docs/guides/throttling?utm_source=neon&utm_medium=inngest-ai-integration) to match your LLM provider's limits is achieved with a single line of code.

Learn more about using Inngest for RAG in the following article: [Multi-Tenant RAG With One Neon Project Per User](https://neon.com/blog/multi-tenant-rag).

## AI request offloading: `step.ai.infer()`

Inngest also provides a `step.ai.infer()` API that offloads AI requests.
By using `step.ai.infer()`, your AI workflow pauses while waiting for the (often slow) LLM response, avoiding unnecessary compute use in serverless environments.

The previous RAG workflow can be rewritten to use [`step.ai.infer()`](https://www.inngest.com/docs/features/inngest-functions/steps-workflows/step-ai-orchestration?utm_source=neon&utm_medium=inngest-ai-integration#step-ai-infer) to offload the AI request to the LLM provider:

```typescript
import { inngest } from '@/inngest';
// The `openai` model adapter is exported by the inngest package
import { openai } from 'inngest';
import { getPromptForToolsSearch, vectorSearch } from '@/helpers';

export const ragWorkflow = inngest.createFunction(
  { id: 'rag-workflow', concurrency: 10 },
  { event: 'chat.message' },
  async ({ event, step }) => {
    const { message } = event.data;
    const prompt = getPromptForToolsSearch(message);
    await step.ai.infer('tools.search', {
      model: openai({ model: 'gpt-4o' }),
      body: {
        messages: prompt,
      },
    });
    // other steps...
  }
);
```

`step.ai.infer()`, combined with Neon's Scale-to-zero feature, allows you to build AI workflows whose costs scale with your success!

Learn more about using `step.ai.infer()` in the following article: [step.ai: Build Serverless AI Applications That Won't Break the Bank](https://www.inngest.com/blog/step-ai-for-serverless-ai-applications?utm_source=neon&utm_medium=inngest-ai-integration).

## Trigger AI workflows based on database changes

Inngest also integrates with Neon Postgres to trigger AI workflows based on database changes, such as generating embeddings as soon as a new row is inserted into a table (see the auto-embedding starter app below). Configure Inngest's Neon integration to trigger AI workflows from your Neon database changes [by following this guide](https://neon.com/docs/guides/trigger-serverless-functions).

## Starter apps

Hackable, fully-featured, pre-built [starter apps](https://github.com/neondatabase/examples/tree/main/ai/inngest) to get you up and running with Inngest and Postgres.

- [RAG starter (OpenAI + Inngest)](https://github.com/neondatabase/examples/tree/main/ai/inngest/rag-starter-nextjs): A Next.js RAG starter app built with OpenAI and Inngest
- [Multi-tenant RAG (OpenAI + Inngest)](https://github.com/inngest/multi-tenant-rag-example): A Next.js contacts importer multi-tenant RAG app built with OpenAI and Inngest
- [Auto-embedding (OpenAI + Inngest)](https://github.com/neondatabase/examples/tree/main/ai/inngest/auto-embeddings-nextjs): A Next.js app example of auto-embedding with Inngest

---

# Source: https://neon.com/llms/ai-langchain.txt

# LangChain

> The LangChain documentation for Neon outlines how to integrate and utilize LangChain with Neon's AI capabilities, detailing setup instructions and code examples for efficient data management and processing within the Neon environment.

## Source

- [LangChain HTML](https://neon.com/docs/ai/langchain): The original HTML version of this documentation

LangChain is a popular framework for working with AI, vectors, and embeddings. LangChain supports using Neon as a vector store, using the `pgvector` extension.

## Initialize Postgres Vector Store

LangChain simplifies the complexity of managing document insertion and embeddings generation using vector stores by providing streamlined methods for these tasks.
Here's how you can initialize Postgres Vector with LangChain:

```tsx
// File: vectorStore.ts
import { NeonPostgres } from '@langchain/community/vectorstores/neon';
import { OpenAIEmbeddings } from '@langchain/openai';

const embeddings = new OpenAIEmbeddings({
  dimensions: 512,
  model: 'text-embedding-3-small',
});

export async function loadVectorStore() {
  return await NeonPostgres.initialize(embeddings, {
    connectionString: process.env.POSTGRES_URL as string,
  });
}

// Use in your code (say, in API routes)
const vectorStore = await loadVectorStore();
```

## Generate Embeddings with OpenAI

LangChain handles embedding generation internally while adding vectors to the Postgres database, simplifying the process for users. For more detailed control over embeddings, refer to the respective [JavaScript](https://js.langchain.com/v0.2/docs/integrations/text_embedding/openai#specifying-dimensions) and [Python](https://python.langchain.com/v0.2/docs/how_to/embed_text/#embed_query) documentation.

## Stream Chat Completions with OpenAI

LangChain can find similar documents to the user's latest query and invoke the OpenAI API to power [chat completion](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) responses, providing a seamless integration for creating dynamic interactions.

Here's how you can power chat completions in an API route:

```tsx
import { loadVectorStore } from './vectorStore';
import { pull } from 'langchain/hub';
import { ChatOpenAI } from '@langchain/openai';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import type { ChatPromptTemplate } from '@langchain/core/prompts';
import { AIMessage, HumanMessage } from '@langchain/core/messages';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';

const topK = 3;

export async function POST(request: Request) {
  const llm = new ChatOpenAI();
  const encoder = new TextEncoder();
  const vectorStore = await loadVectorStore();
  const { messages = [] } = await request.json();
  const userMessages = messages.filter((i) => i.role === 'user');
  const input = userMessages[userMessages.length - 1].content;
  const retrievalQAChatPrompt = await pull<ChatPromptTemplate>('langchain-ai/retrieval-qa-chat');
  const retriever = vectorStore.asRetriever({ k: topK, searchType: 'similarity' });
  const combineDocsChain = await createStuffDocumentsChain({
    llm,
    prompt: retrievalQAChatPrompt,
  });
  const retrievalChain = await createRetrievalChain({
    retriever,
    combineDocsChain,
  });
  const customReadable = new ReadableStream({
    async start(controller) {
      const stream = await retrievalChain.stream({
        input,
        chat_history: messages.map((i) =>
          i.role === 'user' ? new HumanMessage(i.content) : new AIMessage(i.content)
        ),
      });
      for await (const chunk of stream) {
        controller.enqueue(encoder.encode(chunk.answer));
      }
      controller.close();
    },
  });
  return new Response(customReadable, {
    headers: {
      Connection: 'keep-alive',
      'Content-Encoding': 'none',
      'Cache-Control': 'no-cache, no-transform',
      'Content-Type': 'text/plain; charset=utf-8',
    },
  });
}
```

## Starter apps

Hackable, fully-featured, pre-built [starter apps](https://github.com/neondatabase/examples/tree/main/ai/langchain) to get you up and running with LangChain and Postgres.
- [AI chatbot (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/chatbot-nextjs): A Next.js AI chatbot starter app built with OpenAI and LangChain
- [RAG chatbot (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/rag-nextjs): A Next.js RAG chatbot starter app built with OpenAI and LangChain
- [Semantic search chatbot (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/semantic-search-nextjs): A Next.js Semantic Search chatbot starter app built with OpenAI and LangChain
- [Chat with PDF (OpenAI + LangChain)](https://github.com/neondatabase/examples/tree/main/ai/langchain/chat-with-pdf-nextjs): A Next.js Chat with PDF chatbot starter app built with OpenAI and LangChain

---

# Source: https://neon.com/llms/ai-llamaindex.txt

# LlamaIndex

> The LlamaIndex documentation for Neon outlines how to integrate and utilize LlamaIndex for managing and querying large datasets within Neon's AI infrastructure.

## Source

- [LlamaIndex HTML](https://neon.com/docs/ai/llamaindex): The original HTML version of this documentation

LlamaIndex is a popular framework for working with AI, vectors, and embeddings. LlamaIndex supports using Neon as a vector store, using the `pgvector` extension.

## Initialize Postgres Vector Store

LlamaIndex simplifies the complexity of managing document insertion and embeddings generation using vector stores by providing streamlined methods for these tasks.

Here's how you can initialize Postgres Vector with LlamaIndex:

```tsx
// File: vectorStore.ts
import { OpenAIEmbedding, Settings, VectorStoreIndex } from 'llamaindex';
import { PGVectorStore } from 'llamaindex/storage/vectorStore/PGVectorStore';

Settings.embedModel = new OpenAIEmbedding({
  dimensions: 512,
  model: 'text-embedding-3-small',
});

const vectorStore = new PGVectorStore({
  dimensions: 512,
  connectionString: process.env.POSTGRES_URL,
});

export default vectorStore;

// Use in your code (say, in API routes)
const index = await VectorStoreIndex.fromVectorStore(vectorStore);
```

## Generate Embeddings with OpenAI

LlamaIndex handles embedding generation internally while adding vectors to the Postgres database, simplifying the process for users. For more detailed control over embeddings, refer to the respective [JavaScript](https://ts.llamaindex.ai/docs/llamaindex/modules/models/embeddings/openai) and [Python](https://docs.llamaindex.ai/en/stable/examples/embeddings/OpenAI) documentation.

## Stream Chat Completions with OpenAI

LlamaIndex can find similar documents to the user's latest query and invoke the OpenAI API to power [chat completion](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) responses, providing a seamless integration for creating dynamic interactions.
Here's how you can power chat completions in an API route:

```tsx
import vectorStore from './vectorStore';
import { ContextChatEngine, VectorStoreIndex } from 'llamaindex';

interface Message {
  role: 'user' | 'assistant' | 'system' | 'memory';
  content: string;
}

export async function POST(request: Request) {
  const encoder = new TextEncoder();
  const { messages = [] } = (await request.json()) as { messages: Message[] };
  const userMessages = messages.filter((i) => i.role === 'user');
  const query = userMessages[userMessages.length - 1].content;
  const index = await VectorStoreIndex.fromVectorStore(vectorStore);
  const retriever = index.asRetriever();
  const chatEngine = new ContextChatEngine({ retriever });
  const customReadable = new ReadableStream({
    async start(controller) {
      const stream = await chatEngine.chat({ message: query, chatHistory: messages, stream: true });
      for await (const chunk of stream) {
        controller.enqueue(encoder.encode(chunk.response));
      }
      controller.close();
    },
  });
  return new Response(customReadable, {
    headers: {
      Connection: 'keep-alive',
      'Content-Encoding': 'none',
      'Cache-Control': 'no-cache, no-transform',
      'Content-Type': 'text/plain; charset=utf-8',
    },
  });
}
```

## Starter apps

Hackable, fully-featured, pre-built [starter apps](https://github.com/neondatabase/examples/tree/main/ai/llamaindex) to get you up and running with LlamaIndex and Postgres.

- [AI chatbot (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/chatbot-nextjs): A Next.js AI chatbot starter app built with OpenAI and LlamaIndex
- [RAG chatbot (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/rag-nextjs): A Next.js RAG chatbot starter app built with OpenAI and LlamaIndex
- [Semantic search chatbot (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/semantic-search-nextjs): A Next.js Semantic Search chatbot starter app built with OpenAI and LlamaIndex
- [Reverse image search (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/reverse-image-search-nextjs): A Next.js Reverse Image Search Engine starter app built with OpenAI and LlamaIndex
- [Chat with PDF (OpenAI + LlamaIndex)](https://github.com/neondatabase/examples/tree/main/ai/llamaindex/chat-with-pdf-nextjs): A Next.js Chat with PDF chatbot starter app built with OpenAI and LlamaIndex

---

# Source: https://neon.com/llms/ai-neon-mcp-server.txt

# Neon MCP Server overview

> The Neon MCP Server documentation details the setup and configuration of the Neon Model Context Protocol (MCP) server, enabling users to manage and control Neon database instances effectively.

## Source

- [Neon MCP Server overview HTML](https://neon.com/docs/ai/neon-mcp-server): The original HTML version of this documentation

The **Neon MCP Server** is an open-source tool that lets you interact with your Neon Postgres databases in **natural language**.

> To get started connecting an MCP Client like **Cursor**, **Claude Code**, **VS Code**, **Windsurf**, **ChatGPT**, and others, see [Connect MCP clients](https://neon.com/docs/ai/connect-mcp-clients-to-neon).

If you're using **Cursor**, you can click the button below for a quick install. When prompted, click **Install within Cursor**.

Add Neon MCP server to Cursor

Imagine you want to create a new database. Instead of using the Neon Console or API, you could just type a request like, "Create a database named 'my-new-database'". Or, to see your projects, you might ask, "List all my Neon projects".
The Neon MCP Server makes this possible. It works by acting as a bridge between natural language requests and the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Built upon the [Model Context Protocol (MCP)](https://modelcontextprotocol.org), it translates your requests into the necessary Neon API calls, allowing you to manage everything from creating projects and branches to running queries and performing database migrations. **Important** Neon MCP Server Security Considerations: The Neon MCP Server grants powerful database management capabilities through natural language requests. **Always review and authorize actions requested by the LLM before execution.** Ensure that only authorized users and applications have access to the Neon MCP Server. The Neon MCP Server is intended for local development and IDE integrations only. **We do not recommend using the Neon MCP Server in production environments.** It can execute powerful operations that may lead to accidental or unauthorized changes. For more information, see [MCP security guidance →](https://neon.com/docs/ai/neon-mcp-server#mcp-security-guidance). ## Understanding MCP and Neon MCP Server The [**Model Context Protocol (MCP)**](https://modelcontextprotocol.org) standardizes communication between LLMs and external tools. It defines a client-server architecture, enabling LLMs (Hosts) to connect to specialized servers that provide context and tools for interacting with external systems. The key components of the MCP architecture are: - **Hosts**: These are AI applications, such as Claude Desktop, Claude Code, or IDEs like Cursor, that initiate connections to MCP servers - **Clients**: These reside within the host application and maintain one-to-one connections with individual MCP servers - **Server**: These programs, such as Neon's MCP Server, provide context, tools, and prompts to clients, enabling access to external data and functionalities ### Why use MCP? Traditionally, connecting AI models to different data sources required developers to create custom code for each integration. This fragmented approach increased development time, maintenance burdens, and limited interoperability between AI models and tools. MCP addresses this challenge by providing a standardized protocol that simplifies integration, accelerates development, and enhances the capabilities of AI assistants. ### What is Neon MCP server? **Neon MCP Server** acts as the **Server** in the MCP architecture, specifically designed for Neon. It provides a set of **tools** that MCP clients (like Claude Desktop, Claude Code, Cursor) can utilize to manage Neon resources. This includes actions for project management, branch management, executing SQL queries, and handling database migrations, all driven by natural language requests. **Key Benefits of using Neon MCP Server:** - **Natural language interaction:** Manage Neon databases using intuitive, conversational commands. - **Simplified database management:** Perform complex actions without writing SQL or directly using the Neon API. - **Enhanced Productivity:** Streamline workflows for database administration and development. - **Accessibility for non-developers:** Empower users with varying technical backgrounds to interact with Neon databases. - **Database migration support:** Leverage Neon's branching capabilities for database schema changes initiated via natural language. 
**Important** Security Considerations: The Neon MCP server grants powerful database management capabilities through natural language requests. **Always review and authorize actions** requested by the LLM before execution. Ensure that only authorized users and applications have access to the Neon MCP server and Neon API keys. ## Setup options You can set up the Neon MCP Server in two ways: ### Remote hosted server (preview) You can use Neon's managed MCP server, available at `https://mcp.neon.tech`. This is the **easiest** way to start using the Neon MCP Server. It streamlines the setup process by utilizing OAuth for authentication, eliminating the need to manage Neon API keys directly in your client configuration. **Note**: The remote hosted MCP server is currently in its preview phase. As the [OAuth specification for MCP](https://spec.modelcontextprotocol.io/specification/2025-03-26/basic/authorization/) is still quite new, we are releasing it in this preview state. During the initial weeks, you may experience some adjustments to the setup. However, the instructions provided should be straightforward to follow at this time. #### Prerequisites: - An MCP Client application (e.g., Cursor, Windsurf, Claude Desktop, Claude Code, Cline, Zed, ChatGPT). - A Neon account. **Tip** Install in a single click for Cursor users: Click the button below to install the Neon MCP server in Cursor. When prompted, click **Install** within Cursor. Add Neon MCP server to Cursor #### Setup steps: 1. Go to your MCP Client's settings where you configure MCP Servers (this varies by client) 2. Register a new MCP Server. Add a configuration block for "Neon" under 'mcpServers' key. The configuration should look like this: ```json { "mcpServers": { "Neon": { "command": "npx", "args": ["-y", "mcp-remote@latest", "https://mcp.neon.tech/mcp"] } } } ``` This command uses `npx` to run a [small helper (`mcp-remote`)](https://github.com/geelen/mcp-remote) that connects to Neon's hosted server endpoint (`https://mcp.neon.tech/mcp`). MCP supports two remote server transports: the deprecated Server-Sent Events (SSE) and the newer, recommended Streamable HTTP. If your LLM client doesn't support Streamable HTTP yet, you can switch the endpoint from `https://mcp.neon.tech/mcp` to `https://mcp.neon.tech/sse` to use SSE instead. 3. Save the configuration and **restart or refresh** your MCP client application. 4. The first time the client initializes Neon's MCP server, it should trigger an **OAuth flow**: - Your browser will open a Neon page asking you to authorize the "Neon MCP Server" to access your Neon account. - Review the requested permissions and click **Authorize**. - You should see a success message, and you can close the browser tab. 5. Your MCP client should now be connected to the Neon Remote MCP Server and ready to use. ### Local MCP Server You can install Neon MCP server locally using `npm`. #### Prerequisites - **Node.js (>= v18.0.0):** Ensure Node.js version 18 or higher is installed on your system. You can download it from [nodejs.org](https://nodejs.org/). - **Neon API Key:** You will need a Neon API key to authenticate the Neon MCP Server with your Neon account. You can create one from the [Neon Console](https://console.neon.tech/app/settings/api-keys) under your Profile settings. Refer to the [Neon documentation on API Keys](https://neon.com/docs/manage/api-keys#creating-api-keys) for detailed instructions. Open your MCP client application and navigate to the settings where you can configure MCP servers. 
The location of these settings may vary depending on your client. Add a configuration block for "Neon" under the `mcpServers` key, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key. Your configuration should look like this:

```json
{
  "mcpServers": {
    "neon": {
      "command": "npx",
      "args": ["-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
    }
  }
}
```

**Note**: If you are using Windows and encounter issues while adding the MCP server, you might need to use the Command Prompt (`cmd`) or Windows Subsystem for Linux (`wsl`) to run the necessary commands. Your configuration setup may resemble the following:

Tab: Windows

```json
{
  "mcpServers": {
    "neon": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
    }
  }
}
```

Tab: Windows (WSL)

```json
{
  "mcpServers": {
    "neon": {
      "command": "wsl",
      "args": ["npx", "-y", "@neondatabase/mcp-server-neon", "start", "<YOUR_NEON_API_KEY>"]
    }
  }
}
```

### Troubleshooting

If your client does not use `JSON` for configuration of MCP servers (such as older versions of Cursor), you can use the following command when prompted, replacing `<YOUR_NEON_API_KEY>` with your actual Neon API key:

```bash
npx -y @neondatabase/mcp-server-neon start <YOUR_NEON_API_KEY>
```

## Supported actions (tools)

The Neon MCP Server provides the following actions, which are exposed as "tools" to MCP clients. You can use these tools to interact with your Neon projects and databases using natural language commands.

**Project management:**

- `list_projects`: Retrieves a list of your Neon projects, providing a summary of each project associated with your Neon account. Supports a search parameter and limiting the number of projects returned (default: 10).
- `list_shared_projects`: Retrieves a list of Neon projects shared with the current user. Supports a search parameter and limiting the number of projects returned (default: 10).
- `describe_project`: Fetches detailed information about a specific Neon project, including its ID, name, and associated branches and databases.
- `create_project`: Creates a new Neon project in your Neon account. A project acts as a container for branches, databases, roles, and computes.
- `delete_project`: Deletes an existing Neon project and all its associated resources.
- `list_organizations`: Lists all organizations that the current user has access to. Optionally filter by organization name or ID using the search parameter.

**Branch management:**

- `compare_database_schema`: Shows a schema diff between a child branch and its parent.
- `create_branch`: Creates a new branch within a specified Neon project. Leverages [Neon's branching](https://neon.com/docs/introduction/branching) feature for development, testing, or migrations.
- `delete_branch`: Deletes an existing branch from a Neon project.
- `describe_branch`: Retrieves details about a specific branch, such as its name, ID, and parent branch.
- `list_branch_computes`: Lists compute endpoints for a project or specific branch, including compute ID, type, size, and autoscaling information.
- `reset_from_parent`: Resets the current branch to its parent's state, discarding local changes. Automatically preserves the branch state to a backup if the branch has children, or optionally on request.

**SQL query execution:**

- `get_connection_string`: Returns your database connection string.
- `run_sql`: Executes a single SQL query against a specified Neon database. Supports both read and write operations.
- `run_sql_transaction`: Executes a series of SQL queries within a single transaction against a Neon database.
- `get_database_tables`: Lists all tables within a specified Neon database.
- `describe_table_schema`: Retrieves the schema definition of a specific table, detailing columns, data types, and constraints. - `list_slow_queries`: Identifies performance bottlenecks by finding the slowest queries in a database. Requires the pg_stat_statements extension. **Database migrations (schema changes):** - `prepare_database_migration`: Initiates a database migration process. Critically, it creates a temporary branch to apply and test the migration safely before affecting the main branch. - `complete_database_migration`: Finalizes and applies a prepared database migration to the main branch. This action merges changes from the temporary migration branch and cleans up temporary resources. **Query performance tuning:** - `explain_sql_statement`: Analyzes a SQL query and returns detailed execution plan information to help understand query performance. - `prepare_query_tuning`: Identifies potential performance issues in a SQL query and suggests optimizations. Creates a temporary branch for testing improvements. - `complete_query_tuning`: Finalizes and applies query optimizations after testing. Merges changes from the temporary tuning branch to the main branch. **Neon Auth:** - `provision_neon_auth`: Provisions Neon Auth for a Neon project. Sets up authentication infrastructure by creating an integration with Stack Auth (`@stackframe/stack`). ## Usage examples After setting up either the remote or local server and connecting your MCP client, you can start interacting with your Neon databases using natural language. **Example interactions** - **Search resources:** `"Can you search for 'production' across my Neon resources?"` - **List projects:** `"List my Neon projects"` - **Create a new project:** `"Create a Neon project named 'my-test-project'"` - **List tables in a database:** `"What tables are in the database 'my-database' in project 'my-project'?"` - **Add a column to a table:** `"Add a column 'email' of type VARCHAR to the 'users' table in database 'main' of project 'my-project'"` - **Run a query:** `"Show me the first 10 rows from the 'users' table in database 'my-database'"` - **Generate a schema diff:** `"Generate a schema diff for branch 'br-feature-auth' in project 'my-project'"` ## API key-based authentication The Neon MCP Server supports API key-based authentication for remote access, in addition to OAuth. This allows for simpler authentication using your [Neon API key (personal or organization)](https://neon.com/docs/manage/api-keys) for programmatic access. API key configuration is shown below: ```json { "mcpServers": { "Neon": { "url": "https://mcp.neon.tech/mcp", "headers": { "Authorization": "Bearer <$NEON_API_KEY>" } } } } ``` > Currently, only [streamable HTTP](https://neon.com/docs/ai/neon-mcp-server#streamable-http-support) responses are supported with API-key based authentication. Server-Sent Events (SSE) responses are not yet supported for this authentication method. ## Streamable HTTP support The Neon MCP Server supports streamable HTTP as an alternative to Server-Sent Events (SSE) for streaming responses. This makes it easier to consume streamed data in environments where SSE is not ideal — such as CLI tools, backend services, or AI agents. 
To use streamable HTTP, make sure you are using the latest remote MCP server and specify the `https://mcp.neon.tech/mcp` endpoint, as shown in the following configuration example:

```json
{
  "mcpServers": {
    "neon": {
      "command": "npx",
      "args": ["-y", "mcp-remote@latest", "https://mcp.neon.tech/mcp"]
    }
  }
}
```

## Search across resources

The Neon MCP Server includes a `search` tool that lets you find resources across all your Neon organizations, projects, and branches with a single query. Ask your AI assistant:

```
Can you search for "production" across my Neon resources?
```

The assistant will search through all accessible resources and return structured results with direct links to the Neon Console. Results include the resource name, type (organization, project, or branch), and Console URL for easy navigation. A companion `fetch` tool lets you retrieve detailed information about any resource using the ID returned by the search. This is particularly useful when working with multiple organizations or large numbers of projects, making it easier to discover and navigate your Neon resources.

## Read-only mode

The Neon MCP Server supports read-only mode for safe operation in cloud and production environments. Enable it by adding the `x-read-only: true` header to your MCP configuration:

```json
{
  "mcpServers": {
    "Neon": {
      "url": "https://mcp.neon.tech/mcp",
      "headers": {
        "x-read-only": "true"
      }
    }
  }
}
```

When enabled, the server restricts all operations to read-only tools. Only list and describe tools are available, and SQL queries automatically run in read-only transactions. This provides a safe method for querying and analyzing production databases without any risk of accidental modifications.

## Guided onboarding

The Neon MCP Server includes a `load_resource` tool that provides comprehensive getting-started guidance directly through your AI assistant. Ask your assistant:

```
Get started with Neon
```

The assistant will load detailed step-by-step instructions covering organization setup, project configuration, connection strings, schema creation, and migrations. This works in IDEs that don't fully support MCP resources and ensures onboarding guidance is explicitly loaded when you need it.

## MCP security guidance

The Neon MCP server provides access to powerful tools for interacting with your Neon database, such as `run_sql` and `run_sql_transaction`, which can read and modify data, and `delete_branch` and `delete_project`, which can remove resources entirely. MCP tools are useful in development and testing, but **we do not recommend using MCP tools in production environments**.

### Recommended usage

- Use MCP only for **local development** or **IDE-based workflows**.
- Never connect MCP agents to production databases.
- Avoid exposing production data or PII to MCP — only use anonymized data.
- Disable MCP tools capable of accessing or modifying data when they are not in use.
- Only grant MCP access to trusted users.

### Human oversight and access control

- **Always review and authorize actions** requested by the LLM before execution. The MCP server grants powerful database management capabilities through natural language requests, so human oversight is essential.
- **Restrict access** to ensure that only authorized users and applications have access to the Neon MCP Server and associated API keys.
- **Monitor usage** and regularly audit who has access to your MCP server configurations and Neon API keys.

By following these guidelines, you reduce the risk of accidental or unauthorized actions when working with Neon's MCP Server.
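To make the transport concrete, here is a minimal, hypothetical TypeScript sketch that sends one JSON-RPC request to the `https://mcp.neon.tech/mcp` endpoint using API key authentication and the read-only header described above. It only illustrates the headers and wire format; a real MCP client library performs the `initialize` handshake and session management for you, and the server may reject a bare request like this one.

```typescript
// Hypothetical probe of the remote Neon MCP endpoint over streamable HTTP.
// Assumes the NEON_API_KEY environment variable is set; requires Node 18+ for fetch.
const NEON_API_KEY = process.env.NEON_API_KEY!;

async function listTools(): Promise<void> {
  const response = await fetch('https://mcp.neon.tech/mcp', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Streamable HTTP servers may answer with plain JSON or an event stream
      Accept: 'application/json, text/event-stream',
      Authorization: `Bearer ${NEON_API_KEY}`,
      'x-read-only': 'true', // restrict the session to read-only tools
    },
    // MCP messages are JSON-RPC 2.0; `tools/list` asks the server to enumerate its tools
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
  });
  console.log(response.status, await response.text());
}

listTools().catch(console.error);
```

In practice, you would let an MCP client such as Cursor, Claude Code, or `mcp-remote` speak the protocol; the sketch only shows where the API key and `x-read-only` headers fit.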
## Conclusion

The Neon MCP Server enables natural language interaction with Neon Postgres databases, offering a simplified way to perform database management tasks. You can perform actions such as creating new Neon projects and databases, managing branches, executing SQL queries, and making schema changes, all through conversational requests. Features like branch-based migrations contribute to safer schema modifications. By connecting your preferred MCP client to the Neon MCP Server, you can streamline database administration and development workflows, making it easier for users with varying technical backgrounds to interact with Neon databases.

## Resources

- [MCP Protocol](https://modelcontextprotocol.org)
- [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api)
- [Neon API Keys](https://neon.com/docs/manage/api-keys#creating-api-keys)
- [Neon MCP server GitHub](https://github.com/neondatabase/mcp-server-neon)

---

# Source: https://neon.com/llms/ai-semantic-kernel.txt

# Semantic Kernel

> The Semantic Kernel documentation for Neon outlines the integration of AI capabilities into applications, enabling users to build and deploy AI models efficiently within the Neon platform.

## Source

- [Semantic Kernel HTML](https://neon.com/docs/ai/semantic-kernel): The original HTML version of this documentation

[Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/overview/) is an open-source SDK developed by Microsoft that enables the integration of large language models (LLMs) with traditional programming constructs. It allows developers to build AI-powered applications by combining natural language processing, planning, and memory capabilities. Semantic Kernel supports orchestration of AI workflows, plugin-based extensibility, and vector-based memory storage for retrieval-augmented generation (RAG) use cases. It is commonly used to create intelligent agents, chatbots, and automation tools that leverage LLMs like OpenAI's GPT models.

## Initialize Postgres Vector Store

Semantic Kernel supports using Neon as a vector store, using the `pgvector` extension and the existing [Postgres Vector Store connector](https://learn.microsoft.com/en-us/semantic-kernel/concepts/vector-store-connectors/out-of-the-box-connectors/postgres-connector?pivots=programming-language-csharp) to access and manage data in Neon. It establishes a Neon connection, enables vector support, and initializes a vector store for AI-driven search and retrieval tasks.

Here's how you can initialize a Postgres Vector Store with Semantic Kernel in .NET using the `Microsoft.SemanticKernel.Connectors.Postgres` NuGet package:

```csharp
// File: Program.cs
using Microsoft.SemanticKernel.Connectors.Postgres;
using Npgsql;

class Program
{
    static void Main()
    {
        var connectionString = "Host=myhost;Username=myuser;Password=mypass;Database=mydb";

        var dataSourceBuilder = new NpgsqlDataSourceBuilder(connectionString);
        dataSourceBuilder.UseVector();
        using var dataSource = dataSourceBuilder.Build();

        var vectorStore = new PostgresVectorStore(dataSource);
        Console.WriteLine("Vector store created successfully.");
    }
}
```

## Generate Embeddings with Azure OpenAI

You can generate text embeddings using Azure OpenAI in the same .NET application.
```csharp
// File: Program.cs
using Microsoft.SemanticKernel.Connectors.Postgres;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;
using Npgsql;
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        string connectionString = "Host=myhost;Username=myuser;Password=mypass;Database=mydb";

        // Create and configure the vector store
        var dataSourceBuilder = new NpgsqlDataSourceBuilder(connectionString);
        dataSourceBuilder.UseVector();
        using var dataSource = dataSourceBuilder.Build();

        var vectorStore = new PostgresVectorStore(dataSource);
        Console.WriteLine("Vector store created successfully.");

        // Generate embeddings using Azure OpenAI
        var embeddingService = new AzureOpenAITextEmbeddingGenerationService(
            deploymentName: "your-deployment-name",
            endpoint: "https://your-resource-name.openai.azure.com",
            apiKey: "your-api-key"
        );

        string text = "This is an example sentence for embedding.";
        var embedding = await embeddingService.GenerateEmbeddingsAsync(new[] { text });
        Console.WriteLine($"Generated Embedding: [{string.Join(", ", embedding[0].ToArray().Take(5))}...]");
    }
}
```

## Chat Completions with Azure OpenAI

Here is how you can run a chat completion query with Azure OpenAI and Semantic Kernel:

```csharp
// File: Program.cs
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Postgres;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;
using Npgsql;
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        string connectionString = "Host=myhost;Username=myuser;Password=mypass;Database=mydb";

        // Step 1: Create and configure the vector store
        var dataSourceBuilder = new NpgsqlDataSourceBuilder(connectionString);
        dataSourceBuilder.UseVector();
        using var dataSource = dataSourceBuilder.Build();

        var vectorStore = new PostgresVectorStore(dataSource);
        Console.WriteLine("✅ Vector store created successfully.");

        // Step 2: Generate embeddings using Azure OpenAI
        var embeddingService = new AzureOpenAITextEmbeddingGenerationService(
            deploymentName: "your-deployment-name",
            endpoint: "https://your-resource-name.openai.azure.com",
            apiKey: "your-api-key"
        );

        string text = "This is an example sentence for embedding.";
        var embedding = await embeddingService.GenerateEmbeddingsAsync(new[] { text });
        Console.WriteLine($"✅ Generated Embedding: [{string.Join(", ", embedding[0].ToArray().Take(5))}...]");

        // Step 3: Perform chat completion using Azure OpenAI
        var kernel = Kernel.CreateBuilder()
            .AddAzureOpenAIChatCompletion(
                deploymentName: "your-chat-deployment-name",
                endpoint: "https://your-resource-name.openai.azure.com",
                apiKey: "your-api-key"
            ).Build();

        string userPrompt = "Explain Retrieval-Augmented Generation (RAG) in simple terms.";
        var response = await kernel.InvokePromptAsync(userPrompt);
        Console.WriteLine("✅ Chat Completion Response:");
        Console.WriteLine(response);
    }
}
```

## Examples

Explore examples and sample code for using Semantic Kernel with Neon Serverless Postgres.

- [RAG .NET console app (Azure OpenAI + Semantic Kernel)](https://github.com/neondatabase-labs/neon-semantic-kernel-examples): A .NET RAG example app built with Azure OpenAI and Semantic Kernel

---

# Source: https://neon.com/llms/azure-azure-deploy.txt

# Deploy Neon on Azure

> The document outlines the steps to deploy Neon on Microsoft Azure, detailing the configuration and setup process specific to Neon's requirements on the Azure platform.
## Source

- [Deploy Neon on Azure HTML](https://neon.com/docs/azure/azure-deploy): The original HTML version of this documentation

**Important** deprecated: The Neon Azure Native Integration is deprecated and reaches end of life on **January 31, 2026**. After this date, Azure-managed organizations will no longer be available. [Transfer your projects to a Neon-managed organization](https://neon.com/docs/import/migrate-from-azure-native) to continue using Neon.

What you will learn:

- How to deploy Neon on Azure as a native service
- About creating Neon projects in Azure regions without the integration

Related resources:

- [Neon on Azure](https://neon.com/docs/manage/azure)
- [Developing with Neon on Azure](https://neon.com/docs/azure/azure-develop)

This guide steps you through deploying Neon as an Azure native service.

**Note**: You can also create Neon projects in Azure regions without the native service integration. To learn more, see [Create Neon projects in Azure regions without the integration](https://neon.com/docs/azure/azure-deploy#create-neon-projects-in-azure-regions-without-the-integration).

## Prerequisites

- **Azure account**: If you don't have an active Azure subscription, [create a free account](https://azure.microsoft.com/free).
- **Access level**: Only users with **Owner** or **Contributor** access roles on the Azure subscription can set up the integration. Ensure you have the appropriate access before proceeding. For information about assigning roles in Azure, see [Steps to assign an Azure role](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-steps).

## Find Neon on Azure and subscribe

1. Use the search bar at the top of the [Azure portal](https://portal.azure.com/) to find the **Neon Serverless Postgres** offering. Alternatively, go to the [Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) and search for **Neon Serverless Postgres**.
2. Subscribe to the service. You will be directed to the [Create a Neon Serverless Postgres Resource](https://neon.com/docs/azure/azure-deploy#create-a-neon-resource) page.

## Create a Neon Resource

1. On the **Create a Neon Serverless Postgres Resource** page, enter values for the properties described below.

   | Property | Description |
   | :--- | :--- |
   | **Subscription** | From the drop-down, select an Azure subscription where you have Owner or Contributor access. |
   | **Resource group** | Select an existing Azure resource group or create a new one. A resource group is like a container or a folder used to organize and manage resources in Azure. For more information, see [Azure Resource Group overview](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview). |
   | **Resource Name** | Enter a name for the Azure resource representing your Neon organization. This name is used only in Azure. |
   | **Region** | Select a region to deploy your Azure resource. This is the region for your Azure resource, not for your Neon projects and data. You will select from [Azure-supported regions](https://neon.com/docs/introduction/regions#azure-regions) when you create the Neon project in a later step. For example, you can create a Neon resource in the (US) West US 3 region and create a Neon project in the (Europe) Germany West Central region. |
   | **Neon Organization name** | Provide a name for your [Neon Organization](https://neon.com/docs/reference/glossary#organization), such as a team name or company name. The name you specify will be your [Organization](https://neon.com/docs/reference/glossary#organization) name in the Neon Console. Your Neon projects will reside in this named organization. |
   | **Plan** | Select a plan. You have three to choose from: **Free**, **Scale**, and **Business**. Select **Change Plan** to view details about each plan. For more information about Neon's plans, please refer to the [Neon Pricing](https://neon.com/pricing) page. The Neon **Launch Plan** is currently not available in the Azure Marketplace. |
   | **Billing term** | Select a billing term for the selected plan. You can choose from a **1-Month** or a **1-Year** billing term (monthly or yearly billing). |

1. Review your **Price + Payment options** and **Subtotal**, then select **Next**.
1. On the **Project** page, enter a name for your Neon project, select a Postgres version, specify a name for your database, and choose a region. We recommend selecting the region closest to your application.

   **Note**: A Neon organization created via the Azure portal supports creating Neon projects in [Azure regions](https://neon.com/docs/introduction/regions#azure-regions) only. Neon's AWS regions are not supported with Neon on Azure.

1. Click **Next**.
1. Optionally specify tags for your resource, then click **Next**. For more about tags, see [Use tags to organize your Azure resources](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources).
1. On the **Review + Create** page, review your selections, the [Azure Marketplace Terms](https://learn.microsoft.com/en-us/legal/marketplace/marketplace-terms), and the Neon [Terms of Use](https://neon.com/terms-of-service) and [Privacy Policy](https://neon.com/privacy-policy).
1. Select **Create** to initiate the resource deployment, which may take a few moments.
1. When your deployment is complete, click the **Go to resource** button under **Next steps** to view your new Neon resource.
1. Select the **Go to Neon** link under **Getting started** to open the Neon Console. You will be directed to the Neon Console where you can start working with your newly created Neon organization and project.
1. From here, follow the [Neon Getting Started](https://neon.com/docs/get-started/signing-up) guide to begin working with your Neon project and get familiar with the platform.

## Create Neon projects in Azure regions without the integration

If you want to deploy a Neon project to an Azure region without using the **Azure Native ISV Service** integration, you can simply select one of our supported Azure regions when creating a Neon project from the Neon Console. If you do not use the Azure integration, there is no difference from a Neon project created in an AWS region — your Neon project simply resides in an Azure region instead of an AWS region.
You might consider this option if:

- Part of your infrastructure runs on Azure but you don't need the native integration
- An Azure region is closer to your application than a Neon AWS region
- You want to manage billing through Neon rather than Azure

Creating a Neon project in an Azure region without using the **Azure Native ISV Service** is supported via the Neon Console, CLI, and API.

Tab: Console

To create a Neon project from the console, follow the [Create project](https://neon.com/docs/manage/projects#create-a-project) steps. Select **Azure** as the **Cloud Service Provider**, and choose one of the available [Azure regions](https://neon.com/docs/introduction/regions).

Tab: CLI

To create a Neon project using the Neon CLI, use the [neon projects create](https://neon.com/docs/reference/cli-projects#create) command:

```bash
neon projects create --name mynewproject --region-id azure-eastus2
```

For Azure `region-id` values, see [Regions](https://neon.com/docs/introduction/regions).

Tab: API

To create a Neon project using the Neon API, use the [Create project](https://api-docs.neon.tech/reference/createproject) endpoint:

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" \
     --header 'content-type: application/json' \
     --data '
{
  "project": {
    "pg_version": 16,
    "region_id": "azure-eastus2"
  }
}
'
```

For Azure `region_id` values, see [Regions](https://neon.com/docs/introduction/regions).

---

# Source: https://neon.com/llms/azure-azure-develop.txt

# Develop with Neon on Azure

> The document "Develop with Neon on Azure" outlines the steps and configurations required to integrate and develop applications using Neon on the Azure cloud platform.

## Source

- [Develop with Neon on Azure HTML](https://neon.com/docs/azure/azure-develop): The original HTML version of this documentation

**Important** deprecated: The Neon Azure Native Integration is deprecated and reaches end of life on **January 31, 2026**. After this date, Azure-managed organizations will no longer be available. [Transfer your projects to a Neon-managed organization](https://neon.com/docs/import/migrate-from-azure-native) to continue using Neon.

What you will find on this page:

- Getting started resources
- How to connect
- Azure CLIs and SDKs
- Azure-focused guides, sample apps, and tutorials

Related resources:

- [Neon on Azure](https://neon.com/docs/manage/azure)
- [Deploy Neon on Azure](https://neon.com/docs/azure/azure-deploy)

## Azure CLIs and SDKs for Neon

Azure provides an Azure-native CLI and SDKs for working with the Neon platform. In addition, the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), [CLI](https://neon.com/docs/reference/neon-cli), and [SDKs](https://neon.com/docs/reference/sdk) are also available to you.
- [Azure CLI — az neon](https://learn.microsoft.com/en-us/cli/azure/neon?view=azure-cli-latest): Manage your Neon Resource with the Azure CLI
- [Azure SDK for Java](https://learn.microsoft.com/en-us/java/api/overview/azure/neonpostgres?view=azure-java-preview): Manage your Neon Resource with the Azure SDK for Java
- [Azure SDK for JavaScript](https://learn.microsoft.com/en-us/javascript/api/overview/azure/neonpostgres?view=azure-node-preview): Manage your Neon Resource with the Azure SDK for JavaScript
- [Azure SDK for .NET](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/neonpostgres?view=azure-dotnet-preview): Manage your Neon Resource with the Azure SDK for .NET
- [Azure SDK for Python](https://learn.microsoft.com/en-us/python/api/overview/azure/neonpostgres?view=azure-python-preview): Manage your Neon Resource with the Azure SDK for Python
- [PowerShell](https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.powershell.cmdlets.neonpostgres?view=az-ps-latest): Manage your Neon Resource with PowerShell

## Getting started

- [Get started with Neon Serverless Postgres on Azure](https://neon.com/guides/neon-azure-integration) — A step-by-step guide to deploying Neon's serverless Postgres via the Azure Marketplace
- [Familiarize yourself with Neon](https://neon.com/docs/get-started/signing-up) — Get to know the Neon platform and features by stepping through our official getting started guides

## Migrate data to Neon on Azure

- [Migrating data to Neon](https://neon.com/docs/import/migrate-intro) — Learn how to move data to Neon from various sources using different migration tools and methods

## Connect to Neon

- [Connecting to Neon](https://neon.com/docs/connect/connect-intro) — Learn about connecting to a Neon database
- [Connect with Azure Service Connector](https://neon.com/guides/azure-service-connector) — Connect Azure services to Neon
- [Integrate Neon Serverless Postgres with Service Connector](https://learn.microsoft.com/en-us/azure/service-connector/how-to-integrate-neon-postgres?tabs=dotnet) — Azure documentation for Service Connector

## AI

- [Multitenant RAG with Neon on Azure](https://neon.com/blog/multitenant-private-ai-chat-with-neon-on-azure) — Build a tenant AI chat solution with Neon on Azure
- [Azure AI Language with Neon](https://neon.com/guides/azure-ai-language) — Analyze customer feedback using Azure AI Language and store results in Neon
- [Building an AI chatbot with Neon](https://neon.com/guides/azure-ai-chatbot) — Create AI-powered chatbots with Neon and Azure
- [Azure AI Search with Neon](https://neon.com/guides/azure-ai-search) — Implement search functionality using Azure AI Search and Neon
- [AI-powered email assistant in Azure](https://neon.com/blog/how-to-create-your-personal-ai-powered-email-assistant-in-azure) — Create a personal AI email assistant in Azure
- [SQL query assistant with .NET and Azure OpenAI](https://neon.com/blog/building-sql-query-assistant-with-dotnet-azure-functions-openai) — Build an intelligent SQL query assistant with Neon, .NET, and Azure OpenAI
- [Generative feedback loops with Azure OpenAI](https://neon.com/blog/generative-feedback-loops-with-neon-serverless-postgres-azure-functions-and-azure-openai) — Create generative feedback loops with Neon, Azure Functions, and Azure OpenAI
- [Build your first AI agent for Postgres on Azure](https://neon.com/guides/azure-ai-agent-service) — Build an AI agent for Postgres using Azure AI Agent Service
- [Semantic Kernel](https://neon.com/docs/ai/semantic-kernel) — Build AI RAG and agentic workflows with Semantic Kernel and Neon

### Live AI demos

- [Multi-user RAG in Azure](https://multiuser-rag-g0e0g3h6ekhtf7cg.germanywestcentral-01.azurewebsites.net/): Creates a Neon project with vector storage per user — each user's data is completely isolated
- [AI-Powered Neon Database Q&A Chatbot in Azure](https://rag-vrjtpx5tgrsnm-ca.wittyriver-637b2279.eastus2.azurecontainerapps.io/): Ask questions about data in a Neon database using React and FastAPI in Python

## Frameworks & languages

- [.NET with Neon](https://neon.com/docs/guides/dotnet-npgsql) — Connect a .NET (C#) application to Neon
- [Entity Framework with Neon](https://neon.com/docs/guides/dotnet-entity-framework) — Connect Entity Framework applications to Neon
- [Entity Framework schema migrations](https://neon.com/docs/guides/entity-migrations) — Schema migration with Neon and Entity Framework
- [Azure DevOps Entity Framework migrations](https://neon.com/guides/azure-devops-entity-migrations) — Manage Entity Framework migrations with Azure DevOps
- [ASP.NET with Neon and Entity Framework](https://neon.com/guides/dotnet-neon-entity-framework) — Build ASP.NET Core applications with Neon and Entity Framework
- [RESTful API with ASP.NET Core and Swagger](https://neon.com/guides/aspnet-core-api-neon) — Build APIs with ASP.NET Core, Swagger, and Neon
- [Neon read replicas with Entity Framework](https://neon.com/guides/read-replica-entity-framework) — Scale .NET applications with Entity Framework and Neon read replicas

### Online course

- [ASP.NET Core Development with Neon Serverless Postgres & Azure](https://www.udemy.com/course/aspnet-core-development-with-neon-postgresql-azure): Build a full-stack CRM with ASP.NET Core, EF Core, PostgreSQL, and deploy to Azure Cloud step by step

## Functions & serverless

- [Query Postgres from Azure Functions](https://neon.com/guides/query-postgres-azure-functions) — Connect from Azure Functions to Neon
- [Building a serverless referral system](https://neon.com/guides/azure-functions-referral-system) — Create a referral system with Neon and Azure Functions
- [Building a robust JSON API with TypeScript](https://neon.com/guides/azure-functions-hono-api) — Build APIs with TypeScript, Postgres, and Azure Functions
- [Azure Static Web Apps with Neon](https://neon.com/guides/azure-todo-static-web-app) — Build Azure Static Web Apps with Neon
- [Azure Logic Apps with Neon](https://neon.com/guides/azure-logic-apps) — Integrate Neon with Azure Logic Apps

## Security & access control

- [Row-level security with Azure AD](https://neon.com/docs/guides/neon-rls-azure-ad) — Implement row-level security with Azure Active Directory

---

# Source: https://neon.com/llms/azure-azure-manage.txt

# Manage Neon on Azure

> The document outlines the procedures for managing Neon databases on Microsoft Azure, detailing steps for setup, configuration, and maintenance within the Azure environment.

## Source

- [Manage Neon on Azure HTML](https://neon.com/docs/azure/azure-manage): The original HTML version of this documentation

**Important** deprecated: The Neon Azure Native Integration is deprecated and reaches end of life on **January 31, 2026**. After this date, Azure-managed organizations will no longer be available. [Transfer your projects to a Neon-managed organization](https://neon.com/docs/import/migrate-from-azure-native) to continue using Neon.
What you will learn:

- How to create additional Neon projects on Azure
- How to transfer Neon projects to Azure
- How to delete a Neon resource on Azure

Related resources:

- [Neon on Azure](https://neon.com/docs/manage/azure)
- [Deploying Neon on Azure](https://neon.com/docs/manage/azure-deploy)
- [Develop with Neon on Azure](https://neon.com/docs/manage/azure-develop)

This topic describes how to manage your Neon resource on Azure. It covers how to create additional Neon projects, how to transfer Neon projects to an Azure-created Neon organization, how to delete a Neon resource, and troubleshooting.

## Create additional Neon projects

You can add Neon projects to an existing Neon resource from the **Projects** page in Azure or from the Neon Console. In Azure, navigate to the **Projects** page and select **Create Project**. See [Create a project](https://neon.com/docs/manage/projects#create-a-project) for how to create a project from the Neon Console.

## Create branches

A branch is an independent copy of your database that you can use for development or testing. It will not increase storage until you modify data or the branch falls out of your project's [restore window](https://neon.com/docs/manage/projects#configure-your-restore-window). Changes made on a branch do not affect the parent database. To learn more, see [Branching](https://neon.com/docs/introduction/branching).

To create branches in the Azure Portal:

1. Navigate to your Neon resource and select the **Projects** page.
1. Select your Neon project. You should see your existing branches.
1. To create a new branch, select **Create branch** to open the **Create new Branch** drawer.
1. Specify a branch name and select a parent branch, and click **Create**.

You now have an independent and isolated copy of your parent branch with its own compute resources.

The branch page shows the following information for each branch:

| **Column** | **Description** |
| :--- | :--- |
| **Branch** | The name of the database branch. |
| **Parent** | The parent branch from which the branch was created. |
| **Compute hours** | Total [compute hours](https://neon.com/docs/reference/glossary#compute-hours) used by the branch's primary compute. |
| **Primary Compute** | The allocated autoscaling range, in [Compute Units (CU)](https://neon.com/docs/reference/glossary#compute-unit-cu), for the branch's primary compute. |
| **Data size** | The [logical size](https://neon.com/docs/reference/glossary#logical-data-size) of the data stored in the branch. |

To learn about integrating branching into your developer workflow, see our [Database branching workflow primer](https://neon.com/docs/get-started/workflow-primer).

You can also create branches in the Neon Console. See [Create a branch](https://neon.com/docs/manage/branches#create-a-branch) for instructions.

## Delete branches

Important points about branch deletion:

- A branch deletion action cannot be undone.
- You cannot delete a branch that has children. You need to delete the child branches first.

To delete a branch in the Azure Portal:

1. Navigate to your Neon resource.
1. Select your Neon project. You should see your existing branches.
1. Select the branch you want to delete.
1. Select **Delete Neon branch** to open the **Delete Branch** drawer. You'll need to type the branch name to enable the **Delete** button.
1. Click **Delete** and confirm.
You can also delete branches in the Neon Console. For instructions, see [Delete a branch](https://neon.com/docs/manage/branches#delete-a-branch).

## Connect to a database

You can connect to your Neon database using a Postgres database connection URL. To retrieve a connection URL for your Neon database:

1. Navigate to your Neon resource.
1. Select **Settings** > **Connect**.
1. On the **Connect** page, use the drop-down menus to select a Neon **Project**, **Branch**, **Database**, **Role**, and **Compute**. The values you select define the connection string for your database:

   | Value | Description |
   | :--- | :--- |
   | **Project** | The [Neon project](https://neon.com/docs/reference/glossary#project) you want to connect to. A Neon project includes databases and branches. |
   | **Branch** | A [branch](https://neon.com/docs/reference/glossary#branch) within your Neon project where your database resides. |
   | **Database** | The name of the [Postgres database](https://neon.com/docs/reference/glossary#database) you want to connect to. |
   | **Role** | The [Postgres role](https://neon.com/docs/reference/glossary#postgres-role) (user) you want to connect with. |
   | **Compute** | The compute that runs Postgres. Usually "Primary"—this is the read-write compute for the branch, but you may also have [read replica](https://neon.com/docs/reference/glossary#read-replica) computes. |

You can toggle the **Connection pooling** option to use a pooled connection string, which supports up to 10,000 concurrent connections. A pooled connection string is recommended for most use cases. Use a direct connection for `pg_dump`, session-dependent features, or schema migrations. For more about pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

For more about connecting to your Neon database, see [Connect from any app](https://neon.com/docs/connect/connect-from-any-app).

## Transfer projects to an Azure-created Neon organization

You can transfer existing Neon projects to an Azure-created organization, but note these restrictions:

- The Neon project you are transferring must be in an [Azure region](https://neon.com/docs/introduction/regions#azure-regions). Azure-created Neon organizations do not support projects created in [AWS regions](https://neon.com/docs/introduction/regions#aws-regions).
- The billing plan of the Azure-managed organization must match or exceed the billing plan of the organization you are transferring projects from. For example, attempting to transfer projects from a Neon paid plan organization to a Free plan Azure-managed organization will result in an error.

For detailed transfer steps, see [Transfer projects to an organization](https://neon.com/docs/manage/orgs-project-transfer).

If the restrictions above prevent you from transferring your project, consider these options:

- Open a [support ticket](https://console.neon.tech/app/projects?modal=support) for assistance with transferring your Neon project (supported only for projects that reside in [Azure regions](https://neon.com/docs/introduction/regions#azure-regions)). If you're on the Neon Free plan and can't open a support ticket, you can email Neon support at `help@databricks.com`.
- Create a new Neon project in an Azure-managed organization and migrate your database using one of these options:
  - [Neon Import Data Assistant](https://neon.com/docs/import/import-data-assistant)
  - [pg_dump and pg_restore](https://neon.com/docs/import/migrate-from-postgres#run-a-test-migration)

## Delete a Neon Resource in Azure

If you no longer need your Neon resource, you can delete it to stop all associated billing through the Azure Marketplace.

**Important**: Deleting a Neon resource from Azure removes the Neon Organization and all Neon projects and data associated with that resource.

Follow these steps to delete the resource:

1. In the Azure portal, select the Neon resource you want to delete.
2. On the **Overview** page, select **Delete**.
3. Confirm the deletion by entering the resource's name.
4. Choose the reason for deleting the resource.
5. Select **Delete** to finalize.

Once the resource is deleted, billing will stop immediately, and the Neon Organization and all Neon projects and data associated with that resource will be removed.

## Troubleshoot

If you encounter issues, check the documentation in Azure's troubleshooting guide for Neon: [Troubleshoot Neon Serverless Postgres](https://learn.microsoft.com/en-us/azure/partner-solutions/neon/troubleshoot). If you still need help, contact [Neon Support](https://neon.com/docs/introduction/support).

---

# Source: https://neon.com/llms/changelog.txt

# Changelog

> The document is a changelog detailing updates, improvements, and fixes for Neon, enabling users to track changes and enhancements in the platform's features and functionality.

## Source

- [Changelog HTML](https://neon.com/docs/changelog): The original HTML version of this documentation

---

# Source: https://neon.com/llms/connect-choose-connection.txt

# Choosing your driver and connection type

> The document "Choosing your driver and connection type" guides Neon users in selecting the appropriate database driver and connection type for their specific use case, detailing compatibility and configuration options.

## Source

- [Choosing your driver and connection type HTML](https://neon.com/docs/connect/choose-connection): The original HTML version of this documentation

When setting up your application's connection to your Neon Postgres database, you need to make two main choices:

- **The right driver for your deployment** — Neon Serverless driver or a TCP-based driver
- **The right connection type for your traffic** — pooled connections or direct connections

This flowchart will guide you through these selections.

## Choosing your connection type: flowchart

## Choosing your connection type: drivers and pooling

### Your first choice is which driver to use

- **Serverless**

  If working in a serverless environment and connecting from a JavaScript or TypeScript application, we recommend using the [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver). It handles dynamic workloads with high variability in traffic — for example, Vercel Edge Functions or Cloudflare Workers.

- **TCP-based driver**

  If you're not connecting from a JavaScript or TypeScript application, or you are not developing a serverless application, use a traditional TCP-based Postgres driver. For example, if you're using Node.js with a framework like Next.js, you can add the `pg` client to your dependencies, which serves as the Postgres driver for TCP connections.
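For reference, here is a minimal sketch of the TCP approach using node-postgres (`pg`). It assumes a `DATABASE_URL` environment variable containing a Neon connection string of the kind shown later on this page:

```typescript
// Minimal sketch: querying Neon over TCP with the node-postgres (`pg`) driver.
// Assumes DATABASE_URL is set to a Neon connection string (with sslmode=require).
import { Client } from 'pg';

async function main(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect(); // open the TCP connection
  const { rows } = await client.query('SELECT now()');
  console.log(rows[0]); // e.g. { now: 2025-01-01T00:00:00.000Z }
  await client.end(); // always release the connection
}

main().catch(console.error);
```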
#### HTTP or WebSockets

If you are using the serverless driver, you also need to choose whether to query over HTTP or WebSockets:

- **HTTP**

  Querying over an HTTP [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) request is faster for single, non-interactive transactions, also referred to as "one-shot queries". Issuing [multiple queries](https://neon.com/docs/serverless/serverless-driver#issue-multiple-queries-with-the-transaction-function) via a single, non-interactive transaction is also supported. See [Use the driver over HTTP](https://neon.com/docs/serverless/serverless-driver#use-the-driver-over-http).

- **WebSockets**

  If you require session or interactive transaction support or compatibility with [node-postgres](https://node-postgres.com/) (the popular **npm** `pg` package), use WebSockets. See [Use the driver over WebSockets](https://neon.com/docs/serverless/serverless-driver#use-the-driver-over-websockets).

### Next, choose your connection type: direct or pooled

You then need to decide whether to use direct connections or pooled connections (using PgBouncer for Neon-side pooling):

- **In general, use pooled connections whenever you can**

  Pooled connections can efficiently manage high numbers of concurrent client connections, up to 10,000. This 10K ceiling works best for serverless applications and Neon-side connection pools that have many open connections but infrequent and/or short transactions.

- **Use direct (unpooled) connections if you need persistent connections**

  If your application is focused mainly on tasks like migrations or administrative operations that require stable and long-lived connections, use an unpooled connection.

**Note**: PgBouncer can keep many application connections open (up to 10,000) concurrently, but only a certain number of these can be actively querying the Postgres server at any given time. This number is defined by the PgBouncer `default_pool_size` setting. See [Neon PgBouncer configuration settings](https://neon.com/docs/connect/connection-pooling#neon-pgbouncer-configuration-settings) for details.

For more information on these choices, see:

- [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver)
- [Connection pooling](https://neon.com/docs/connect/connection-pooling)

## Common Pitfalls

Here are some key points to help you navigate potential issues.

| Issue | Description |
| :--- | :--- |
| Double pooling | **Neon-side pooling** uses PgBouncer to manage connections between your application and Postgres. **Client-side pooling** occurs within the client library before connections are passed to PgBouncer. If you're using a pooled Neon connection (supported by PgBouncer), it's best to avoid client-side pooling. Let Neon handle the pooling to prevent retaining unused connections on the client side. If you must use client-side pooling, make sure that connections are released back to the pool promptly to avoid conflicts with PgBouncer. |
| Understanding limits | Don't confuse `max_connections` with `default_pool_size`. `max_connections` is the maximum number of concurrent connections allowed by Postgres, determined by your [Neon compute size configuration](https://neon.com/docs/connect/connection-pooling#connection-limits-without-connection-pooling). `default_pool_size` is the maximum number of backend connections or transactions that PgBouncer supports per user/database pair, also determined by compute size. Simply increasing your compute to get more `max_connections` may not improve performance if the bottleneck is actually your `default_pool_size`. To increase your `default_pool_size`, contact [Support](https://neon.com/docs/introduction/support). |
| Use request handlers | In serverless environments such as Vercel Edge Functions or Cloudflare Workers, WebSocket connections can't outlive a single request. That means `Pool` or `Client` objects must be connected, used, and closed within a single request handler. Don't create them outside a request handler; don't create them in one handler and try to reuse them in another; and to avoid exhausting available connections, don't forget to close them. See [Pool and Client](https://github.com/neondatabase/serverless?tab=readme-ov-file#pool-and-client) for details. |

## Configuration

### Installing the Neon Serverless Driver

You can install the driver with your preferred JavaScript package manager. For example:

```bash
npm install @neondatabase/serverless
```

Find details on configuring the Neon Serverless Driver for querying over HTTP or WebSockets here:

- [Use the driver over HTTP](https://neon.com/docs/serverless/serverless-driver#use-the-driver-over-http)
- [Use the driver over WebSockets](https://neon.com/docs/serverless/serverless-driver#use-the-driver-over-websockets)

### Installing traditional TCP-based drivers

You can use standard Postgres client libraries or drivers. Neon is fully compatible with Postgres, so any application or utility that works with Postgres should work with Neon. Consult the integration guide for your particular language or framework for the right client for your needs:

- [Framework Quickstarts](https://neon.com/docs/get-started/frameworks)
- [Language Quickstarts](https://neon.com/docs/get-started/languages)

### Configuring the connection

Setting up a direct or pooled connection is usually a matter of choosing the appropriate connection string and adding it to your application's `.env` file. You can get your connection string from the [Neon Console](https://neon.com/docs/connect/connect-from-any-app) or via the CLI. For example, to get a pooled connection string via the CLI:

```bash
neon connection-string --pooled true [branch_name]
postgres://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Notice the `-pooler` suffix in the hostname — that's what differentiates a pooled connection string from a direct one. Here's an example of getting a direct connection string from the Neon CLI:

```bash
neon connection-string [branch_name]
postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

For more details, see [How to use connection pooling](https://neon.com/docs/connect/connection-pooling#how-to-use-connection-pooling).
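Once the connection string is in your environment, usage is the same whether it is pooled or direct. As a rough sketch, a one-shot query over HTTP with the Neon serverless driver installed above might look like this, assuming `DATABASE_URL` is set:

```typescript
// Minimal sketch: a one-shot query over HTTP with the Neon serverless driver.
// Assumes DATABASE_URL contains the connection string retrieved above.
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

// Tagged-template queries are parameterized automatically
const rows = await sql`SELECT now()`;
console.log(rows[0]);
```

Top-level `await` here assumes an ESM module; in CommonJS, wrap the calls in an async function.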
## Table summarizing your options

Here is a table summarizing the options we've walked through on this page:

| | Direct Connections | Pooled Connections | Serverless Driver (HTTP) | Serverless Driver (WebSocket) |
| :--- | :--- | :--- | :--- | :--- |
| **Use Case** | Migrations, admin tasks requiring stable connections | High number of concurrent client connections, efficient resource management | One-shot queries, short-lived operations | Transactions requiring persistent connections |
| **Scalability** | Limited by `max_connections` tied to [compute size](https://neon.com/docs/manage/computes#how-to-size-your-compute) | Up to 10,000 application connections (between your application and PgBouncer); however, only [`default_pool_size`](https://neon.com/docs/connect/connection-pooling#neon-pgbouncer-configuration-settings) backend connections (active transactions between PgBouncer and Postgres) are allowed per user/database pair. This limit can be increased upon request. | Automatically scales | Automatically scales |
| **Performance** | Low overhead | Efficient for stable, high-concurrency workloads | Optimized for serverless | Optimized for serverless |

---

# Source: https://neon.com/llms/connect-connect-from-any-app.txt

# Connect from any application

> The document outlines the steps for connecting to a Neon database from various applications, detailing configuration requirements and connection string formats specific to Neon's platform.

## Source

- [Connect from any application HTML](https://neon.com/docs/connect/connect-from-any-app): The original HTML version of this documentation

What you will learn:

- Where to find database connection details
- Where to find example connection snippets
- Protocols supported by Neon

Related topics:

- [Choosing a driver and connection type](https://neon.com/docs/connect/choose-connection)
- [Neon Local Connect](https://neon.com/docs/local/neon-local-connect)
- [Connect to Neon securely](https://neon.com/docs/connect/connect-securely)
- [Connection pooling](https://neon.com/docs/connect/connection-pooling)
- [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor)

You can connect to your Neon database from any application. The standard method is to copy your [connection string](https://neon.com/docs/connect/connect-from-any-app#get-a-connection-string-from-the-neon-console) from the Neon Console and use it in your app or client. For local development, you can also use the [Neon Local Connect extension](https://neon.com/docs/connect/connect-from-any-app#connect-with-the-neon-local-connect-extension), which lets you connect using a simple localhost connection string.

## Get a connection string from the Neon console

When connecting to Neon from an application or client, you connect to a database in your Neon project. In Neon, a database belongs to a branch, which may be the default branch of your project (`production`) or a child branch.
You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**. This opens the **Connect to your database** modal. Select a branch, a compute, a database, and a role. A connection string is constructed for you.

Neon supports both pooled and direct connections to your database. Neon's connection pooler supports a higher number of concurrent connections, so we provide pooled connection details in the **Connect to your database** modal by default, which adds a `-pooler` option to your connection string. If needed, you can get direct database connection details from the modal by disabling the **Connection pooling** toggle. For more information about pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling#connection-pooling).

A Neon connection string includes the role, password, hostname, and database name.

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
             ^    ^         ^                        ^                               ^
             |    |         |- hostname              |- pooler option                |- database
             |    |- password
             |- role
```

**Note**: The hostname includes the ID of the compute, which has an `ep-` prefix: `ep-cool-darkness-a1b2c3d4`. For more information about Neon connection strings, see [connection string](https://neon.com/docs/reference/glossary#connection-string).

You can use the details from the **Connect to your database** modal to configure your database connection. For example, you might place the connection details in an `.env` file, assign the connection string to a variable, or pass the connection string on the command line.

**.env file**

```text
PGHOST=ep-cool-darkness-a1b2c3d4-pooler.us-east-2.aws.neon.tech
PGDATABASE=dbname
PGUSER=alex
PGPASSWORD=AbC123dEf
PGPORT=5432
```

**Variable**

```text
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

**Command-line**

```bash
psql "postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

**Note**: Neon requires that all connections use SSL/TLS encryption, but you can increase the level of protection by configuring the `sslmode` option. For more information, see [Connect to Neon securely](https://neon.com/docs/connect/connect-securely).

## Connect with the Neon Local Connect extension

For local development, you can use the [Neon Local Connect extension](https://neon.com/docs/local/neon-local-connect) to connect to any Neon branch using a simple localhost connection string. Available for VS Code, Cursor, Windsurf, and other VS Code-compatible editors, this extension lets you:

- Connect to any branch using `postgres://neon:npg@localhost:5432/`
- Switch branches without updating your connection string
- Create and manage ephemeral branches directly from your editor
- Access the Neon SQL Editor and Table View with one click

Your app connects to `localhost:5432` while Neon Local routes traffic to your actual Neon branch in the cloud. This eliminates the need to manage different connection strings for different branches during development.

## Where can I find my password?

It's included in your Neon connection string. Click the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.
### Save your connection details to 1Password

If you have the [1Password](https://1password.com/) browser extension, you can save your database connection details to 1Password directly from the Neon Console. In your **Project Dashboard**, click **Connect**, then click **Save in 1Password**.

## What port does Neon use?

Neon uses the default Postgres port, `5432`.

## Connection examples

The **Connect to your database** modal provides connection examples for different frameworks and languages, constructed for the branch, database, and role that you select. See our [frameworks](https://neon.com/docs/get-started/frameworks) and [languages](https://neon.com/docs/get-started/languages) guides for more connection examples.

## Network protocol support

Neon projects provisioned on AWS support both [IPv4](https://en.wikipedia.org/wiki/Internet_Protocol_version_4) and [IPv6](https://en.wikipedia.org/wiki/IPv6) addresses. Neon projects provisioned on Azure support IPv4.

Additionally, Neon provides a low-latency serverless driver that supports connections over WebSockets and HTTP. It's a great fit for serverless or edge environments where connections over TCP may not be supported. For further information, refer to our [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) documentation.

## Connection notes

- Some older Postgres client libraries and drivers, including older `psql` executables, are built without [Server Name Indication (SNI)](https://neon.com/docs/reference/glossary#sni) support, which means that a connection workaround may be required. For more information, see [Connection errors: The endpoint ID is not specified](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified).
- Some Java-based tools that use the pgJDBC driver for connecting to Postgres, such as DBeaver, DataGrip, and CLion, do not support including a role name and password in a database connection string or URL field. When you find that a connection string is not accepted, try entering the database name, role, and password values in the appropriate fields in the tool's connection UI when configuring a connection to Neon. For examples, see [Connect a GUI or IDE](https://neon.com/docs/connect/connect-postgres-gui#connect-to-the-database).
- When connecting from BI tools like Metabase, Tableau, or Power BI, we recommend using a **read replica** instead of your main database compute. BI tools often run long or resource-intensive queries, which can impact performance on your primary branch. Read replicas can scale independently and handle these workloads without affecting your main production traffic. To learn more, see [Neon read replicas](https://neon.com/docs/introduction/read-replicas).

---

# Source: https://neon.com/llms/connect-connect-intro.txt

# Connect to Neon

> The "Connect to Neon" documentation outlines the steps and requirements for establishing a connection to a Neon database, detailing supported connection methods and necessary configurations for users.

## Source

- [Connect to Neon HTML](https://neon.com/docs/connect/connect-intro): The original HTML version of this documentation

Find detailed information and instructions about connecting to Neon from different clients and applications, troubleshooting connection issues, connection pooling, and more. For integrating Neon with different frameworks, languages, and platforms, refer to our [Guides](https://neon.com/docs/guides/guides-intro) documentation.
## Choose a connection type

To help understand which driver and connection type you need, see:

- [Choose a driver and connection type](https://neon.com/docs/connect/choose-connection): How to select the right driver and connection type for your application

## Connect from clients and applications

Learn how to establish a connection to Neon from any application.

- [Connect from any app](https://neon.com/docs/connect/connect-from-any-app): Learn about connection strings and how to connect to Neon from any application
- [Connect locally](https://neon.com/docs/local/neon-local-connect): Connect to any Neon branch using a localhost connection string in VS Code, Cursor, or Windsurf
- [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver): Connect to Neon from serverless environments over HTTP or WebSockets
- [Connect a GUI application](https://neon.com/docs/connect/connect-postgres-gui): Learn how to connect to a Neon database from a GUI application
- [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor): Connect with psql, the native command-line client for Postgres
- [Passwordless auth](https://neon.com/docs/connect/passwordless-connect): Connect without a password using Neon's psql passwordless auth feature

## Connect from frameworks and languages

Learn how to connect to Neon from different frameworks and languages.

- [Connect from various frameworks](https://neon.com/docs/get-started/frameworks): Find detailed instructions for connecting to Neon from frameworks
- [Connect from various languages](https://neon.com/docs/get-started/languages): Find detailed instructions for connecting to Neon from languages

## Troubleshoot connection issues

Troubleshoot and resolve common connection issues.

- [Connection errors](https://neon.com/docs/connect/connection-errors): Learn how to resolve commonly-encountered connection errors
- [Connection latency and timeouts](https://neon.com/docs/connect/connection-latency): Learn about strategies for managing connection latency and timeouts

## Secure connections

Ensure the integrity and security of your connections to Neon.

- [Connect to Neon securely](https://neon.com/docs/connect/connect-securely): Learn how to connect to Neon securely using SSL/TLS encrypted connections
- [Avoid MITM attacks in Postgres 16](https://neon.com/blog/avoid-mitm-attacks-with-psql-postgres-16): Learn how the psql client in Postgres 16 makes it simple to connect securely

## Connection pooling

Optimize your connections by enabling connection pooling.

- [Connection pooling in Neon](https://neon.com/docs/connect/connection-pooling): Learn how to enable connection pooling to support up to 10,000 concurrent connections
- [Connection pooling with Prisma](https://neon.com/docs/guides/prisma#connect-from-serverless-functions): Learn about connecting from Prisma to Neon from serverless functions

---

# Source: https://neon.com/llms/connect-connect-looker-studio.txt

# Connect Looker Studio to Neon

> The document outlines the steps required to connect Looker Studio to a Neon database, detailing the configuration process for seamless data integration and visualization.

## Source

- [Connect Looker Studio to Neon HTML](https://neon.com/docs/connect/connect-looker-studio): The original HTML version of this documentation

[Looker Studio](https://lookerstudio.google.com/) is Google's data visualization and business intelligence platform. This guide explains how to connect your Neon Postgres database to Looker Studio using a PostgreSQL data source.
## Get your database connection string

1. In the Neon Console, select the **project** and **branch** you want to connect to.
2. On the **Project dashboard**, click **Connect**.
3. Click **Show Password** and copy the connection string.

For more details, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Add a PostgreSQL data source in Looker Studio

1. In Looker Studio, click **Create** > **Data Source**.
2. Search for and select the **PostgreSQL** connector, and authorize it.
3. In the **Basic** section, fill in the fields using the details from your connection string. For example, if your connection string is:

   ```text
   postgresql://neondb_owner:npg_aaaaaaaaaaaa@ep-quiet-mountain-a1t1firv-pooler.ap-southeast-1.aws.neon.tech/neondb?sslmode=require&channel_binding=require
   ```

   You would enter:

   - **Host name or IP**: `ep-quiet-mountain-a1t1firv-pooler.ap-southeast-1.aws.neon.tech`
   - **Port (optional)**: Leave blank
   - **Database**: `neondb`
   - **Username**: `neondb_owner`
   - **Password**: `npg_aaaaaaaaaaaa`

## Configure SSL settings

1. Ensure **Enable SSL** is checked.
2. Leave **Enable client authentication** unchecked.

## Upload the server certificate

1. Download the `isrgrootx1.pem` file from https://letsencrypt.org/certs/isrgrootx1.pem. For more information about SSL certificates, see [Connect to Neon securely](https://neon.com/docs/connect/connect-securely).
2. In Looker Studio, upload the `isrgrootx1.pem` file using the **Upload** button next to the **Server Certificate** box.

## Authenticate

Click **Authenticate** to verify the connection. If successful, you will see your Neon tables listed in Looker Studio. In this example, there is one table listed: the `playing_with_neon` example table.

## Connect

Click the **Connect** button in Looker Studio to view table details.

---

# Source: https://neon.com/llms/connect-connect-pgcli.txt

# Connect with pgcli

> The document explains how to connect to a Neon database using pgcli, detailing the necessary steps and configurations for establishing a command-line interface connection.

## Source

- [Connect with pgcli HTML](https://neon.com/docs/connect/connect-pgcli): The original HTML version of this documentation

The `pgcli` client is an interactive command-line interface for Postgres that offers several advantages over the traditional `psql` client, including syntax highlighting, autocompletion, multi-line editing, and query history.

## Installation

For installation instructions, please refer to the `pgcli` [installation documentation](https://www.pgcli.com/install).

## Usage information

To view `pgcli` usage information, run the following command:

```bash
pgcli --help
```

## Connect to Neon

The easiest way to connect to Neon using the `pgcli` client is with a connection string, which you can obtain by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a role, and the database you want to connect to. A connection string is constructed for you.

From your terminal or command prompt, run the `pgcli` client with the connection string. Quote the connection string so that your shell does not interpret the `&` characters. Your command will look something like this:

```bash
pgcli 'postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require'
```

## Run queries

After establishing a connection, try the `pgcli` client by running the following queries. To test the `pgcli` [autocompletion](https://www.pgcli.com/completion) feature, type the `SELECT` query.
```sql
CREATE TABLE my_table AS SELECT now();
SELECT * FROM my_table;
```

The following result is returned:

```sql
SELECT 1
+-------------------------------+
| now                           |
|-------------------------------|
| 2023-05-21 09:23:18.086163+00 |
+-------------------------------+
SELECT 1
Time: 0.116s
```

The `pgcli` [query history](https://www.pgcli.com/history) feature allows you to use the **Up** and **Down** keys on your keyboard to navigate your query history.

The `pgcli` client also supports [named queries](https://www.pgcli.com/named_queries.md). To save a query, type:

```bash
\ns simple SELECT * FROM my_table;
```

To run a named query, type:

```bash
# Run a named query.
\n simple
> SELECT * FROM my_table
+-------------------------------+
| now                           |
|-------------------------------|
| 2023-05-21 09:23:18.086163+00 |
+-------------------------------+
SELECT 1
Time: 0.051s
```

For more information about `pgcli` features and capabilities, refer to the [pgcli documentation](https://www.pgcli.com/docs).

---

# Source: https://neon.com/llms/connect-connect-postgres-gui.txt

# Connect a GUI application

> The document outlines the steps for connecting a GUI application to a Neon PostgreSQL database, detailing configuration settings and connection parameters specific to Neon.

## Source

- [Connect a GUI application HTML](https://neon.com/docs/connect/connect-postgres-gui): The original HTML version of this documentation

This topic describes how to connect to a Neon database from a GUI application or IDE. Most GUI applications and IDEs that support connecting to a Postgres database also support connecting to Neon.

## Gather your connection details

The following details are typically required when configuring a connection:

- hostname
- port
- database name
- role (user)
- password

You can gather these details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a role, and the database you want to connect to. A connection string is constructed for you.

**Note**: Neon supports pooled and direct connections to the database. Use a pooled connection string if your application uses a high number of concurrent connections. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling#connection-pooling).

The connection string includes the role, password, hostname, and database name:

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

- role name: `alex`
- password: `AbC123dEf`
- hostname: `ep-cool-darkness-123456.us-east-2.aws.neon.tech`
- database name: `dbname`

Neon uses the default Postgres port, `5432`.

## Connect to the database

In the GUI application or IDE, enter the connection details into the appropriate fields and connect. Some applications permit specifying a connection string, while others require entering connection details into separate fields. In the pgAdmin example below, connection details are entered into separate fields, and clicking **Save** establishes the database connection.

Some Java-based tools that use the pgJDBC driver for connecting to Postgres, such as DBeaver, DataGrip, and CLion, do not support including a role name and password in a database connection string or URL field. When you find that a connection string is not accepted, try entering the database name, role, and password values in the appropriate fields in the tool's connection UI when configuring a connection to Neon.
For example, the DBeaver client has a **URL** field, but connecting to Neon requires specifying the connection details as shown:

## Tested GUI applications and IDEs

Connections from the GUI applications and IDEs in the table below have been tested with Neon.

**Note**: Some applications require a Server Name Indication (SNI) workaround. Neon uses compute domain names to route incoming connections. However, the Postgres wire protocol does not transfer the server domain name, so Neon relies on the Server Name Indication (SNI) extension of the TLS protocol to do this. Not all application clients support SNI. In these cases, a workaround is required. For more information, see [Connection errors](https://neon.com/docs/connect/connection-errors).

| Application or IDE | Notes |
| :--- | :--- |
| [Appsmith](https://www.appsmith.com/) | |
| [AskYourDatabase](https://www.askyourdatabase.com/) | |
| [AWS Database Migration Service (DMS)](https://aws.amazon.com/dms/) | Use [SNI workaround D](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field). Use a `$` character as a separator between the `endpoint` option and the password. For example: `endpoint=[endpoint_id]$[password]`. Also, you must set **Secure Socket Layer (SSL) mode** to `require`. See [Migrate with AWS DMS](https://neon.com/docs/import/migrate-aws-dms). |
| [Azure Data Studio](https://azure.microsoft.com/en-us/products/data-studio/) | Requires the [PostgreSQL extension](https://learn.microsoft.com/en-us/sql/azure-data-studio/extensions/postgres-extension?view=sql-server-ver16) and [SNI workaround D](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field) |
| [Beekeeper Studio](https://www.beekeeperstudio.io/) | Requires the **Enable SSL** option |
| [CLion](https://www.jetbrains.com/clion/) | |
| [Datagran](https://www.datagran.io/) | Requires [SNI workaround D](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field) |
| [DataGrip](https://www.jetbrains.com/datagrip/) | |
| [DBeaver](https://dbeaver.io/) | |
| [dbForge](https://www.devart.com/dbforge/) | |
| [DbVisualizer](https://www.dbvis.com/) | |
| [DBX](https://getdbx.com/) | |
| [DronaHQ hosted cloud version](https://www.dronahq.com/) | Requires selecting **Connect using SSL** when creating a connector |
| [Forest Admin](https://www.forestadmin.com/) | The database requires at least one table |
| [Grafana](https://grafana.com/docs/grafana/latest/datasources/postgres/) | Requires `sslmode=verify-full`. See [SNI workaround C](https://neon.com/docs/connect/connection-errors#c-set-verify-full-for-golang-based-clients). |
| [Google Looker Studio](https://lookerstudio.google.com/) | Requires **Enable SSL** and uploading the PEM-encoded ISRG Root X1 public root certificate issued by Let's Encrypt, which you can find here: [isrgrootx1.pem](https://letsencrypt.org/certs/isrgrootx1.pem). See the [Looker Studio guide](https://neon.com/docs/connect/connect-looker-studio) for detailed connection instructions. |
| [Google Cloud Platform (GCP)](https://cloud.google.com/gcp) | May require uploading the PEM-encoded ISRG Root X1 public root certificate issued by Let's Encrypt, which you can find here: [isrgrootx1.pem](https://letsencrypt.org/certs/isrgrootx1.pem). |
| [Google Colab](https://colab.research.google.com/) | See [Use Google Colab with Neon](https://neon.com/docs/ai/ai-google-colab). |
| [Luna Modeler](https://www.datensen.com/data-modeling/luna-modeler-for-relational-databases.html) | Requires enabling the SSL/TLS option |
| [Metabase](https://www.metabase.com/) | |
| [Postico](https://eggerapps.at/postico2/) | SNI support since v1.5.21. For older versions, use [SNI workaround B](https://neon.com/docs/connect/connection-errors#b-use-libpq-keyvalue-syntax-in-the-database-field). Postico's [keep-connection-alive mechanism](https://eggerapps.at/postico/docs/v1.2/changelist.html), enabled by default, may prevent your compute from scaling to zero. |
| [PostgreSQL VS Code Extension by Chris Kolkman](https://marketplace.visualstudio.com/items?itemName=ckolkman.vscode-postgres) | |
| [pgAdmin 4](https://www.pgadmin.org/) | |
| [Retool](https://retool.com/) | |
| [Tableau](https://www.tableau.com/) | Use the PostgreSQL connector with the **Require SSL** option selected |
| [TablePlus](https://tableplus.com/) | SNI support on macOS since build 436, and on Windows since build 202. No SNI support on Linux currently. For older versions, use [SNI workaround B](https://neon.com/docs/connect/connection-errors#b-use-libpq-keyvalue-syntax-in-the-database-field). |
| [Segment](https://segment.com/) | Requires [SNI workaround D](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field) |
| [Skyvia](https://skyvia.com/) | Requires setting the **SSL Mode** option to `Require` and **SSL TLS Protocol** to 1.2. The other SSL fields are not required when **SSL Mode** is set to `Require`. |
| [Zoho Analytics](https://www.zoho.com/analytics/) | Requires selecting **Other Cloud Services** as the Cloud Service Provider, and the **Connect directly using IP address** and **Use SSL** options when configuring a PostgreSQL connection. |

## Connecting from Business Intelligence (BI) tools

When connecting from BI tools like Metabase, Tableau, or Power BI, we recommend using a **read replica** instead of your main database compute. BI tools often run long or resource-intensive queries, which can impact performance on your primary branch. Read replicas can scale independently and handle these workloads without affecting your main production traffic. To learn more, see [Neon read replicas](https://neon.com/docs/introduction/read-replicas).

## Connection issues

Applications that use older client libraries or drivers that do not support Server Name Indication (SNI) may not permit connecting to Neon. If you encounter the following error, refer to [Connection errors](https://neon.com/docs/connect/connection-errors) for possible workarounds.

```txt
ERROR: The endpoint ID is not specified. Either upgrade the Postgres client library (libpq) for SNI
support or pass the endpoint ID (the first part of the domain name) as a parameter:
'&options=endpoint%3D'. See https://neon.com/sni for more information.
```

---

# Source: https://neon.com/llms/connect-connect-securely.txt

# Connect to Neon securely

> The document outlines secure connection methods to Neon databases, detailing authentication processes and encryption protocols to ensure data integrity and confidentiality for users.
## Source

- [Connect to Neon securely HTML](https://neon.com/docs/connect/connect-securely): The original HTML version of this documentation

Neon requires that all connections use SSL/TLS encryption to ensure that data sent over the Internet cannot be viewed or manipulated by third parties. Neon rejects connections that do not use SSL/TLS, behaving in the same way as standalone Postgres with only `hostssl` records in a `pg_hba.conf` configuration file.

However, there are different levels of protection when using SSL/TLS encryption, which you can configure by appending an `sslmode` parameter to your connection string.

## Connection modes

When connecting to Neon or any Postgres database, the `sslmode` parameter setting determines the security of the connection. You can append the `sslmode` parameter to your Neon connection string as shown:

```text
postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=verify-full
```

Neon supports the following `sslmode` settings, in order of least to most secure.

| sslmode | Description |
| :--- | :--- |
| `require` | Encryption is required, but the server's SSL/TLS certificate is not verified. |
| `verify-ca` | Encryption is required, and the client verifies that the server's SSL/TLS certificate has been signed by a trusted certificate authority (CA). |
| `verify-full` | Encryption is required, and the server's SSL/TLS certificate is fully verified: the client checks that the certificate is signed by a trusted CA and that the server hostname matches the name in the certificate. |

The choice of which mode to use depends on the specific security requirements of the application and the level of risk that you are willing to tolerate. Neon recommends that you always use `verify-full` mode, which ensures the highest level of security and protects against a wide range of attacks, including man-in-the-middle attacks. The following sections describe how to configure connections using `verify-full` mode.

## Channel binding for enhanced security

`channel_binding=require` is a security parameter that ensures the client and server mutually authenticate each other using SCRAM-SHA-256-PLUS. This helps protect against man-in-the-middle attacks, even when `sslmode=require` is used alone. The required configuration for your connection depends on the client you are using.

## Connect from the psql client

To connect from the `psql` command-line client with `sslmode=verify-full`, provide the path to your system root certificates by setting the `PGSSLROOTCERT` variable to the location of your operating system's root certificates. You can set this environment variable in your shell, typically bash or similar, using the `export` command. For example, if your root certificate is at `/path/to/your/root.crt`, you would set the variable like so:

```bash
export PGSSLROOTCERT="/path/to/your/root.crt"
```

Refer to [Location of system root certificates](https://neon.com/docs/connect/connect-securely#location-of-system-root-certificates) below to find the path to system root certificates for your operating system.
## Connect from other clients

If the client application uses a popular Postgres client library, such as `psycopg2` for Python or JDBC for Java, the library typically provides built-in support for SSL/TLS encryption and verification, allowing you to configure an `sslmode` setting in the connection parameters. For example:

```python
import psycopg2

conn = psycopg2.connect(
    dbname='dbname',
    user='alex',
    password='AbC123dEf',
    host='ep-cool-darkness-123456.us-east-2.aws.neon.tech',
    port='5432',
    sslmode='verify-full',
    sslrootcert='/path/to/your/root.crt'
)
```

However, if your client application uses a non-standard Postgres client, SSL/TLS may not be enabled by default. In this case, you must manually configure the client to use SSL/TLS and specify an `sslmode` configuration. Refer to the client or the client's driver documentation for how to configure the path to your operating system's root certificates.

### Location of system root certificates

Neon uses the public ISRG Root X1 certificate issued by [Let's Encrypt](https://letsencrypt.org/). You can find the PEM-encoded certificate here: [isrgrootx1.pem](https://letsencrypt.org/certs/isrgrootx1.pem). Typically, you do not need to download this file directly, as it is usually available in a root store on your operating system. A root store is a collection of pre-downloaded root certificates from various Certificate Authorities (CAs). These are highly trusted CAs, and their certificates are typically shipped with operating systems and some applications. The location of the root store varies by operating system or distribution. Here are some locations where you might find the required root certificates on popular operating systems:

- Debian, Ubuntu, Gentoo, etc.: `/etc/ssl/certs/ca-certificates.crt`
- CentOS, Fedora, RedHat: `/etc/pki/tls/certs/ca-bundle.crt`
- OpenSUSE: `/etc/ssl/ca-bundle.pem`
- Alpine Linux: `/etc/ssl/cert.pem`
- Android: `/system/etc/security/cacerts`
- macOS: `/etc/ssl/cert.pem`
- Windows: Windows does not provide a file containing the CA roots that can be used by your driver. Many popular programming languages used on Windows, such as C#, Java, or Go, do not require the CA root path to be specified and use the Windows internal system roots by default. However, if you are using a language that requires specifying the CA root path, such as C or PHP, you can obtain a bundle of root certificates from the Mozilla CA Certificate program provided by the Curl project. You can download the bundle at [https://curl.se/docs/caextract.html](https://curl.se/docs/caextract.html). After downloading the file, you will need to configure your driver to point to the bundle.

The system root certificate locations listed above may differ depending on the version, distribution, and configuration of your operating system. If you do not find the root certificates in these locations, refer to your operating system documentation.

---

# Source: https://neon.com/llms/connect-connection-errors.txt

# Connection errors

> The "Connection errors" document outlines troubleshooting steps and solutions for resolving connection issues when accessing Neon databases, detailing common error messages and their corresponding fixes.

## Source

- [Connection errors HTML](https://neon.com/docs/connect/connection-errors): The original HTML version of this documentation

This topic describes how to resolve connection errors you may encounter when using Neon.
The errors covered include:

- [The endpoint ID is not specified](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified)
- [Password authentication failed for user](https://neon.com/docs/connect/connection-errors#password-authentication-failed-for-user)
- [Couldn't connect to compute node](https://neon.com/docs/connect/connection-errors#couldnt-connect-to-compute-node)
- [Can't reach database server](https://neon.com/docs/connect/connection-errors#cant-reach-database-server)
- [Error undefined: Database error](https://neon.com/docs/connect/connection-errors#error-undefined-database-error)
- [Terminating connection due to administrator command](https://neon.com/docs/connect/connection-errors#terminating-connection-due-to-administrator-command)
- [Unsupported startup parameter](https://neon.com/docs/connect/connection-errors#unsupported-startup-parameter)
- [You have exceeded the limit of concurrently active endpoints](https://neon.com/docs/connect/connection-errors#you-have-exceeded-the-limit-of-concurrently-active-endpoints)
- [Remaining connection slots are reserved for roles with the SUPERUSER attribute](https://neon.com/docs/connect/connection-errors#remaining-connection-slots-are-reserved-for-roles-with-the-superuser-attribute)
- [Relation not found](https://neon.com/docs/connect/connection-errors#relation-not-found)
- [Postgrex: DBConnection ConnectionError ssl send: closed](https://neon.com/docs/connect/connection-errors#postgrex-dbconnection-connectionerror-ssl-send-closed)
- [query_wait_timeout SSL connection has been closed unexpectedly](https://neon.com/docs/connect/connection-errors#querywaittimeout-ssl-connection-has-been-closed-unexpectedly)
- [The request could not be authorized due to an internal error](https://neon.com/docs/connect/connection-errors#the-request-could-not-be-authorized-due-to-an-internal-error)
- [Terminating connection due to idle-in-transaction timeout](https://neon.com/docs/connect/connection-errors#terminating-connection-due-to-idle-in-transaction-timeout)
- [DNS resolution issues](https://neon.com/docs/connect/connection-errors#dns-resolution-issues)

**Info**: Connection problems are sometimes related to a system issue. To check for system issues, please refer to the [Neon status page](https://neonstatus.com/).

## The endpoint ID is not specified

With older clients and some native Postgres clients, you may receive the following error when attempting to connect to Neon:

```txt
ERROR: The endpoint ID is not specified. Either upgrade the Postgres client library (libpq) for SNI
support or pass the endpoint ID (the first part of the domain name) as a parameter:
'&options=endpoint%3D'. See https://neon.com/sni for more information.
```

This error occurs if your client library or application does not support the **Server Name Indication (SNI)** mechanism in TLS.

Neon uses compute IDs (the first part of a Neon domain name) to route incoming connections. However, the Postgres wire protocol does not transfer domain name information, so Neon relies on the Server Name Indication (SNI) extension of the TLS protocol to do this.

SNI support was added to `libpq` (the official Postgres client library) in Postgres 14, which was released in September 2021. Clients that use your system's `libpq` library should work if your Postgres version is >= 14. On Linux and macOS, you can check your Postgres version by running `pg_config --version`. On Windows, check the `libpq.dll` version in your Postgres installation's `bin` directory.
Right-click on the file and select **Properties** > **Details**.

If a library or application upgrade does not help, there are several workarounds, described below, for providing the required domain name information when connecting to Neon.

### A. Pass the endpoint ID as an option

Neon supports a connection option named `endpoint`, which you can use to identify the compute you are connecting to. Specifically, you can add `options=endpoint%3D[endpoint_id]` as a parameter to your connection string, as shown in the example below. The `%3D` is a URL-encoded `=` sign. Replace `[endpoint_id]` with your compute's ID, which you can find in your Neon connection string. It looks similar to this: `ep-cool-darkness-123456`.

```txt
postgresql://[user]:[password]@[neon_hostname]/[dbname]?options=endpoint%3D[endpoint_id]
```

**Note**: The `endpoint` connection option was previously named `project`. The `project` option is deprecated but remains supported for backward compatibility.

The `endpoint` option works if your application or library permits it to be set. Not all of them do, especially in the case of GUI applications.

### B. Use libpq key=value syntax in the database field

If your application or client is based on `libpq` but you cannot upgrade the library, such as when the library is compiled inside an application, you can take advantage of the fact that `libpq` permits adding options to the database name. So, in addition to the database name, you can specify the `endpoint` option, as shown below. Replace `[endpoint_id]` with your compute's endpoint ID, which you can find in your Neon connection string. It looks similar to this: `ep-cool-darkness-123456`.

```txt
dbname=neondb options=endpoint=[endpoint_id]
```

### C. Set verify-full for golang-based clients

If your application or service uses golang Postgres clients like `pgx` and `lib/pq`, you can set `sslmode=verify-full`, which causes SNI information to be sent when you connect. Most likely, this behavior is not intended but happens inadvertently due to golang's TLS library API design.

### D. Specify the endpoint ID in the password field

Another supported workaround involves specifying the endpoint ID in the password field. So, instead of specifying only your password, you provide a string consisting of the `endpoint` option and your password, separated by a semicolon (`;`) or dollar sign (`$`) character, as shown in the examples below. Replace `[endpoint_id]` with your compute's endpoint ID and `[password]` with your password. The endpoint ID looks similar to this: `ep-cool-darkness-123456`.

```txt
endpoint=[endpoint_id];[password]
```

or

```txt
endpoint=[endpoint_id]$[password]
```

Example:

```txt
postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

**Note**: Using a dollar sign (`$`) character as a separator may be required if a semicolon (`;`) is not a permitted character in a password field. For example, the [AWS Database Migration Service (DMS)](https://aws.amazon.com/dms/) does not permit a semicolon character in the **Password** field when defining connection details for database endpoints.

This approach causes the authentication method to be downgraded from `scram-sha-256` (never transfers a plain text password) to `password` (transfers a plain text password). However, the connection is still TLS-encrypted, so the level of security is equivalent to the security provided by `https` websites, as long as `sslmode=verify-full` or channel binding is used.
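To make workaround D concrete, here is a small JavaScript sketch that assembles such a connection string. The endpoint ID, role, and password are hypothetical values mirroring the example above:

```javascript
// SNI workaround D sketch: the password field carries the endpoint ID
// plus the real password (hypothetical values).
const endpointId = 'ep-cool-darkness-123456';
const password = 'AbC123dEf';

// Semicolon separator; use '$' instead where ';' is not permitted
const passwordField = `endpoint=${endpointId};${password}`;

const connectionString =
  `postgresql://alex:${passwordField}@${endpointId}.us-east-2.aws.neon.tech` +
  `/dbname?sslmode=require&channel_binding=require`;

console.log(connectionString);
```

Running it prints the same form of URL shown in the example above, which you can then hand to any `libpq`-based client.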
We intend to deprecate this option when most libraries and applications provide SNI support.

### Libraries

Clients on the [list of drivers](https://wiki.postgresql.org/wiki/List_of_drivers) on the PostgreSQL community wiki that use your system's `libpq` library should work if your `libpq` version is >= 14. Neon has tested the following drivers for SNI support:

| Driver | Language | SNI Support | Notes |
| :--- | :--- | :--- | :--- |
| npgsql | C# | ✓ | |
| Postgrex | Elixir | ✓ | [Requires ssl_opts with server_name_indication](https://neon.com/docs/guides/elixir-ecto#configure-ecto) |
| github.com/lib/pq | Go | ✓ | Supported with macOS Build 436, Windows Build 202, and Ubuntu 20, 21 and 22 (deprecated, use pgx instead) |
| pgx | Go | ✓ | Recommended driver for Go. SNI support available in v5.0.0-beta.3 and later |
| go-pg | Go | ✓ | Requires `verify-full` mode |
| JDBC | Java | ✓ | |
| node-postgres | JavaScript | ✓ | Requires the `ssl: {'sslmode': 'require'}` option |
| postgres.js | JavaScript | ✓ | Requires the `ssl: 'require'` option |
| asyncpg | Python | ✓ | |
| pg8000 | Python | ✓ | Requires [scramp >= v1.4.3](https://pypi.org/project/scramp/), which is included in [pg8000 v1.29.3](https://pypi.org/project/pg8000/) and higher |
| PostgresClientKit | Swift | ✗ | |
| PostgresNIO | Swift | ✓ | |
| postgresql-client | TypeScript | ✓ | |

## Password authentication failed for user

The following error is often the result of incorrectly defined connection information, or a driver that does not support Server Name Indication (SNI):

```text
ERROR: password authentication failed for user ''
connection to server at "ep-billowing-fun-123456.us-west-2.aws.neon.tech" (12.345.67.89), port 5432 failed:
ERROR: connection is insecure (try using `sslmode=require&channel_binding=require`)
```

Check your connection string to see if it is defined correctly. Your Neon connection string can be obtained by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It appears similar to this:

```text
postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

For clients or applications that require specifying connection parameters such as user, password, and hostname separately, the values in a Neon connection string correspond to the following:

- **User**: `daniel`
- **Password**: `f74wh99w398H`
- **Hostname**: `ep-white-morning-123456.us-east-2.aws.neon.tech`
- **Port number**: `5432` (Neon uses the default Postgres port, `5432`, so it is not included in the connection string)
- **Database name**: `neondb` (`neondb` is the ready-to-use database created with each Neon project. Your database name may differ.)

If you find that your connection string is defined correctly, see the instructions regarding SNI support outlined in the preceding section: [The endpoint ID is not specified](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified).

## Couldn't connect to compute node

This error arises when the Neon proxy, which accepts and handles connections from clients that use the Postgres protocol, fails to establish a connection with your compute. This issue sometimes occurs due to repeated connection attempts during the compute's restart phase after it has been idle due to [scale to zero](https://neon.com/docs/reference/glossary#scale-to-zero).
The transition from an idle to an active state only takes a few hundred milliseconds. Consider these recommended steps:

- Visit the [Neon status page](https://neonstatus.com/) to ensure there are no ongoing issues.
- Pause for a short period to allow your compute to restart, then try reconnecting.
- Try [connecting with psql](https://neon.com/docs/connect/query-with-psql-editor) to see if a connection can be established.
- Review the strategies in [Connection latency and timeouts](https://neon.com/docs/connect/connection-latency) for avoiding connection issues due to compute startup time.

If the connection issue persists, please reach out to [Support](https://neon.com/docs/introduction/support).

## Can't reach database server

This error is sometimes encountered when using Prisma Client with Neon.

```text
Error: P1001: Can't reach database server at `ep-white-thunder-826300.us-east-2.aws.neon.tech`:`5432`
Please make sure your database server is running at `ep-white-thunder-826300.us-east-2.aws.neon.tech`:`5432`.
```

A compute in Neon has two main states: **Active** and **Idle**. Active means that Postgres is currently running. If there are no active queries for 5 minutes, the activity monitor gracefully places the compute into an idle state to reduce compute usage.

When you connect to an idle compute, Neon automatically activates it. Activation typically happens within a few seconds. If the error above is reported, it most likely means that the Prisma query engine timed out before your Neon compute was activated. For dealing with this connection timeout scenario, refer to the [connection timeout](https://neon.com/docs/guides/prisma#connection-timeouts) instructions in our Prisma documentation. Our [connection latency and timeout](https://neon.com/docs/connect/connection-latency) documentation may also be useful in addressing this issue.

## Error undefined: Database error

This error is sometimes encountered when using Prisma Migrate with Neon.

```text
Error undefined: Database error
Error querying the database: db error: ERROR: prepared statement "s0" already exists
```

Prisma Migrate requires a direct connection to the database. It does not support a pooled connection with PgBouncer, which is the connection pooler used by Neon. Attempting to run Prisma Migrate commands, such as `prisma migrate dev`, with a pooled connection causes this error. To resolve this issue, please refer to our [Connection pooling with Prisma Migrate](https://neon.com/docs/guides/prisma#connect-pooling-with-prisma-migrate) instructions.

## Terminating connection due to administrator command

The `terminating connection due to administrator command` error is typically encountered when running a query from a connection that has sat idle long enough for the compute to suspend due to inactivity. Neon automatically suspends a compute after 5 minutes of inactivity, by default. You can reproduce this error by connecting to your database from an application or client such as `psql`, letting the connection remain idle until the compute suspends, and then running a query from the same connection.

If you encounter this error, you can try adjusting the timing of your query or reestablishing the connection before running the query. Alternatively, if you are on a paid plan, you can disable scale to zero. For instructions, see [Configuring scale to zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide). [Free plan](https://neon.com/docs/introduction/plans) users cannot disable scale to zero.
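If your application holds long-lived connections, you can also handle this error in code. The sketch below, using the `pg` driver for Node.js, catches Postgres error code `57P01` (`admin_shutdown`, the SQLSTATE behind this message) and reconnects once before retrying the query. The single-retry policy and helper names are illustrative assumptions, not a Neon-prescribed pattern:

```javascript
// Hedged sketch: retry a query once after the compute suspends under an
// idle connection (SQLSTATE 57P01, "terminating connection due to
// administrator command").
const { Client } = require('pg');

let client;

async function connect() {
  client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
}

async function runQuery(text) {
  try {
    return await client.query(text);
  } catch (err) {
    if (err.code === '57P01') {
      await connect(); // reconnecting wakes the suspended compute
      return client.query(text);
    }
    throw err;
  }
}
```

Call `connect()` once at startup, then route statements through `runQuery()`; any error other than `57P01` is rethrown unchanged.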
## Unsupported startup parameter

This error is reported in two variations:

```text
unsupported startup parameter: <...>
```

```text
unsupported startup parameter in options: <...>
```

The error occurs when using a pooled Neon connection string with startup options that are not supported by PgBouncer. PgBouncer allows only startup parameters it can keep track of in startup packets. These include: `client_encoding`, `datestyle`, `timezone`, `standard_conforming_strings`, and `application_name`. See **track_extra_parameters** in the [PgBouncer documentation](https://www.pgbouncer.org/config.html#track_extra_parameters). To resolve this error, you can either remove the unsupported parameter from your connection string or use an unpooled Neon connection string. For information about pooled and unpooled connections in Neon, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

## You have exceeded the limit of concurrently active endpoints

This error can also appear as: `active endpoints limit exceeded`. Neon has a default limit of 20 concurrently active computes to prevent resource exhaustion. The compute associated with the default branch is exempt from the [concurrently active compute limit](https://neon.com/docs/reference/glossary#concurrently-active-compute-limit), ensuring that it is always available. When you exceed the limit, computes beyond it remain suspended, and you will see this error when attempting to connect to them. You can suspend other active computes and try again. Alternatively, if you encounter this error often, you can reach out to [Support](https://neon.com/docs/introduction/support) to request a `max_active_endpoints` limit increase.

## Remaining connection slots are reserved for roles with the SUPERUSER attribute

This error occurs when the maximum number of simultaneous database connections, defined by the Postgres `max_connections` setting, is reached. To resolve this issue, you have several options:

- Find and remove long-running or idle connections. See [Find long-running or idle connections](https://neon.com/docs/postgresql/query-reference#find-long-running-or-idle-connections).
- Use a larger compute, with a higher `max_connections` configuration. See [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).
- Enable [connection pooling](https://neon.com/docs/connect/connection-pooling). If you are already using connection pooling, you may need to reach out to Neon Support to request a higher `default_pool_size` setting for PgBouncer. See [Neon PgBouncer configuration settings](https://neon.com/docs/connect/connection-pooling#neon-pgbouncer-configuration-settings) for more information.

## Relation not found

This error is often encountered when attempting to set the Postgres `search_path` session variable using a `SET search_path` statement over a pooled connection. For more information and workarounds, please see [Connection pooling in transaction mode](https://neon.com/docs/connect/connection-pooling#connection-pooling-in-transaction-mode).

## Postgrex: DBConnection ConnectionError ssl send: closed

Postgrex has an `:idle_interval` connection parameter that defines an interval for pinging connections after a period of inactivity. The default setting is `1000ms`.
If you rely on Neon's [autosuspend](https://neon.com/docs/introduction/auto-suspend) feature to scale your compute to zero when your database is not active, this setting will prevent suspension, and you may encounter a `(DBConnection.ConnectionError) ssl send: closed (ecto_sql 3.12.0)` error as a result. As a workaround, you can set the interval to a higher value to allow your Neon compute to suspend. For example:

```elixir
config :app_name, AppName.Repo,
  # normal connection options ...
  idle_interval: :timer.hours(24)
```

For additional details, refer to this discussion on our Discord server: [Compute not suspended due to Postgrex idle_interval setting](https://discord.com/channels/1176467419317940276/1295401751574351923/1295419826319265903)

## query_wait_timeout SSL connection has been closed unexpectedly

The `query_wait_timeout` setting is a PgBouncer configuration option that determines the maximum time a query can wait in the queue before being executed. Neon's default value for this setting is **120 seconds**. If a query exceeds this timeout while in the queue, it will not be executed. For more details about this setting, refer to [Neon PgBouncer configuration settings](https://neon.com/docs/connect/connection-pooling#neon-pgbouncer-configuration-settings).

To avoid this error, we recommend reviewing your workload. If it includes batch processing with `UPDATE` or `INSERT` statements, review their performance. Slow queries may be the root cause. Try optimizing these queries to reduce execution time, which can help prevent them from exceeding the timeout. Alternatively, Neon can increase the `query_wait_timeout` value for you, but this is not typically recommended, as increasing the timeout can lead to higher latency or blocked queries under heavy workloads.

## The request could not be authorized due to an internal error

This error page in the Neon Console is most often the result of attempting to access a Neon project in one browser window after you've logged in under a different Neon user account from another browser window. The error occurs because the currently logged in Neon user account does not have access to the Neon project. To avoid this issue, ensure that you're logged in with a Neon user account that has access to the Neon project you're trying to access.

## Terminating connection due to idle-in-transaction timeout

This error occurs when a session remains idle within an open transaction for longer than the specified timeout period. By default, the `idle_in_transaction_session_timeout` setting is set to `5min` (300,000 milliseconds). This timeout helps prevent idle sessions from holding locks or contributing to table bloat. If you encounter this error, you can adjust the `idle_in_transaction_session_timeout` setting to a higher value or disable it entirely by setting it to `0`. Below are ways to change this setting:

1. Change at the session level: `SET idle_in_transaction_session_timeout = 0;`
2. Change at the database level: `ALTER DATABASE <database_name> SET idle_in_transaction_session_timeout = 0;` (replace `<database_name>` with the name of your database)
3. Change at the role level: `ALTER ROLE <role_name> SET idle_in_transaction_session_timeout = 0;` (replace `<role_name>` with the name of the user role)

Be aware that leaving transactions idle for extended periods can prevent vacuuming and increase the number of open connections. Please use caution and consider only changing the value temporarily, as needed.

## DNS resolution issues

Some users encounter DNS resolution failures when connecting to their Neon database.
These issues are often reported when using the **Tables** page in the Neon Console. In such cases, users may see an **Unexpected error happened** message.

To check for a DNS resolution issue, you can run `nslookup` on your Neon hostname, which is the part of your Neon database [connection string](https://neon.com/docs/reference/glossary#connection-string) starting with your endpoint ID (e.g., `ep-cool-darkness-a1b2c3d4`) and ending with `neon.tech`. For example:

```bash
nslookup ep-cool-darkness-a1b2c3d4.ap-southeast-1.aws.neon.tech
```

If the Neon hostname resolves correctly, you'll see output similar to this:

```bash
nslookup ep-cool-darkness-a1b2c3d4.ap-southeast-1.aws.neon.tech
Server:         192.168.2.1
Address:        192.168.2.1#53

Non-authoritative answer:
ep-cool-darkness-a1b2c3d4.ap-southeast-1.aws.neon.tech  canonical name = ap-southeast-1.aws.neon.tech.
Name:   ap-southeast-1.aws.neon.tech
Address: 203.0.113.10
Name:   ap-southeast-1.aws.neon.tech
Address: 203.0.113.20
Name:   ap-southeast-1.aws.neon.tech
Address: 203.0.113.30
```

If the hostname does not resolve, you might see an error like this, where the DNS query is refused:

```bash
** server can't find ep-cool-darkness-a1b2c3d4.ap-southeast-1.aws.neon.tech: REFUSED
```

To verify that it's a DNS resolution issue, run the following test using a public DNS resolver, such as Google DNS:

```bash
nslookup ep-cool-darkness-a1b2c3d4.ap-southeast-1.aws.neon.tech 8.8.8.8
```

If this succeeds, it's very likely a DNS resolution issue.

**Cause**

Failure to resolve the Neon hostname can happen for different reasons:

- Regional DNS caching or propagation delays
- Restrictive or misconfigured DNS resolvers (such as those provided by your ISP)
- System-wide web proxy settings that interfere with DNS resolution

**Workarounds**

1. **Use a public DNS resolver**

   - Google DNS: 8.8.8.8, 8.8.4.4
   - Cloudflare DNS: 1.1.1.1, 1.0.0.1

   These can be changed at:

   - OS level (macOS, Windows, Linux)
   - Router level
   - Mobile device network settings
   - Android Private DNS (configure a trusted provider such as `dns.google` or `1dot1dot1dot1.cloudflare-dns.com`)

   To change your DNS configuration at the OS level:

   - **macOS**: System Settings → Network → Wi-Fi → Details → DNS
   - **Windows**: Control Panel → Network and Internet → Network Connections → Right-click your connection → Properties → Internet Protocol Version 4 (TCP/IPv4)
   - **Linux**: Edit `/etc/resolv.conf` or configure your network manager (e.g., NetworkManager, Netplan)

   This article provides detailed instructions: [How to Turn on Private DNS Mode](https://news.trendmicro.com/2023/03/21/how-to-turn-on-private-dns-mode/)

2. **Disable system-wide web proxies**

   If you're using a proxy configured at the OS level, it may interfere with DNS lookups. To check and disable system proxy settings:

   - **macOS**: System Settings → Network → Wi-Fi → Details → Proxies. Uncheck any active proxy options (e.g., "Web Proxy (HTTP)", "Secure Web Proxy (HTTPS)")
   - **Windows**: Settings → Network & Internet → Proxy. Turn off "Use a proxy server" if it's enabled
   - **Linux**: Check your environment variables (e.g., `http_proxy`, `https_proxy`) and system settings under Network/Proxy.

3. **Use a VPN**

   Using a VPN routes DNS queries through a different resolver and often bypasses the issue entirely.
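You can also run the same resolver comparison programmatically. This Node.js sketch uses the built-in `node:dns` module; the hostname is a placeholder that you would replace with your own Neon hostname:

```javascript
// Compare the system resolver with a public resolver (placeholder hostname).
const { promises: dns } = require('node:dns');

const host = 'ep-cool-darkness-a1b2c3d4.ap-southeast-1.aws.neon.tech';

async function checkDns() {
  try {
    console.log('System resolver:', await dns.resolve4(host));
  } catch (err) {
    console.error('System resolver failed:', err.code);
    // Retry against Google public DNS to see if the failure is resolver-specific
    const resolver = new dns.Resolver();
    resolver.setServers(['8.8.8.8']);
    console.log('Public resolver:', await resolver.resolve4(host));
  }
}

checkDns().catch(console.error);
```

If the system resolver fails but the public resolver succeeds, that points to the resolver-level causes described above.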
---

# Source: https://neon.com/llms/connect-connection-latency.txt

# Connection latency and timeouts

> The document outlines the factors affecting connection latency and timeouts in Neon, detailing how to measure and optimize these parameters for improved database performance.

## Source

- [Connection latency and timeouts HTML](https://neon.com/docs/connect/connection-latency): The original HTML version of this documentation

Neon's _Scale to zero_ feature is designed to minimize costs by automatically scaling a compute resource down to zero after a period of inactivity. By default, Neon scales a compute to zero after 5 minutes of inactivity. A characteristic of this feature is the concept of a "cold start". During this process, a compute transitions from an idle state to an active state to process requests.

Currently, activating a Neon compute from an idle state typically takes a few hundred milliseconds, not counting other factors that can add latency, such as the physical distance between your application and database or the startup times of other services that participate in your connection process.

**Note**: Services you integrate with Neon may also have startup times, which can add to connection latencies. This topic does not address latencies of other vendors, but if your application connects to Neon via another service, remember to consider startup times for those services as well.

## Check the status of a compute

You can check the current status of a compute on the **Branches** page in the Neon Console. A compute will report either an **Active** or **Idle** status. You can also view compute state transitions in the **Branches** widget on the Neon **Dashboard**.

User actions that activate an idle compute include connecting from a client or application, running a query on your database from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), or accessing the compute via the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

**Info**: The Neon API includes [Start endpoint](https://api-docs.neon.tech/reference/startprojectendpoint) and [Suspend endpoint](https://api-docs.neon.tech/reference/suspendprojectendpoint) APIs for the specific purpose of activating and suspending a compute.

You can try any of these methods and watch the status of your compute as it changes from an **Idle** to an **Active** state. By default, a compute is suspended after 300 seconds (5 minutes) of inactivity. Users on the Neon [Scale plan](https://neon.com/docs/introduction/plans) can configure this delay period, which is described later in this topic.

## Strategies for managing latency and timeouts

Given the potential impact on application responsiveness, it's important to have strategies in place to manage connection latencies and timeouts.
Here are some methods you can implement:

- [Adjust your scale to zero configuration](https://neon.com/docs/connect/connection-latency#adjust-your-scale-to-zero-configuration)
- [Place your application and database in the same region](https://neon.com/docs/connect/connection-latency#place-your-application-and-database-in-the-same-region)
- [Increase your connection timeout](https://neon.com/docs/connect/connection-latency#increase-your-connection-timeout)
- [Build connection timeout handling into your application](https://neon.com/docs/connect/connection-latency#build-connection-timeout-handling-into-your-application)
- [Use application-level caching](https://neon.com/docs/connect/connection-latency#use-application-level-caching)

### Adjust your scale to zero configuration

Users on paid plans can configure the length of time that the system remains in an inactive state before Neon scales your compute down to zero. This lets you set the balance between performance (never scaling down) and cost (scaling to zero at reasonable intervals). The scale to zero setting is set to 5 minutes by default. You can set a custom period up to a maximum of 7 days, or disable scale to zero entirely. To disable scale to zero, see [Edit a compute](https://neon.com/docs/manage/endpoints#edit-a-compute).

**Important**: If you disable scale to zero entirely or your compute is never idle long enough to be automatically suspended, you will have to manually restart your compute to pick up the latest updates to Neon's compute images. Neon typically releases compute-related updates weekly. Not all releases contain critical updates, but a weekly compute restart is recommended to ensure that you do not miss anything important. For how to restart a compute, see [Restart a compute](https://neon.com/docs/manage/endpoints#restart-a-compute).

To configure a custom scale to zero setting, modify `suspend_timeout_seconds` using the [Update compute endpoint](https://api-docs.neon.tech/reference/updateprojectendpoint) API, as shown below. To use this API, you need to specify your project ID and compute endpoint ID. You can find your project ID in your project's settings. You can find the compute endpoint ID on your branch page.

```bash
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/{project_id}/endpoints/{endpoint_id} \
  --header 'accept: application/json' \
  --header "authorization: Bearer $NEON_API_KEY" \
  --header 'content-type: application/json' \
  --data '{
    "endpoint": {
      "suspend_timeout_seconds": 300
    }
  }'
```

Consider combining this strategy with Neon's _Autoscaling_ feature, which allows you to run a compute with minimal resources and scale up on demand. For example, you might set scale to zero to 1 hour so that your compute only suspends after an hour of inactivity, and configure autoscaling with a minimum compute size that keeps costs low during periods of light usage. For autoscaling configuration instructions, see [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration).

### Place your application and database in the same region

A key strategy for reducing connection latency is ensuring that your application and database are hosted in the same region, or as close as possible, geographically. For the regions supported by Neon, see [Regions](https://neon.com/docs/introduction/regions).
For information about moving your database to a different region, see [Import data from another Neon project](https://neon.com/docs/import/migrate-from-neon).

### Increase your connection timeout

By configuring longer connection timeout durations, your application has more time to accommodate cold starts and other factors that contribute to latency. Connection timeout settings are typically configured in your application or the database client library you're using, and the specific way to do it depends on the language or framework you're using. Here are examples of how to increase connection timeout settings in a few common programming languages and frameworks:

Tab: Node.js

```javascript
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  connectionTimeoutMillis: 10000, // connection timeout in milliseconds
  idleTimeoutMillis: 10000, // idle timeout in milliseconds
});
```

Tab: Python

```python
import os

import psycopg2

DATABASE_URL = os.environ['DATABASE_URL']

conn = psycopg2.connect(DATABASE_URL, connect_timeout=10)
```

Tab: Java

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

String dbUrl = System.getenv("DATABASE_URL");
Properties properties = new Properties();
properties.setProperty("connectTimeout", "10");

Connection conn = DriverManager.getConnection(dbUrl, properties);
```

Tab: Prisma

```prisma
DATABASE_URL=postgresql://[user]:[password]@[neon_hostname]/[dbname]?connect_timeout=15&pool_timeout=15
```

**Note**: If you are using Prisma Client, your timeout issue could be related to Prisma's connection pool configuration. The Prisma Client query engine instantiates its own connection pool when it opens a first connection to the database. If you encounter a `Timed out fetching a new connection from the connection pool` error, refer to [Prisma connection pool timeouts](https://neon.com/docs/guides/prisma#connection-pool-timeouts) for information about configuring your Prisma connection pool size and pool timeout settings.

Remember that increasing connection timeout settings might impact the responsiveness of your application, and users could end up waiting longer for their requests to be processed. Always test and monitor your application's performance when making changes like these.

### Build connection timeout handling into your application

You can prepare your application to handle connection timeouts when latency is unavoidable. This might involve using retries with exponential backoff. This JavaScript example connects to the database using the `pg` library and uses the `node-retry` library to handle connection retries with an exponential backoff. The general logic can be easily translated into other languages.
```javascript
require('dotenv').config();
var Client = require('pg').Client;
var retry = require('retry');

// Connection string from .env file
var connectionString = process.env.DATABASE_URL;

function connectWithRetry() {
  var operation = retry.operation({
    retries: 5, // number of retries before giving up
    minTimeout: 4000, // minimum time between retries in milliseconds
    randomize: true, // adds randomness to timeouts to prevent retries from overwhelming the server
  });

  operation.attempt(function (currentAttempt) {
    var client = new Client({ connectionString });
    client
      .connect()
      .then(function () {
        console.log('Connected to the database');
        // Perform your operations with the client
        // For example, let's run a simple SELECT query
        return client.query('SELECT NOW()');
      })
      .then(function (res) {
        console.log(res.rows[0]);
        return client.end();
      })
      .catch(function (err) {
        if (operation.retry(err)) {
          console.warn(`Failed to connect on attempt ${currentAttempt}, retrying...`);
        } else {
          console.error('Failed to connect to the database after multiple attempts:', err);
        }
      });
  });
}

// Usage
connectWithRetry();
```

In the example above, the `operation.attempt` function initiates the connection logic. If the connection fails (i.e., `client.connect()` returns a rejected Promise), the error is passed to `operation.retry(err)`. If there are retries left, the retry function schedules another attempt with a delay based on the parameters defined in the `retry.operation`.

The delay between retries is controlled by the `minTimeout` and `randomize` options. The `randomize` option adds a degree of randomness to the delay to prevent a large number of retries from potentially overwhelming the server. The `minTimeout` option defines the minimum time between retries in milliseconds.

However, this example is a simplification. In a production application, you might want to use a more sophisticated strategy. For example, you could initially attempt to reconnect quickly in the event of a transient network issue, then fall back to slower retries if the problem persists.

#### Connection retry references

- [SQLAlchemy: Dealing with disconnects](https://arc.net/l/quote/nojcaewr)
- [FastAPI blog post: Recycling connections for Neon's scale to zero](https://neon.com/blog/deploy-a-serverless-fastapi-app-with-neon-postgres-and-aws-app-runner-at-any-scale)

### Use application-level caching

Implement a caching system like [Redis](https://redis.io/) to store frequently accessed data, which can be rapidly served to users. This approach can help reduce occurrences of latency, but only if the data requested is available in the cache. Challenges with this strategy include cache invalidation due to frequently changing data, and cache misses when queries request uncached data. This strategy will not avoid latency entirely, but you may be able to combine it with other strategies to improve application responsiveness overall.

### Optimizing connection latency with sslnegotiation

Starting with PostgreSQL 17, you can use the `sslnegotiation` connection parameter to control how SSL negotiation is handled when establishing a connection. The `sslnegotiation=direct` option reduces connection latency by skipping unnecessary negotiation steps. Neon has implemented support for `sslnegotiation=direct` in our proxy layer, allowing you to benefit from faster connection times even if your database runs on an older PostgreSQL version. You just need a PostgreSQL 17 client to use this feature.
Here's a comparison of connection times with and without the `sslnegotiation=direct` parameter: **Without sslnegotiation=direct:** ```bash $ time psql "postgresql://neondb_owner@your-neon-endpoint/neondb?sslmode=require&channel_binding=require" -c "SELECT version();" version --------------------------------------------------------------------------------------------------------- PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit (1 row) real 0m0.872s user 0m0.019s sys 0m0.000s ``` **With sslnegotiation=direct:** ```bash $ time psql "postgresql://neondb_owner@your-neon-endpoint/neondb?sslmode=require&channel_binding=require&sslnegotiation=direct" -c "SELECT version();" version --------------------------------------------------------------------------------------------------------- PostgreSQL 17.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit (1 row) real 0m0.753s user 0m0.016s sys 0m0.005s ``` As shown in the example above, using `sslnegotiation=direct` reduces the connection time by skipping the initial SSL negotiation step. To use this optimization, simply append `sslnegotiation=direct` to your connection string: ```text postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=verify-full&sslnegotiation=direct ``` ## Latency benchmarking See [Benchmarking latency in Neon's serverless Postgres](https://neon.com/docs/guides/benchmarking-latency) to learn how to measure and optimize query latency in your Neon database. ## Conclusion With the right strategies, you can optimize your system to handle connection latencies and timeouts, ensuring your application delivers a consistently high level of performance. The best solution often involves a combination of strategies, so experiment and find the right configuration for your specific use case. ## Related resources - [Neon latency benchmarks dashboard](https://neon.com/demos/regional-latency) - Interactive dashboard showing real-world latency measurements across different regions and workloads ([source code](https://github.com/neondatabase-labs/latency-benchmarks)) - [Connection pooling guide](https://neon.com/docs/connect/connection-pooling) - Reduce latency with efficient connection management - [Regional deployment options](https://neon.com/docs/introduction/regions) - Choose the optimal region for lowest latency - [Ship faster with Postgres](https://neon.tech/faster) - Explore examples and case studies demonstrating rapid development workflows --- # Source: https://neon.com/llms/connect-connection-pooling.txt # Connection pooling > The document explains connection pooling in Neon, detailing how it manages database connections efficiently to optimize resource usage and improve performance. ## Source - [Connection pooling HTML](https://neon.com/docs/connect/connection-pooling): The original HTML version of this documentation Neon uses [PgBouncer](https://www.pgbouncer.org/) to support connection pooling, enabling up to 10,000 concurrent connections. PgBouncer is a lightweight connection pooler for Postgres. ## How to use connection pooling To use connection pooling with Neon, use a pooled connection string instead of a regular connection string. 
A pooled connection string adds the `-pooler` option to your endpoint ID, as shown below:

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

The **Connect to your database** modal, which you can access by clicking the **Connect** button on your **Project Dashboard**, provides a **Connection pooling** toggle that adds the `-pooler` option to a connection string for you. You can copy a pooled connection string from the **Dashboard** or manually add the `-pooler` option to the endpoint ID in an existing connection string.

**Info**: The `-pooler` option routes the connection to a connection pooling port at the Neon Proxy.

## Connection limits without connection pooling

Each Postgres connection creates a new process in the operating system, which consumes resources. Postgres limits the number of open connections for this reason. The Postgres connection limit is defined by the Postgres `max_connections` parameter. In Neon, `max_connections` is set according to your compute size or autoscaling configuration; you can find the formula here: [Parameter settings that differ by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size).

| Compute size | vCPU | RAM    | max_connections |
| :----------- | :--- | :----- | :-------------- |
| 0.25         | 0.25 | 1 GB   | 112             |
| 0.50         | 0.50 | 2 GB   | 225             |
| 1            | 1    | 4 GB   | 450             |
| 2            | 2    | 8 GB   | 901             |
| 3            | 3    | 12 GB  | 1351            |
| 4            | 4    | 16 GB  | 1802            |
| 5            | 5    | 20 GB  | 2253            |
| 6            | 6    | 24 GB  | 2703            |
| 7            | 7    | 28 GB  | 3154            |
| 8            | 8    | 32 GB  | 3604            |
| 9            | 9    | 36 GB  | 4000            |
| 10           | 10   | 40 GB  | 4000            |
| 11           | 11   | 44 GB  | 4000            |
| 12           | 12   | 48 GB  | 4000            |
| 13           | 13   | 52 GB  | 4000            |
| 14           | 14   | 56 GB  | 4000            |
| 15           | 15   | 60 GB  | 4000            |
| 16           | 16   | 64 GB  | 4000            |
| 18           | 18   | 72 GB  | 4000            |
| 20           | 20   | 80 GB  | 4000            |
| 22           | 22   | 88 GB  | 4000            |
| 24           | 24   | 96 GB  | 4000            |
| 26           | 26   | 104 GB | 4000            |
| 28           | 28   | 112 GB | 4000            |
| 30           | 30   | 120 GB | 4000            |
| 32           | 32   | 128 GB | 4000            |
| 34           | 34   | 136 GB | 4000            |
| 36           | 36   | 144 GB | 4000            |
| 38           | 38   | 152 GB | 4000            |
| 40           | 40   | 160 GB | 4000            |
| 42           | 42   | 168 GB | 4000            |
| 44           | 44   | 176 GB | 4000            |
| 46           | 46   | 184 GB | 4000            |
| 48           | 48   | 192 GB | 4000            |
| 50           | 50   | 200 GB | 4000            |
| 52           | 52   | 208 GB | 4000            |
| 54           | 54   | 216 GB | 4000            |
| 56           | 56   | 224 GB | 4000            |

You can check the `max_connections` limit for your compute by running the following query from the Neon SQL Editor or a client connected to Neon:

```sql
SHOW max_connections;
```

**Note**: Seven connections are reserved for the Neon-managed Postgres `superuser` account. For example, for a 0.25 compute size, 7/112 connections are reserved, so you would only have 105 available connections. If you are running queries from the Neon SQL Editor, that will also use a connection.

To view connections that are currently open, you can run the following query, replacing `[dbname]` with your database name:

```sql
SELECT usename FROM pg_stat_activity WHERE datname = '[dbname]';
```

Even with the largest compute size, the `max_connections` limit may not be sufficient for some applications, such as those that use serverless functions. To increase the number of connections that Neon supports, you can use _connection pooling_. All Neon plans, including the [Free plan](https://neon.com/docs/introduction/plans), support connection pooling.

## Connection pooling

Some applications open numerous connections, with most eventually becoming inactive.
This behavior can often be attributed to database driver limitations, running many instances of an application, or applications with serverless functions. With regular Postgres, new connections are rejected when reaching the `max_connections` limit. To overcome this limitation, Neon supports connection pooling using [PgBouncer](https://www.pgbouncer.org/), which allows Neon to support up to 10,000 concurrent connections.

Connection pooling, however, is not a magic bullet: as the name implies, client connections share a pool that sits in front of a limited number of direct connections to Postgres. To ensure that direct access to Postgres is still possible for administrative tasks, the pooler only opens a certain number of direct Postgres connections for each user to each database. This number is determined by the PgBouncer [`default_pool_size`](https://neon.com/docs/connect/connection-pooling#neon-pgbouncer-configuration-settings) setting, which is in turn derived from your compute's `max_connections` setting.

For example, if `default_pool_size` is _100_, there can be only _100_ active connections from role `alex` to any particular database through the pooler. All other connections by `alex` to that database will have to wait for one of those _100_ active connections to complete their transactions before the next connection's work is started. At the same time, role `dana` can also connect to the same database through the pooler and have up to _100_ concurrently active transactions across the same number of connections. Similarly, even if role `alex` has _100_ concurrently active transactions through the pooler to one database, that role can still start up to _100_ concurrent transactions to a different database when connected through the pooler. The `max_connections` setting still applies to direct Postgres connections.

**Important**: You will not be able to get interactive results from all 10,000 connections at the same time. Connections to the pooler endpoint still consume connections on the main Postgres endpoint: PgBouncer forwards operations from a role's connections through its own pool of connections to Postgres, and adaptively adds more connections to Postgres as needed by other concurrently active role connections. The 10,000 connection limit is therefore most useful for "serverless" applications and application-side connection pools that have many open connections but infrequent and short [transactions](https://neon.com/docs/postgresql/query-reference#transactions).

## PgBouncer

PgBouncer is an open-source connection pooler for Postgres. When an application needs to connect to a database, PgBouncer provides a connection from the pool. Connections in the pool are routed to a smaller number of actual Postgres connections. When a connection is no longer required, it is returned to the pool and is available to be used again. Maintaining a pool of available connections improves performance by reducing the number of connections that need to be created and torn down to service incoming requests. Connection pooling also helps avoid rejected connections. When all connections in the pool are being used, PgBouncer queues a new request until a connection from the pool becomes available.

## Neon PgBouncer configuration settings

Neon's PgBouncer configuration is shown below.
The settings are not user-configurable, but if you are a Neon [Scale plan](https://neon.com/docs/introduction/plans) user and require a different setting, please contact [Neon Support](https://neon.com/docs/introduction/support). For example, Neon sometimes raises the `default_pool_size` setting for users who need to support a large number of concurrent connections and repeatedly hit PgBouncer's pool size limit.

```ini
[pgbouncer]
pool_mode=transaction
max_client_conn=10000
default_pool_size=0.9 * max_connections
max_prepared_statements=1000
query_wait_timeout=120
```

where `max_connections` is a Postgres setting.

The following list describes each setting. For a full explanation of each parameter, please refer to the official [PgBouncer documentation](https://www.pgbouncer.org/config.html).

- `pool_mode=transaction`: The pooling mode PgBouncer uses, set to `transaction` pooling.
- `max_client_conn=10000`: Maximum number of client connections allowed.
- `default_pool_size`: Default number of server connections to allow per user/database pair. The formula is 0.9 \* `max_connections`. For `max_connections` details, see [Parameter settings](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size).
- `max_prepared_statements=1000`: Maximum number of prepared statements a connection is allowed to have at the same time. `0` means prepared statements are disabled.
- `query_wait_timeout=120`: Maximum time queries are allowed to spend waiting for execution. Neon uses the default setting of `120` seconds.

## Connection pooling in transaction mode

As mentioned above, Neon uses PgBouncer in _transaction mode_ (`pool_mode=transaction`), which limits some functionality in Postgres. Functionality **NOT supported** in transaction mode includes:

- `SET`/`RESET`
- `LISTEN`
- `WITH HOLD CURSOR`
- `PREPARE / DEALLOCATE`
- `PRESERVE` / `DELETE ROWS` temp tables
- `LOAD` statement
- Session-level advisory locks

These session-level features are not supported in _transaction mode_ because:

1. In this mode, database connections are allocated from the pool on a per-transaction basis
2. Session states are not persisted across transactions

**Warning** Avoid using SET statements over a pooled connection: Due to the transaction mode limitation described above, users often encounter issues when running `SET` statements over a pooled connection. For example, if you set the Postgres `search_path` session variable using a `SET search_path` statement over a pooled connection, the setting is only valid for the duration of the transaction. As a result, a session variable like `search_path` will not remain set for subsequent transactions. This particular `search_path` issue often shows up as a `relation does not exist` error. To avoid this error, you can:

- Use a direct connection string when you need to set the search path and have it persist across multiple transactions.
- Explicitly specify the schema in your queries so that you don't need to set the search path.
- Run an `ALTER ROLE your_role_name SET search_path TO schema_1, schema_2, public;` command (substituting your own role and schema names) to set a persistent search path for the role executing queries. See [ALTER ROLE](https://www.postgresql.org/docs/current/sql-alterrole.html).

Similar issues can occur when attempting to use `pg_dump` over a pooled connection. A `pg_dump` operation typically executes several `SET` statements during data ingestion, and these settings will not persist over a pooled connection. For these reasons, we recommend using `pg_dump` only over a direct connection.
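
To see this behavior for yourself, here's a minimal sketch using the `pg` library (assuming a pooled `-pooler` Neon connection string in `DATABASE_URL` and a hypothetical schema named `my_schema`):

```javascript
const { Pool } = require('pg');

// Assumes DATABASE_URL is a pooled (-pooler) Neon connection string.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function main() {
  // Outside an explicit transaction, each statement is its own transaction,
  // so PgBouncer may serve each one from a different server connection.
  await pool.query('SET search_path TO my_schema');

  // This statement can land on a server connection where search_path was
  // never set, which is how "relation does not exist" errors appear.
  const { rows } = await pool.query('SHOW search_path');
  console.log(rows[0]); // often still the default: "$user", public

  await pool.end();
}

main().catch(console.error);
```

Even when both statements travel over the same client connection, PgBouncer is free to serve each single-statement transaction from a different server connection, so the `SET` can appear to have no effect.
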
For the official list of limitations, refer to the "_SQL feature map for pooling modes_" section in the [pgbouncer.org Features](https://www.pgbouncer.org/features.html) documentation.

## When to use direct connections

While connection pooling is beneficial for most applications, certain operations require a direct (non-pooled) connection to Postgres:

### Schema migrations

We recommend using a direct connection string when performing migrations with Object Relational Mappers (ORMs) and similar schema migration tools. With the exception of recent versions of [Prisma ORM, which support using a pooled connection string with Neon](https://neon.com/docs/guides/prisma#using-a-pooled-connection-with-prisma-migrate), using a pooled connection string for migrations is likely to be unsupported or error-prone. Before attempting to perform migrations over a pooled connection string, please refer to your tool's documentation to determine if pooled connections are supported.

### Logical replication

Logical replication typically requires a persistent connection and is not compatible with connection poolers like PgBouncer. When configuring logical replication subscribers (such as Fivetran, Airbyte, or other CDC tools), always use a direct connection string. Make sure your connection string does not include `-pooler` in the hostname. For more information, see [Logical replication](https://neon.com/docs/guides/logical-replication-neon).

## Optimize queries with PgBouncer and prepared statements

Protocol-level prepared statements are supported with Neon and PgBouncer as of the [PgBouncer 1.21.0 release](https://github.com/pgbouncer/pgbouncer/releases/tag/pgbouncer_1_21_0). Using prepared statements can help boost query performance while providing an added layer of protection against potential SQL injection attacks.

### Understanding prepared statements

A prepared statement in Postgres allows for the optimization of an SQL query by defining its structure once and executing it multiple times with varied parameters. Here's an SQL-level example to illustrate. Note that direct SQL-level `PREPARE` and `EXECUTE` are not supported with PgBouncer (see [below](https://neon.com/docs/connect/connection-pooling#use-prepared-statements-with-pgbouncer)), so you can't use this query from the SQL Editor. It is meant to give you a clear idea of how a prepared statement works. Refer to the protocol-level samples below to see how this SQL-level example translates to different protocol-level examples.

```sql
PREPARE fetch_plan (TEXT) AS
SELECT *
FROM users
WHERE username = $1;

EXECUTE fetch_plan('alice');
```

`fetch_plan` here is the prepared statement's name, and `$1` acts as a parameter placeholder.

The benefits of using prepared statements include:

- **Performance**: Parsing the SQL and creating the execution plan happens just once, speeding up subsequent executions. This performance benefit is most noticeable on databases with heavy and repeated traffic.
- **Security**: By sending data values separately from the query, prepared statements reduce the risk of SQL injection attacks.

You can learn more about prepared statements in the PostgreSQL documentation. See [PREPARE](https://www.postgresql.org/docs/current/sql-prepare.html).

### Use prepared statements with PgBouncer

Since PgBouncer supports protocol-level prepared statements only, you must rely on PostgreSQL client libraries instead (direct SQL-level `PREPARE` and `EXECUTE` are not supported).
Fortunately, most PostgreSQL client libraries support prepared statements. Here are a couple of examples showing how to use prepared statements with JavaScript and Python client libraries:

Tab: pg

```javascript
// Assumes an existing connected `client` created with the pg library
const query = {
  // give the query a unique name
  name: 'fetch-plan',
  text: 'SELECT * FROM users WHERE username = $1',
  values: ['alice'],
};
client.query(query);
```

Tab: psycopg

```python
# Requires psycopg 3, where prepare=True forces a protocol-level prepared statement
cur = conn.cursor()
query = "SELECT * FROM users WHERE username = %s;"
cur.execute(query, ('alice',), prepare=True)
results = cur.fetchall()
```

---

# Source: https://neon.com/llms/connect-passwordless-connect.txt

# Passwordless auth

> The "Passwordless auth" documentation details the process for Neon users to connect to databases using passwordless authentication methods, enhancing security by utilizing alternative authentication mechanisms such as OAuth or other token-based systems.

## Source

- [Passwordless auth HTML](https://neon.com/docs/connect/passwordless-connect): The original HTML version of this documentation

Neon's `psql` passwordless auth feature helps you quickly authenticate a connection to Neon without providing a password.

The following instructions require a working installation of [psql](https://www.postgresql.org/download/), an interactive terminal for working with Postgres. For information about `psql`, refer to the [psql reference](https://www.postgresql.org/docs/15/app-psql.html), in the _PostgreSQL Documentation_.

To connect using Neon's `psql` passwordless auth feature:

1. In your terminal, run the following command:

   ```bash
   psql -h pg.neon.tech
   ```

   A response similar to the following is displayed:

   ```bash
   NOTICE: Welcome to Neon!
   Authenticate by visiting (will expire in 2m):
   https://console.neon.tech/psql_session/cd6aebdc9fda9928
   ```

2. In your browser, navigate to the provided link. Log in to Neon if you are not already logged in. You are asked to select a Neon account and project (if you have multiple). If your project has more than one compute, you are also asked to select one. After confirming your selections, you can return to your terminal or command window, where information similar to the following is displayed:

   ```bash
   NOTICE: Connecting to database.
   psql (17.2)
   SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off, ALPN: postgresql)
   Type "help" for help.

   casey=>
   ```

The passwordless auth feature connects to the first database created in the branch. To check the database you are connected to, issue this query:

```sql
SELECT current_database();

 current_database
------------------
 neondb
```

Switching databases from the `psql` prompt (using `\c dbname`, for example) after you have authenticated restarts the passwordless authentication process to authenticate a connection to the new database.

## Running queries

After establishing a connection, try running the following queries to validate your database connection:

```sql
CREATE TABLE my_table AS SELECT now();
SELECT * FROM my_table;
```

The following result set is returned:

```sql
SELECT 1
              now
-------------------------------
 2022-09-11 23:12:15.083565+00
(1 row)
```

---

# Source: https://neon.com/llms/connect-query-with-psql-editor.txt

# Connect with psql

> The document details how to connect to a Neon database using the psql command-line tool, including steps for installation, configuration, and execution of SQL queries.
## Source

- [Connect with psql HTML](https://neon.com/docs/connect/query-with-psql-editor): The original HTML version of this documentation

The following instructions require a working installation of [psql](https://www.postgresql.org/download/). The `psql` client is the native command-line client for Postgres. It provides an interactive session for sending commands to Postgres and running ad-hoc queries. For more information about `psql`, refer to the [psql reference](https://www.postgresql.org/docs/15/app-psql.html), in the _PostgreSQL Documentation_.

**Note**: A Neon compute runs Postgres, which means that any Postgres application or standard utility such as `psql` is compatible with Neon. You can also use Postgres client libraries and drivers to connect. However, please be aware that some older client libraries and drivers, including older `psql` executables, are built without [Server Name Indication (SNI)](https://neon.com/docs/reference/glossary#sni) support and require a workaround. For more information, see [Connection errors](https://neon.com/docs/connect/connection-errors).

Neon also provides a passwordless auth feature that uses `psql`. For more information, see [Passwordless auth](https://neon.com/docs/connect/passwordless-connect).

## How to install psql

If you don't have `psql` installed already, follow these steps to get set up:

Tab: Mac (Intel x64)

```bash
brew install libpq
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```

Tab: Mac (Apple Silicon)

```bash
brew install libpq
echo 'export PATH="/opt/homebrew/opt/libpq/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```

Tab: Linux

```bash
sudo apt update
sudo apt install postgresql-client
```

Tab: Windows

Download and install PostgreSQL from https://www.postgresql.org/download/windows/. Ensure `psql` is included in the installation.

## Connect to Neon with psql

The easiest way to connect to Neon using `psql` is with a connection string. You can obtain a connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a role, and the database you want to connect to. A connection string is constructed for you.

From your terminal or command prompt, run the `psql` client with the connection string copied from the Neon **Dashboard**.

```bash
psql postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

**Note**: Neon requires that all connections use SSL/TLS encryption, but you can increase the level of protection using the `sslmode` parameter setting in your connection string. For instructions, see [Connect to Neon securely](https://neon.com/docs/connect/connect-securely).

### Where do I obtain a password?

Your password is included in the Neon connection string, which you can obtain by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.

### What port does Neon use?

Neon uses the default Postgres port, `5432`. If you need to specify the port in your connection string, you can do so as follows:

```bash
psql postgresql://[user]:[password]@[neon_hostname][:port]/[dbname]
```

## Running queries

After establishing a connection, try running the following queries:

```sql
CREATE TABLE my_table AS SELECT now();
SELECT * FROM my_table;
```

The following result set is returned:

```sql
SELECT 1
              now
-------------------------------
 2022-09-11 23:12:15.083565+00
(1 row)
```

## Meta-commands

The `psql` client supports a variety of meta-commands, which act like shortcuts for interacting with your database.
### Benefits of meta-commands

Meta-commands can significantly speed up your workflow by providing quick access to database schemas and other critical information without needing to write full SQL queries. They are especially useful for database management tasks, making it easier to handle administrative duties directly from the command line.

### Available meta-commands

Here are some of the meta-commands that you can use with `psql`.

**Note**: The Neon SQL Editor also supports meta-commands. See [Meta commands in the Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor#meta-commands).

```bash
Informational
  (options: S = show system objects, + = additional detail)
  \d[S+]                 list tables, views, and sequences
  \d[S+]  NAME           describe table, view, sequence, or index
  \da[S]  [PATTERN]      list aggregates
  \dA[+]  [PATTERN]      list access methods
  \dAc[+] [AMPTRN [TYPEPTRN]]  list operator classes
  \dAf[+] [AMPTRN [TYPEPTRN]]  list operator families
  \dAo[+] [AMPTRN [OPFPTRN]]   list operators of operator families
  \dAp[+] [AMPTRN [OPFPTRN]]   list support functions of operator families
  \db[+]  [PATTERN]      list tablespaces
  \dc[S+] [PATTERN]      list conversions
  \dconfig[+] [PATTERN]  list configuration parameters
  \dC[+]  [PATTERN]      list casts
  \dd[S]  [PATTERN]      show object descriptions not displayed elsewhere
  \dD[S+] [PATTERN]      list domains
  \ddp    [PATTERN]      list default privileges
  \dE[S+] [PATTERN]      list foreign tables
  \des[+] [PATTERN]      list foreign servers
  \det[+] [PATTERN]      list foreign tables
  \deu[+] [PATTERN]      list user mappings
  \dew[+] [PATTERN]      list foreign-data wrappers
  \df[anptw][S+] [FUNCPTRN [TYPEPTRN ...]]
                         list [only agg/normal/procedure/trigger/window] functions
  \dF[+]  [PATTERN]      list text search configurations
  \dFd[+] [PATTERN]      list text search dictionaries
  \dFp[+] [PATTERN]      list text search parsers
  \dFt[+] [PATTERN]      list text search templates
  \dg[S+] [PATTERN]      list roles
  \di[S+] [PATTERN]      list indexes
  \dl[+]                 list large objects, same as \lo_list
  \dL[S+] [PATTERN]      list procedural languages
  \dm[S+] [PATTERN]      list materialized views
  \dn[S+] [PATTERN]      list schemas
  \do[S+] [OPPTRN [TYPEPTRN [TYPEPTRN]]]
                         list operators
  \dO[S+] [PATTERN]      list collations
  \dp[S]  [PATTERN]      list table, view, and sequence access privileges
  \dP[itn+] [PATTERN]    list [only index/table] partitioned relations [n=nested]
  \drds [ROLEPTRN [DBPTRN]]    list per-database role settings
  \drg[S] [PATTERN]      list role grants
  \dRp[+] [PATTERN]      list replication publications
  \dRs[+] [PATTERN]      list replication subscriptions
  \ds[S+] [PATTERN]      list sequences
  \dt[S+] [PATTERN]      list tables
  \dT[S+] [PATTERN]      list data types
  \du[S+] [PATTERN]      list roles
  \dv[S+] [PATTERN]      list views
  \dx[+]  [PATTERN]      list extensions
  \dX     [PATTERN]      list extended statistics
  \dy[+]  [PATTERN]      list event triggers
  \l[+]   [PATTERN]      list databases
  \lo_list[+]            list large objects
  \sf[+]  FUNCNAME       show a function's definition
  \sv[+]  VIEWNAME       show a view's definition
  \z[S]   [PATTERN]      same as \dp
```

For more information about meta-commands, see [psql Meta-Commands](https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMANDS).

## Running psql from the Neon CLI

If you have `psql` and the [Neon CLI](https://neon.com/docs/reference/neon-cli) installed, you can run `psql` commands directly from the Neon CLI using the `connection-string` command with the `--psql` option.

```bash
neon connection-string --psql -- -c "SELECT version()"
```

For more examples, see [Neon CLI commands — connection-string](https://neon.com/docs/reference/cli-connection-string).
--- # Source: https://neon.com/llms/data-api-custom-authentication-providers.txt # Custom Authentication Providers > The "Custom Authentication Providers" documentation outlines how Neon users can integrate custom authentication mechanisms with the Neon Data API, detailing configuration steps and supported authentication flows. ## Source - [Custom Authentication Providers HTML](https://neon.com/docs/data-api/custom-authentication-providers): The original HTML version of this documentation Related docs: - [Getting started with Data API](https://neon.com/docs/data-api/get-started) The Data API works with any authentication provider that issues [JSON Web Tokens (JWTs)](https://jwt.io/introduction). While [Neon Auth](https://neon.com/docs/guides/neon-auth) provides the simplest setup, you can use existing authentication infrastructure with providers like Auth0, Clerk, AWS Cognito, and others. ## How it works When you bring your own authentication provider, the JWT validation flow works as follows: ``` ┌─────────────┐ ┌──────────────────┐ ┌─────────────┐ │ Client │ │ Your Auth │ │ Neon Data │ │ Application │ │ Provider │ │ API │ └──────┬──────┘ └────────┬─────────┘ └──────┬──────┘ │ │ │ │ 1. Authenticate │ │ │────────────────────────>│ │ │ │ │ │ 2. Return JWT token │ │ │<────────────────────────│ │ │ │ │ │ 3. API request with │ │ │ Authorization header │ │ │────────────────────────────────────────────────────>│ │ │ │ │ │ 4. Fetch JWKS keys │ │ │<─────────────────────────│ │ │ │ │ │ 5. Return public keys │ │ │──────────────────────────>│ │ │ │ │ │ 6. Validate JWT │ │ │ 7. Extract user_id │ │ │ 8. Apply RLS policies │ │ │ │ │ 9. Return filtered data │ │ │<────────────────────────────────────────────────────│ │ │ │ ``` The key steps: 1. Your auth provider issues [JSON Web Tokens (JWTs)](https://jwt.io/introduction) to authenticated users. 2. Your application passes these JWTs to the Data API in the `Authorization` header. 3. Neon validates the tokens using your provider's [JWKS (JSON Web Key Set)](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets) URL. 4. The Data API enforces [Row-Level Security policies](https://neon.com/docs/guides/neon-rls) based on the user identity in the JWT. ## Add your authentication provider You can configure your authentication provider when you first enable the Data API, or add it later from the **Configuration** tab. Select **Other Provider** from the dropdown and enter: - Your provider's **JWKS URL** (see provider-specific instructions below). - Your **JWT Audience** value, if required by your provider. 
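
Before saving the configuration, it can help to confirm that the JWKS URL you plan to enter is publicly reachable and actually serves a key set. Here's a quick sketch (assuming Node.js 18+ for the built-in `fetch`; the Auth0-style domain below is a placeholder for your own provider's URL):

```javascript
// Sanity-check a JWKS URL: it should return JSON with a non-empty "keys" array.
// The domain below is a placeholder; substitute your provider's JWKS URL.
const jwksUrl = 'https://dev-abc123.us.auth0.com/.well-known/jwks.json';

const res = await fetch(jwksUrl);
const jwks = await res.json();

if (!Array.isArray(jwks.keys) || jwks.keys.length === 0) {
  throw new Error('JWKS URL did not return any signing keys');
}
console.log(`Found ${jwks.keys.length} signing key(s)`);
```
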
## Find your JWKS URL | Provider | JWKS URL Format | | ----------------------------------- | ------------------------------------------------------------------------------------------- | | [Auth0](https://neon.com/docs/data-api/custom-authentication-providers#auth0) | `https://{YOUR_AUTH0_DOMAIN}/.well-known/jwks.json` | | [Clerk](https://neon.com/docs/data-api/custom-authentication-providers#clerk) | `https://{YOUR_CLERK_DOMAIN}/.well-known/jwks.json` | | [AWS Cognito](https://neon.com/docs/data-api/custom-authentication-providers#aws-cognito) | `https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}/.well-known/jwks.json` | | [Azure AD](https://neon.com/docs/data-api/custom-authentication-providers#azure-ad) | `https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys` | | [Descope](https://neon.com/docs/data-api/custom-authentication-providers#descope) | `https://api.descope.com/{PROJECT_ID}/.well-known/jwks.json` | | [Firebase/GCP](https://neon.com/docs/data-api/custom-authentication-providers#firebasegcp) | `https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com` | | [Google Identity](https://neon.com/docs/data-api/custom-authentication-providers#google-identity) | `https://www.googleapis.com/oauth2/v3/certs` | | [Keycloak](https://neon.com/docs/data-api/custom-authentication-providers#keycloak) | `https://{DOMAIN}/auth/realms/{REALM}/protocol/openid-connect/certs` | | [PropelAuth](https://neon.com/docs/data-api/custom-authentication-providers#propelauth) | `https://{YOUR_PROPEL_AUTH_URL}/.well-known/jwks.json` | | [Stack Auth](https://neon.com/docs/data-api/custom-authentication-providers#stack-auth) | `https://api.stack-auth.com/api/v1/projects/{PROJECT_ID}/.well-known/jwks.json` | | [Stytch](https://neon.com/docs/data-api/custom-authentication-providers#stytch) | `https://test.stytch.com/v1/sessions/jwks/{PROJECT_ID}` | | [SuperTokens](https://neon.com/docs/data-api/custom-authentication-providers#supertokens) | `{CORE_CONNECTION_URI}/.well-known/jwks.json` | | [WorkOS](https://neon.com/docs/data-api/custom-authentication-providers#workos) | `https://api.workos.com/sso/jwks/{CLIENT_ID}` | ### Auth0 Your Auth0 JWKS URL follows this format: ```bash https://{YOUR_AUTH0_DOMAIN}/.well-known/jwks.json ``` To find your domain: 1. Open the **Settings** for your application in the Auth0 dashboard 2. Copy your **Domain** value 3. Use it to form your JWKS URL For example, if your domain is `dev-abc123.us.auth0.com`, your JWKS URL would be: ```bash https://dev-abc123.us.auth0.com/.well-known/jwks.json ``` ### Clerk Your Clerk JWKS URL follows this format: ```bash https://{YOUR_CLERK_DOMAIN}/.well-known/jwks.json ``` To find your JWKS URL: 1. Go to the Clerk Dashboard 2. Navigate to **Configure → Developers → API Keys** 3. Click **Show JWT Public Key** 4. Copy the JWKS URL For advanced JWT configuration (custom claims, token lifespans), you can use the dedicated Neon template in Clerk under **Configure > JWT Templates**. ### AWS Cognito Your AWS Cognito JWKS URL follows this format: ```bash https://cognito-idp.{YOUR_AWS_COGNITO_REGION}.amazonaws.com/{YOUR_AWS_COGNITO_USER_POOL_ID}/.well-known/jwks.json ``` To find your JWKS URL: 1. Open the AWS Cognito console 2. Navigate to **User pools** 3. Select your user pool 4. 
Find the **Token signing key URL** (this is your JWKS URL) For example, if your region is `us-east-1` and your user pool ID is `us-east-1_XXXXXXXXX`, your JWKS URL would be: ```bash https://cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXXXX/.well-known/jwks.json ``` ### Azure AD Your Azure Active Directory JWKS URL follows this format: ```bash https://login.microsoftonline.com/{YOUR_TENANT_ID}/discovery/v2.0/keys ``` Replace `{YOUR_TENANT_ID}` with your Azure Active Directory tenant ID. For example, if your tenant ID is `12345678-1234-1234-1234-1234567890ab`, your JWKS URL would be: ```bash https://login.microsoftonline.com/12345678-1234-1234-1234-1234567890ab/discovery/v2.0/keys ``` **Note**: Depending on your Azure AD configuration, you may need to provide a JWT Audience value. ### Descope Your Descope JWKS URL follows this format: ```bash https://api.descope.com/{YOUR_DESCOPE_PROJECT_ID}/.well-known/jwks.json ``` To find your Project ID: 1. Go to the Descope Dashboard 2. Navigate to **Project Settings** 3. Copy your **Project ID** For example, if your Project ID is `P1234`, your JWKS URL would be: ```bash https://api.descope.com/P1234/.well-known/jwks.json ``` ### Firebase/GCP Firebase and Google Cloud Identity Platform share the same authentication infrastructure and use a common JWKS URL for all projects: ```bash https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com ``` You must also provide your Firebase/GCP Project ID as the JWT Audience value. To find your Project ID: 1. Go to the [Firebase Console](https://console.firebase.google.com) 2. Navigate to **Settings** > **General** 3. Copy your **Project ID** Enter your Project ID in the **JWT Audience** field when configuring the Data API. **Note**: Every GCP Identity Platform project automatically creates a corresponding Firebase project, which is why we use the Firebase Console to get the Project ID. ### Google Identity Your Google Identity JWKS URL is: ```bash https://www.googleapis.com/oauth2/v3/certs ``` You must also provide your OAuth 2.0 Client ID as the JWT Audience value. You can find your Client ID in the [Google Cloud Console](https://console.cloud.google.com/apis/credentials) under **APIs & Services > Credentials**. ### Keycloak Your Keycloak JWKS URL follows this format: ```bash https://{YOUR_KEYCLOAK_DOMAIN}/auth/realms/{YOUR_REALM}/protocol/openid-connect/certs ``` Replace: - `{YOUR_KEYCLOAK_DOMAIN}` with your Keycloak domain - `{YOUR_REALM}` with your Keycloak realm name **Note**: To ensure compatibility with the Data API, configure Keycloak to use only one signing algorithm (RS256 or ES256). You can verify this by opening the JWKS URL and checking the keys manually. Depending on your Keycloak configuration, you may also need to provide a JWT Audience value. ### PropelAuth Your PropelAuth JWKS URL follows this format: ```bash https://{YOUR_PROPEL_AUTH_URL}/.well-known/jwks.json ``` To find your PropelAuth URL: 1. Go to your PropelAuth dashboard 2. Navigate to **Backend Integration** in your project settings 3. Copy your Auth URL For example, if your PropelAuth URL is `https://3211758.propelauthtest.com`, your JWKS URL would be: ```bash https://3211758.propelauthtest.com/.well-known/jwks.json ``` ### Stack Auth Your Stack Auth JWKS URL follows this format: ```bash https://api.stack-auth.com/api/v1/projects/{YOUR_PROJECT_ID}/.well-known/jwks.json ``` Replace `{YOUR_PROJECT_ID}` with your Stack Auth project ID. 
For example, if your project ID is `my-awesome-project`, your JWKS URL would be: ```bash https://api.stack-auth.com/api/v1/projects/my-awesome-project/.well-known/jwks.json ``` ### Stytch Your Stytch JWKS URL follows this format: ```bash https://test.stytch.com/v1/sessions/jwks/{YOUR_PROJECT_ID} ``` Replace `{YOUR_PROJECT_ID}` with your Stytch project ID. For example, if your project ID is `my-awesome-project`, your JWKS URL would be: ```bash https://test.stytch.com/v1/sessions/jwks/my-awesome-project ``` **Note**: For production environments, replace `test.stytch.com` with `live.stytch.com`. ### SuperTokens Your SuperTokens JWKS URL follows this format: ```bash {YOUR_SUPER_TOKENS_CORE_CONNECTION_URI}/.well-known/jwks.json ``` To find your Core connection URI: 1. Go to the SuperTokens Dashboard 2. Navigate to **Core Management** 3. Copy your Core connection URI For example, if your connection URI is `https://try.supertokens.io`, your JWKS URL would be: ```bash https://try.supertokens.io/.well-known/jwks.json ``` ### WorkOS Your WorkOS JWKS URL follows this format: ```bash https://api.workos.com/sso/jwks/{YOUR_CLIENT_ID} ``` To find your Client ID: 1. Go to the WorkOS Dashboard 2. Navigate to the **Overview** page 3. Copy your **Client ID** For example, if your Client ID is `client_12345`, your JWKS URL would be: ```bash https://api.workos.com/sso/jwks/client_12345 ``` ## Next steps After configuring your authentication provider, include the JWT in your Data API requests: ```http GET https://your-project.data.neon.tech/v1/posts Authorization: Bearer {your_jwt_token} ``` Then set up [Row-Level Security policies](https://neon.com/docs/data-api/get-started#create-a-table-with-rls) to control data access using the `auth.user_id()` function, which extracts the user ID from your JWT. --- # Source: https://neon.com/llms/data-api-demo.txt # Neon Data API tutorial > The Neon Data API tutorial guides users through setting up and using Neon's Data API, detailing steps for connecting to a Neon database and executing SQL queries programmatically. ## Source - [Neon Data API tutorial HTML](https://neon.com/docs/data-api/demo): The original HTML version of this documentation In this tutorial, we'll walk through our note-taking app to show how you can use Neon's Data API with the `postgrest-js` client library to write queries from your frontend code, with proper authentication and Row-Level Security (RLS) policies ensuring your data stays secure. The Data API is compatible with PostgREST, so you can use any PostgREST client library. ## About the sample application This note-taking app is built with React and Vite. It uses Neon Auth for authentication, the Data API for direct database access, and Drizzle ORM for handling the schema. > If you want to see this notes app in action without installing it yourself, check out this live preview: > [Neon Data API Notes App](https://neon-data-api-neon-auth.vercel.app/) ## Prerequisites To follow this tutorial, you'll need to: 1. [Create a Neon project](https://pg.new) and [enable the Data API](https://neon.com/docs/data-api/get-started#enabling-the-data-api), noting the **Data API URL**. 2. Clone and set up the [demo](https://github.com/neondatabase-labs/neon-data-api-neon-auth): ```bash git clone https://github.com/neondatabase-labs/neon-data-api-neon-auth ``` Follow the README, adding your **Data API URL** to the `.env` file. ## Database Schema The app uses two main tables: `notes` and `paragraphs`. 
Here's how they're structured: ```typescript // src/db/schema.ts - Defines the database tables and their relationships // notes table { id: uuid("id").defaultRandom().primaryKey(), ownerId: text("owner_id") .notNull() .default(sql`auth.user_id()`), title: text("title").notNull().default("untitled note"), createdAt: timestamp("created_at", { withTimezone: true }).defaultNow(), updatedAt: timestamp("updated_at", { withTimezone: true }).defaultNow(), shared: boolean("shared").default(false), } // paragraphs table { id: uuid("id").defaultRandom().primaryKey(), noteId: uuid("note_id").references(() => notes.id), content: text("content").notNull(), createdAt: timestamp("created_at", { withTimezone: true }).defaultNow(), } ``` Each note belongs to a user (via `ownerId`), and paragraphs are linked to notes through `noteId`. ## Secure your tables with RLS Before we dive into the queries, let's first secure our tables. When making direct database queries from the frontend, **Row-Level Security (RLS) policies** are essential. They make sure that users can access **only their own data**. RLS is crucial for any real-world app. RLS policies act as a safety net at the database level, so even if your frontend code has bugs, your data stays protected. Our demo app uses [Drizzle ORM](https://neon.com/docs/guides/rls-drizzle) to define RLS policies, which we highly recommend as a simpler, more maintainable way of writing RLS policies: ```typescript // src/db/schema.ts - RLS policies using Drizzle crudPolicy({ role: authenticatedRole, read: authUid(table.ownerId), modify: authUid(table.ownerId), }), pgPolicy("shared_policy", { for: "select", to: authenticatedRole, using: sql`${table.shared} = true`, }), ``` These Drizzle policies generate the equivalent SQL policies for all CRUD operations (`SELECT`, `INSERT`, `UPDATE`, `DELETE`). For example: ```sql -- SELECT CREATE POLICY "crud-authenticated-policy-select" ON "notes" AS PERMISSIVE FOR SELECT TO "authenticated" USING ((select auth.user_id() = "notes"."owner_id")); -- DELETE (similar for INSERT and UPDATE) CREATE POLICY "crud-authenticated-policy-delete" ON "notes" AS PERMISSIVE FOR DELETE TO "authenticated" USING ((select auth.user_id() = "notes"."owner_id")); CREATE POLICY "shared_policy" ON "notes" AS PERMISSIVE FOR SELECT TO "authenticated" USING ("notes"."shared" = true); ``` The policies ensure: 1. Users can only access their own notes (`SELECT`, `INSERT`, `UPDATE`, `DELETE`) 2. Shared notes are visible to authenticated users 3. Data access is enforced at the database level The paragraphs table uses similar Drizzle policies that check ownership through the parent note: ```typescript // src/db/schema.ts - Paragraphs RLS policies crudPolicy({ role: authenticatedRole, read: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, modify: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, }), pgPolicy("shared_policy", { for: "select", to: authenticatedRole, using: sql`(select notes.shared from notes where notes.id = ${table.noteId})`, }), ``` **Info** About auth.user_id(): Neon's RLS policies use the auth.user_id() function, which extracts the user's ID from the JWT (JSON Web Token) provided by your authentication provider. In this demo, Neon Auth issues the JWTs, and Neon's Data API passes them to Postgres, so RLS can enforce per-user access. For more details on RLS with Data API, see our [Row-Level Security with Neon guide](https://neon.com/docs/guides/row-level-security). 
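
One practical note before we look at queries: the CRUD snippets that follow use a `postgrest` client that already carries the signed-in user's JWT. As a reference, here's a minimal sketch of how such a client can be constructed, mirroring the pattern shown in the [getting started guide](https://neon.com/docs/data-api/get-started) (`VITE_DATA_API_URL` is the environment variable used by the demo repo):

```javascript
import { PostgrestClient } from '@supabase/postgrest-js';

// Build a PostgREST client that sends the signed-in user's JWT with every
// request, so RLS policies can identify the user. The `user` object comes
// from Neon Auth (e.g., the useUser() hook in a React component).
async function createPostgrest(user) {
  const { accessToken } = await user.getAuthJson();
  return new PostgrestClient(import.meta.env.VITE_DATA_API_URL, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
}
```
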
Now that our tables are secure, let's look at how to perform CRUD operations using our note-taking app as an example.

## INSERT

Let's start with creating new notes. The demo app does it like this:

```typescript
const { data, error } = await postgrest
  .from('notes')
  .insert({ title: generateNameNote() })
  .select('id, title, shared, owner_id, paragraphs (id, content, created_at, note_id)')
  .single();
```

When you create a new note, the app automatically generates a unique, codename-style label for the title using `generateNameNote()`. That's why you'll see names like "tender fuchsia" in your notes list.

See [src/routes/note.tsx](https://github.com/neondatabase-labs/neon-data-api-neon-auth/blob/main/src/routes/note.tsx) for the full implementation.

## SELECT

To display all notes for the current user, ordered by creation date, the app uses:

```typescript
const { data, error } = await postgrest
  .from('notes')
  .select('id, title, created_at, owner_id, shared')
  .eq('owner_id', user.id)
  .order('created_at', { ascending: false });
```

> `.eq('owner_id', user.id)` is a `postgrest-js` method that filters results, much like a SQL `WHERE` clause, to only include notes belonging to the current user.

> **Hint:** To get back to your main notes list, click the **"note."** heading at the top of the app.

See [src/routes/index.tsx](https://github.com/neondatabase-labs/neon-data-api-neon-auth/blob/main/src/routes/index.tsx) for the full implementation.

## UPDATE

You can rename any note by editing its title directly in the app (for example, changing "additional jade" to "water the plants"). When you do, the app updates the note in the database behind the scenes. Here's how the app updates a note's title using the UPDATE operation:

```typescript
const { error } = await postgrest.from('notes').update({ title: 'Updated Title' }).eq('id', noteId);
```

> Tip: With postgrest-js, you can chain methods like `.from()`, `.update()`, and `.eq()` to build queries, like in the example above.

See [src/components/app/note-title.tsx](https://github.com/neondatabase-labs/neon-data-api-neon-auth/blob/main/src/components/app/note-title.tsx) for the full implementation.

Now let's look at a more advanced pattern you can use with postgrest-js.

## INSERT and fetch related data

You may have noticed that our earlier `INSERT` example included a `.select()` chained after `.insert()`. This lets you insert a record and immediately fetch it back in a single query. This is a useful pattern provided by postgrest-js's chainable API (as mentioned above). And you can take it further: you can also fetch **related data** (from other tables linked by foreign keys) at the same time.
For example, in our INSERT example from earlier, we immediately fetch the new note **and** any related paragraphs (if they exist):

```typescript
const { data, error } = await postgrest
  .from('notes')
  .insert({ title: generateNameNote() })
  .select('id, title, shared, owner_id, paragraphs (id, content, created_at, note_id)')
  .single();
```

This is particularly useful when you need to:

- Create a record and immediately show it in the UI
- Ensure data consistency by getting the exact state from the database
- Reduce the number of API calls needed

See [src/routes/note.tsx](https://github.com/neondatabase-labs/neon-data-api-neon-auth/blob/main/src/routes/note.tsx)

## Adding delete functionality to the app

If you've played with the app at all, you may also have noticed that there's no way to delete a note. This is the hands-on part of the tutorial. Let's go ahead and add delete functionality to your local version of the app. You'll see how to implement a complete DELETE operation with postgrest-js.

### Step 1: Add a delete button to your note card component

First, update the `NoteCard` component to include a delete button:

```typescript
import { Link } from "@tanstack/react-router";
import moment from "moment";
import { Trash2Icon } from "lucide-react";

export default function NoteCard({
  id,
  title,
  createdAt,
  onDelete,
}: {
  id: string;
  title: string;
  createdAt: string;
  onDelete?: () => void;
}) {
  return (
    <div className="flex items-center justify-between gap-2 rounded-lg border p-4">
      {/* Adjust the route path and styling to match your app */}
      <Link to="/note/$id" params={{ id }}>
        <h2>{title}</h2>
        <p>{moment(createdAt).fromNow()}</p>
      </Link>
      {onDelete && (
        <button type="button" aria-label="Delete note" onClick={onDelete}>
          <Trash2Icon className="h-4 w-4" />
        </button>
      )}
    </div>
  );
}
```

> **Note:** Make sure to import the trash can icon: `import { Trash2Icon } from "lucide-react";`

### Step 2: Add the delete handler to your notes list

Next, add the delete handler to your `NotesList` component:

```typescript
// src/components/app/notes-list.tsx
import { usePostgrest } from '@/lib/postgrest';

// Inside your NotesList component:
const postgrest = usePostgrest();

const handleDelete = async (id: string) => {
  const { error } = await postgrest.from('notes').delete().eq('id', id);
  if (!error) {
    window.location.reload();
  }
};
```

> Make sure to import `usePostgrest` to get the postgrest client

Then pass the delete handler to each `NoteCard`:

```typescript
<NoteCard
  key={note.id}
  id={note.id}
  title={note.title}
  createdAt={note.created_at}
  onDelete={() => handleDelete(note.id)}
/>
```

Your app now includes a delete trash can next to each note. Go ahead and delete a couple of notes to try it out:

> If you can't delete a note, it likely still has paragraphs attached. Postgres prevents deleting notes that have related paragraphs because of the foreign key relationship.

## Enable ON DELETE CASCADE

To allow deleting a note and all its paragraphs in one go, you'll need to update your schema to use `ON DELETE CASCADE` on the `paragraphs.note_id` foreign key. You can do this in the Neon SQL editor:

```sql
ALTER TABLE paragraphs
DROP CONSTRAINT paragraphs_note_id_notes_id_fk,
ADD CONSTRAINT paragraphs_note_id_notes_id_fk
  FOREIGN KEY (note_id)
  REFERENCES notes(id)
  ON DELETE CASCADE;
```

If you get an error about the constraint name, your database may use a different name for the foreign key. To find it, run:

```sql
SELECT conname
FROM pg_constraint
WHERE conrelid = 'paragraphs'::regclass;
```

Then, use the name you find (e.g. `paragraphs_note_id_notes_id_fk`) in the `DROP CONSTRAINT` and `ADD CONSTRAINT` commands above.

## Learn more

- [Getting started with Data API](https://neon.com/docs/data-api/get-started)
- [Neon Auth documentation](https://neon.com/docs/guides/neon-auth)
- [postgrest-js documentation](https://github.com/supabase/postgrest-js)
- [PostgREST documentation](https://docs.postgrest.org/en/v13/)
- [Simplify RLS with Drizzle](https://neon.com/docs/guides/rls-drizzle)

---

# Source: https://neon.com/llms/data-api-get-started.txt

# Getting started with Neon Data API

> The "Getting started with Neon Data API" documentation guides users through the initial setup and usage of Neon's Data API, detailing steps for authentication, making requests, and handling responses within the Neon platform.

## Source

- [Getting started with Neon Data API HTML](https://neon.com/docs/data-api/get-started): The original HTML version of this documentation

Related docs:

- [Neon Auth](https://neon.com/docs/guides/neon-auth)
- [Building a note-taking app](https://neon.com/docs/data-api/demo)

Demo app:

- [Neon Data API demo note-taking app](https://github.com/neondatabase-labs/neon-data-api-neon-auth)

The Neon Data API offers a ready-to-use REST API for your Neon database that's compatible with [PostgREST](https://docs.postgrest.org/en/v13/). You can interact with any table, view, or function using standard HTTP verbs (`GET`, `POST`, `PATCH`, `DELETE`).
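
Because the Data API is plain HTTP, you can call it with any HTTP client. Here's a sketch using `fetch` (the endpoint follows the `https://your-project.data.neon.tech/v1/...` placeholder format used elsewhere in these docs; the `posts` table, query parameters, and JWT source are example values):

```javascript
// Query the Data API directly over HTTP using PostgREST-style parameters.
// The URL and token below are placeholders for your branch's Data API
// endpoint and a JWT issued by your auth provider.
const jwtToken = process.env.DATA_API_JWT;

const res = await fetch(
  'https://your-project.data.neon.tech/v1/posts?select=*&published=eq.true',
  { headers: { Authorization: `Bearer ${jwtToken}` } }
);
console.log(await res.json()); // only rows the caller's RLS policies allow
```
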
To simplify querying, use client libraries like [`postgrest-js`](https://github.com/supabase/supabase-js/tree/master/packages/core/postgrest-js), [`postgrest-py`](https://github.com/supabase/supabase-py/tree/main/src/postgrest), or [`postgrest-go`](https://github.com/supabase-community/postgrest-go): ```javascript const { data } = await client.from('posts').select('*'); ``` **Info** About RLS: When using the Data API, it is essential to set up RLS policies so that you can safely expose your databases to clients such as web apps. Make sure that all of your tables have RLS policies, and that you have carefully reviewed each policy. ## Enable the Data API Enable the Data API at the **branch** level for a single database. **Important**: Data API and [IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow) cannot be used together. To enable Data API, you must first disable IP Allow on your project. To get started, open the **Data API** page from the project sidebar and click **Enable**. Once enabled, you'll get: - A **REST API endpoint** for your branch - Neon Auth as your auth provider - Two Postgres roles: `authenticated` and `anonymous` (coming soon) - GRANT permissions applied to the authenticated role **Info** Custom authentication providers: We recommend Neon Auth with the Data API, but you can bring your own authentication provider (Auth0, Clerk, AWS Cognito, etc.) if you want. See [Custom Authentication Providers](https://neon.com/docs/data-api/custom-authentication-providers) for details. **Info** Having trouble enabling the Data API?: If you encounter a "permission denied to create extension" error when enabling the Data API, this usually means your database was created via direct SQL rather than the Console API. See our [troubleshooting guide](https://neon.com/docs/data-api/troubleshooting) for solutions. ## Secure your Data API The Data API requires two layers of security: 1. Database permissions (GRANT statements, already configured if you accepted the defaults) 2. Row-Level Security (RLS) policies ### Database permissions If you accepted the defaults during setup, Neon automatically applied the necessary GRANT statements. If you skipped that step, you'll need to run these SQL statements manually: ```sql -- For existing tables GRANT SELECT, UPDATE, INSERT, DELETE ON ALL TABLES IN SCHEMA public TO authenticated; -- For future tables ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, UPDATE, INSERT, DELETE ON TABLES TO authenticated; -- For sequences (for identity columns) GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO authenticated; -- Schema usage GRANT USAGE ON SCHEMA public TO authenticated; ``` **Warning** Authentication required: **All requests to the Data API currently require authentication** with a valid JWT token. Anonymous access is not supported yet, but is coming soon. In the near future, we'll provide public/long-lived tokens for anonymous users. ### Create a table with RLS Here's a sample `posts` table secured with RLS. The GRANT statements above give authenticated users access to all tables, which allows the Data API to work. RLS policies then control which specific rows each user can see and modify. For guidance on writing RLS policies, see our [PostgreSQL RLS tutorial](https://neon.com/postgresql/postgresql-administration/postgresql-row-level-security) for the basics, or our recommended [Drizzle RLS guide](https://neon.com/docs/guides/rls-drizzle) for a simpler approach. 
Tab: SQL ```sql CREATE TABLE "posts" ( "id" bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, "userId" text DEFAULT (auth.user_id()) NOT NULL, "content" text NOT NULL, "published" boolean DEFAULT false NOT NULL ); -- Enable RLS and create policies ALTER TABLE "posts" ENABLE ROW LEVEL SECURITY; -- When RLS is enabled, all operations are denied by default unless explicitly allowed by policies. CREATE POLICY "Allow authenticated users to read any post" ON "posts" AS PERMISSIVE FOR SELECT TO "authenticated" USING (true); CREATE POLICY "Allow authenticated users to insert their own posts" ON "posts" AS PERMISSIVE FOR INSERT TO "authenticated" WITH CHECK ((select auth.user_id() = "userId")); CREATE POLICY "Allow authenticated users to update their own posts" ON "posts" AS PERMISSIVE FOR UPDATE TO "authenticated" USING ((select auth.user_id() = "userId")) WITH CHECK ((select auth.user_id() = "userId")); CREATE POLICY "Allow authenticated users to delete their own posts" ON "posts" AS PERMISSIVE FOR DELETE TO "authenticated" USING ((select auth.user_id() = "userId")); ``` Tab: Drizzle (crudPolicy) ```typescript import { sql } from 'drizzle-orm'; import { crudPolicy, authenticatedRole, authUid } from 'drizzle-orm/neon'; import { bigint, boolean, pgTable, text } from 'drizzle-orm/pg-core'; export const posts = pgTable( 'posts', { id: bigint({ mode: 'number' }).primaryKey(), userId: text() .notNull() .default(sql`(auth.user_id())`), content: text().notNull(), published: boolean().notNull().default(false), }, (table) => [ // Policy for authenticated users crudPolicy({ role: authenticatedRole, read: true, // Can also read all posts modify: authUid(table.userId), // Can only modify their own posts }), ] ); ``` Tab: Drizzle (pgPolicy) ```typescript import { sql } from 'drizzle-orm'; import { authenticatedRole, authUid } from 'drizzle-orm/neon'; import { bigint, boolean, pgPolicy, pgTable, text } from 'drizzle-orm/pg-core'; export const posts = pgTable( 'posts', { id: bigint({ mode: 'number' }).primaryKey(), userId: text() .notNull() .default(sql`(auth.user_id())`), content: text().notNull(), published: boolean().notNull().default(false), }, (table) => [ // Authenticated users pgPolicy('Allow authenticated users to read any post', { to: authenticatedRole, for: 'select', using: sql`true`, }), pgPolicy('Allow authenticated users to insert their own posts', { to: authenticatedRole, for: 'insert', withCheck: authUid(table.userId), }), pgPolicy('Allow authenticated users to update their own posts', { to: authenticatedRole, for: 'update', using: authUid(table.userId), withCheck: authUid(table.userId), }), pgPolicy('Allow authenticated users to delete their own posts', { to: authenticatedRole, for: 'delete', using: authUid(table.userId), }), ] ); ``` The `auth.user_id()` function is provided by the Data API and extracts the user ID from JWT tokens, making it available to your RLS policies for enforcing per-user access control. With the `posts` table and its RLS policies in place, you can now securely query and modify posts using the Data API. ## Query from your app The Neon Auth SDK (Stack Auth) manages JWT tokens automatically. 
Here's an example showing how to use it with `postgrest-js`:

```ts
import { PostgrestClient } from '@supabase/postgrest-js';
import { useUser } from '@stackframe/stack';

// Example: fetch notes for the current user.
// Note: useUser() is a React hook, so call it from a component (or a custom
// hook) and pass the resulting user into your data-fetching code.
async function fetchUserNotes(user: ReturnType<typeof useUser>) {
  if (!user) return null;

  const { accessToken } = await user.getAuthJson();

  const pg = new PostgrestClient(import.meta.env.VITE_DATA_API_URL, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });

  const { data, error } = await pg
    .from('notes')
    .select('id, title, created_at, owner_id, shared')
    .eq('owner_id', user.id)
    .order('created_at', { ascending: false });

  return { data, error };
}
```

This example shows the key steps:

1. Get the current user with `useUser()` in a React component and pass it to your data-fetching code
2. Extract their JWT token with `user.getAuthJson()`
3. Create a `PostgrestClient` with proper authentication headers
4. Query the Data API with filtering (`.eq('owner_id', user.id)`) and ordering (`.order('created_at', { ascending: false })`)

To see a complete, working example of an application built with the Data API, Neon Auth, and Postgres RLS, check out our demo note-taking app:

- [Full tutorial](https://neon.com/docs/data-api/demo) - Step-by-step guide to building the app
- [GitHub Repository](https://github.com/neondatabase-labs/neon-data-api-neon-auth)
- [Live Demo](https://neon-data-api-neon-auth.vercel.app/)

## Refresh schema cache

When you modify your database schema (adding tables, columns, or changing structure), the Data API needs to refresh its cache. After making any schema changes, go to the **Data API** section in the Console and click **Refresh schema cache**; the API will then reflect your latest schema.

---

# Source: https://neon.com/llms/data-api-sql-to-rest.txt

# SQL to PostgREST Converter

> The SQL to PostgREST Converter documentation guides Neon users on converting SQL queries into RESTful API endpoints using PostgREST, facilitating seamless integration with Neon's data API.

## Source

- [SQL to PostgREST Converter HTML](https://neon.com/docs/data-api/sql-to-rest): The original HTML version of this documentation

Related docs:

- [Getting started with Neon Data API](https://neon.com/docs/data-api/get-started)
- [Building a note-taking app](https://neon.com/docs/data-api/demo)

The converter lets you enter a SQL query and see the equivalent PostgREST API calls in real time. It supports common SELECT statements with filtering, sorting, pagination, joins, and aggregations.

## About PostgREST

PostgREST automatically generates a RESTful API from your PostgreSQL schema. It supports filtering, sorting, pagination, joins, and many other SQL features through URL parameters. Learn more about PostgREST at [postgrest.org](https://postgrest.org/).

---

# Source: https://neon.com/llms/data-api-troubleshooting.txt

# Data API troubleshooting

> The "Data API troubleshooting" document offers solutions for common issues encountered when using Neon's Data API, detailing error messages and corrective actions specific to Neon's platform.

## Source

- [Data API troubleshooting HTML](https://neon.com/docs/data-api/troubleshooting): The original HTML version of this documentation

## Permission denied to create extension "pg_session_jwt"

```bash
Request failed: database CREATE permission is required for neon_superuser
```

### Why this happens

You created your database with a direct SQL query (`CREATE DATABASE foo;`) instead of using the Console UI or Neon API.
The Data API requires specific database permissions that aren't automatically granted when you create databases this way.

### Fix

Grant the required permissions on the database to the `neon_superuser` role:

```sql
GRANT ALL PRIVILEGES ON DATABASE your_database_name TO neon_superuser;
```

For future databases, create them using the Console UI or Neon API instead of direct SQL. Neon automatically sets up the required permissions when you use these methods.

**Example**

```bash
curl -X POST "https://console.neon.tech/api/v2/projects/${projectId}/branches/${branchId}/databases" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -d '{
    "database": {
      "name": "your_database_name"
    }
  }'
```

---

# Source: https://neon.com/llms/data-types-array.txt

# Postgres Array data type

> The document explains the Postgres Array data type in the context of Neon, detailing how to define, manipulate, and query arrays within a PostgreSQL database managed by Neon.

## Source

- [Postgres Array data type HTML](https://neon.com/docs/data-types/array): The original HTML version of this documentation

In Postgres, the `ARRAY` data type is used to store and manipulate collections of elements in a single column. An array can have variable length and one or more dimensions, but must hold elements of the same data type. Postgres provides a variety of functions and operators for working with arrays.

Arrays are particularly useful when dealing with multiple values that are logically related. For instance, they can store a list of phone numbers for a contact, product categories for an e-commerce item, or even multi-dimensional data for scientific or analytical computations.

## Storage and syntax

Arrays in Postgres are declared by specifying the element type followed by square brackets. For example,

- `INTEGER[]` defines an array of integers.
- `TEXT[][]` defines a two-dimensional array of text values.
- `NUMERIC[3]` defines an array of three numeric values. However, note that Postgres doesn't enforce the specified size of an array.

Array literals in Postgres are written within curly braces `{}` and separated by commas. For instance,

- An array of integers might look like `{1, 2, 3}`.
- Multidimensional arrays use nested curly braces, like `{{1, 2, 3}, {4, 5, 6}}`.

The `ARRAY` constructor syntax can also be used to create arrays. For example,

- `ARRAY[1, 2, 3]` creates an array of integers.
- `ARRAY[[1, 2, 3], [4, 5, 6]]` creates a two-dimensional array.

## Example usage

Consider the case of maintaining a product catalog for an online store. The same product may belong to multiple categories. For example, an iPad could be tagged as 'Electronics', 'Computer', or 'Mobile'. In this case, we can use an array to store the categories for each product.

First, let's create a `products` table with some sample data:

```sql
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  categories TEXT[],
  units_sold INTEGER[][]
);

INSERT INTO products (name, categories, units_sold)
VALUES
  ('Laptop', '{"Electronics","Computer","Office"}', '{{3200, 3300, 3400, 3500}, {3600, 3700, 3800, 3900}}'),
  ('Headphones', '{"Electronics","Audio"}', '{{2400, 2500, 2600, 2700}, {2800, 2900, 3000, 3100}}'),
  ('Table', '{"Furniture","Office"}', '{{900, 950, 1000, 1050}, {1100, 1150, 1200, 1250}}'),
  ('Keyboard', '{"Electronics","Accessories"}', '{{4100, 4200, 4300, 4400}, {4500, 4600, 4700, 4800}}');
```

The `units_sold` column is a two-dimensional array that stores the number of units sold for each product.
The first dimension represents the year, and the second dimension represents the quarter.

Now, we can access the values in the array column `categories` and use it in our queries. For example, the query below finds products belonging to the `Electronics` category.

```sql
SELECT name, categories
FROM products
WHERE 'Electronics' = ANY (categories);
```

Note that the `ANY` operator checks if the specified value exists in the array. This query returns the following result:

```text
| name       | categories                      |
|------------|---------------------------------|
| Laptop     | {Electronics, Computer, Office} |
| Headphones | {Electronics, Audio}            |
| Keyboard   | {Electronics, Accessories}      |
```

## Other examples

### Indexing arrays

Elements in an array can be accessed by their index. Postgres arrays are 1-based, meaning indexing starts at 1. For example, to get the first category of each product:

```sql
SELECT name, categories[1] AS first_category
FROM products;
```

This query returns the following result:

```text
| name       | first_category |
|------------|----------------|
| Laptop     | Electronics    |
| Headphones | Electronics    |
| Table      | Furniture      |
| Keyboard   | Electronics    |
```

Multiple elements can be accessed using slice syntax. For example, to get the first three categories of each product:

```sql
SELECT name, categories[1:3] AS first_three_categories
FROM products;
```

This query returns the following result:

```text
| name       | first_three_categories          |
|------------|---------------------------------|
| Laptop     | {Electronics, Computer, Office} |
| Headphones | {Electronics, Audio}            |
| Table      | {Furniture, Office}             |
| Keyboard   | {Electronics, Accessories}      |
```

Multidimensional arrays can be accessed using multiple indices. For example, to get the number of units sold in the last quarter of the first year for each product, we can use the query:

```sql
SELECT name, units_sold[1][4] AS units_sold_last_quarter
FROM products;
```

This query returns the following:

```text
| name       | units_sold_last_quarter |
|------------|-------------------------|
| Laptop     | 3500                    |
| Headphones | 2700                    |
| Table      | 1050                    |
| Keyboard   | 4400                    |
```

### Modifying arrays

Array values can be modified using functions or by directly indexing into the array. You can change specific elements of an array, add or remove elements, or even replace the entire array.

For example, the query below replaces the `Audio` category across all products with `Sound`.

```sql
UPDATE products
SET categories = array_replace(categories, 'Audio', 'Sound')
WHERE 'Audio' = ANY (categories)
RETURNING *;
```

This query returns the following result:

```text
| id | name       | categories          | units_sold                                    |
|----|------------|---------------------|-----------------------------------------------|
| 2  | Headphones | {Electronics,Sound} | {{2400,2500,2600,2700},{2800,2900,3000,3100}} |
```

### Array functions and operators

Postgres provides a variety of functions and operators for working with arrays. You can find the full list of functions and operators in the [Postgres documentation](https://neon.com/docs/data-types/array#resources). We'll look at some commonly used functions below.
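Two items worth a quick look before the function walkthrough are the containment operator `@>` and `array_append`. Here's a minimal sketch against the `products` table above (the `Sale` tag is just an illustrative value):

```sql
-- Containment: find products tagged with both categories
SELECT name
FROM products
WHERE categories @> ARRAY['Electronics', 'Office'];

-- array_append returns a new array with the element added (no table change here)
SELECT name, array_append(categories, 'Sale') AS with_sale_tag
FROM products
WHERE name = 'Table';
```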
**Length of an array**

We can query the number of categories each product has been tagged with:

```sql
SELECT name, array_length(categories, 1) AS category_count
FROM products;
```

This query returns the following result:

```text
| name       | category_count |
|------------|----------------|
| Laptop     | 3              |
| Headphones | 2              |
| Table      | 2              |
| Keyboard   | 2              |
```

The `array_length` function returns the length of the array in the specified dimension. In this case, we specified the first dimension, which is the number of categories for each product.

**Expanding an array into rows**

We can use the `unnest` function to expand an array into rows. For example, to get the number of laptops sold in each quarter, we can use the query:

```sql
SELECT name, unnest(units_sold) AS units_sold
FROM products
WHERE name = 'Laptop';
```

This query returns the following result:

```text
| name   | units_sold |
|--------|------------|
| Laptop | 3200       |
| Laptop | 3300       |
| Laptop | 3400       |
| Laptop | 3500       |
| Laptop | 3600       |
| Laptop | 3700       |
| Laptop | 3800       |
| Laptop | 3900       |
```

We could use the output of `unnest` to calculate the total number of units sold for each product; for example:

```sql
WITH table_units AS (
  SELECT name, unnest(units_sold) AS total_units_sold
  FROM products
)
SELECT name, sum(total_units_sold)
FROM table_units
GROUP BY name;
```

This query returns the following result:

```text
| name       | sum   |
|------------|-------|
| Keyboard   | 35600 |
| Table      | 8600  |
| Laptop     | 28400 |
| Headphones | 22000 |
```

**Concatenating arrays**

We can concatenate two arrays using the `||` operator:

```sql
SELECT ARRAY[1,2,3] || ARRAY[4,5] AS concatenated_array;
```

This query returns the following result:

```text
| concatenated_array |
|--------------------|
| {1,2,3,4,5}        |
```

**Aggregating values into an array**

We can use the `array_agg` function to produce an array from a set of rows. For example, to get a list of all products that are in the `Electronics` category, we can use the query:

```sql
SELECT array_agg(name) AS product_names
FROM products
WHERE 'Electronics' = ANY (categories);
```

This query returns the following result:

```text
| product_names                |
|------------------------------|
| {Laptop,Headphones,Keyboard} |
```

## Additional considerations

- **Performance and UX**: While arrays provide flexibility, they can be less performant than normalized data structures for large datasets. Compared to a set of rows, arrays can also be more tedious to work with for complex queries.
- **Indexing**: Postgres lets you create indexes on array elements for faster searches. Specifically, an inverted index like `GIN` creates an entry for each element in the array. This allows for fast lookups but can be expensive to maintain for large arrays. See the sketch after this list.
- **No size enforcement**: Postgres supports defining the size of an array or the number of dimensions in the schema. However, Postgres does not enforce these definitions. For example, the query below works successfully:

```sql
CREATE TABLE test_size (
  id SERIAL PRIMARY KEY,
  arr1 INTEGER[3]
);

INSERT INTO test_size (arr1)
VALUES (ARRAY[1,2,3]), (ARRAY[1,2]);
```

It is therefore up to the application to ensure data integrity.
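To make the indexing consideration above concrete, here is a minimal sketch (the index name is ours) of a GIN index over the `categories` column:

```sql
-- GIN index over the elements of the categories arrays
CREATE INDEX idx_products_categories ON products USING GIN (categories);

-- Containment queries like this one can use the index
SELECT name
FROM products
WHERE categories @> ARRAY['Electronics'];
```

Note that the `'Electronics' = ANY (categories)` form used earlier cannot use a GIN index; rewriting such filters as `@>` containment checks allows the planner to use it.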
## Resources - [PostgreSQL documentation - Array Types](https://www.postgresql.org/docs/current/arrays.html) - [PostgreSQL documentation - Array Functions](https://www.postgresql.org/docs/current/functions-array.html) --- # Source: https://neon.com/llms/data-types-boolean.txt # Postgres Boolean data type > The document explains the Postgres Boolean data type, detailing its usage, syntax, and behavior within the Neon database environment. ## Source - [Postgres Boolean data type HTML](https://neon.com/docs/data-types/boolean): The original HTML version of this documentation In Postgres, the Boolean datatype is designed to store truth values. A Boolean column can hold one of three states: `true`, `false`, or `NULL` representing unknown or missing values. For instance, Boolean values can be used in a dataset to represent the status of an order, whether a user is active, or whether a product is in stock. A Boolean value could also be produced as a result of comparisons or logical operations. ## Storage and syntax In SQL statements, Boolean values are represented by the keywords `TRUE`, `FALSE`, and `NULL`. Postgres is flexible and allows for various textual representations of these values: - `TRUE` can also be represented as `t`, `true`, `y`, `yes`, `on`, `1`. - `FALSE` can also be represented as `f`, `false`, `n`, `no`, `off`, `0`. A boolean value is stored as a single byte. ## Example usage Consider a table of users for a web application. We can add a Boolean column to represent whether a user is active or not. The query below creates a `users` table and inserts some sample data: ```sql CREATE TABLE users ( id SERIAL PRIMARY KEY, username TEXT NOT NULL, is_active BOOLEAN, has_paid_subscription BOOLEAN ); INSERT INTO users (username, is_active, has_paid_subscription) VALUES ('alice', TRUE, TRUE), ('bob', TRUE, FALSE), ('charlie', FALSE, TRUE), ('david', NULL, NULL), ('eve', FALSE, FALSE); ``` Say we want to find all the users currently active on the website. The `WHERE` clause accepts a Boolean expression, so we can filter down to the rows where the `is_active` column is `TRUE`. ```sql SELECT * FROM users WHERE is_active = TRUE; ``` This query returns the following: ```text | id | username | is_active | has_paid_subscription | |----|----------|-----------|-----------------------| | 1 | alice | t | t | | 2 | bob | t | f | ``` ## Other examples ### Conditional logic Boolean data types are commonly used in conditional statements like `WHERE`, `IF`, and `CASE`. For example, the `CASE` statement is a control flow structure that allows you to perform `IF-THEN-ELSE` logic in SQL. In the query below, we categorize users based on their activity and account type. ```sql SELECT username, CASE WHEN is_active = TRUE AND has_paid_subscription = TRUE THEN 'Active Paid' WHEN is_active = TRUE AND has_paid_subscription = FALSE THEN 'Active Free' WHEN is_active = FALSE AND has_paid_subscription = TRUE THEN 'Inactive Paid' WHEN is_active = FALSE AND has_paid_subscription = FALSE THEN 'Inactive Free' ELSE 'Unknown' END AS user_status FROM users; ``` This query returns the following: ```text | username | user_status | |----------|---------------| | alice | Active Paid | | bob | Active Free | | charlie | Inactive Paid | | david | Unknown | | eve | Inactive Free | ``` ### Boolean expressions Boolean expressions combine multiple boolean values using operators like `AND`, `OR`, and `NOT`. These expressions return boolean values and are crucial in complex SQL queries. 
For example, we can use a Boolean expression to find all the users who are active but don't have a paid subscription yet.

```sql
SELECT id, username
FROM users
WHERE is_active = TRUE AND has_paid_subscription = FALSE;
```

This query returns the following:

```text
| id | username |
|----|----------|
| 2  | bob      |
```

### Boolean aggregations

Postgres also supports aggregating over a set of Boolean values, using functions like `bool_and()` and `bool_or()`. For example, we can query to check whether any inactive users have a paid subscription.

```sql
SELECT bool_or(has_paid_subscription) AS inactive_paid_users
FROM users
WHERE is_active = FALSE;
```

This query returns the following:

```text
| inactive_paid_users |
|---------------------|
| t                   |
```

This indicates there is at least one inactive user with an ongoing subscription. We should probably email them a reminder to log in.

### Boolean in join conditions

Booleans can be effectively used in the `JOIN` clause to match rows across tables. In the query below, we join the `users` table with the table containing contact information to send a promotional email to all active users.

```sql
WITH contacts (user_id, email) AS (
  VALUES
    (1, 'alice@email.com'),
    (2, 'bob@email.com'),
    (3, 'charlie@email.com'),
    (4, 'david@email.com'),
    (5, 'eve@email.com')
)
SELECT u.id, u.username, c.email
FROM users u
JOIN contacts c ON u.id = c.user_id AND u.is_active = TRUE;
```

This query returns the following:

```text
| id | username | email           |
|----|----------|-----------------|
| 1  | alice    | alice@email.com |
| 2  | bob      | bob@email.com   |
```

## Additional considerations

- **NULL**: `NULL` in boolean terms indicates an unknown state, which is neither `TRUE` nor `FALSE`. In conditional statements, `NULL` values will not equate to `FALSE`.
- **Type Casting**: Be mindful when converting Booleans to other data types. For instance, casting a Boolean to an integer results in `1` for `TRUE` and `0` for `FALSE`. This behavior is useful in aggregations or mathematical operations.
- **Indexing**: An index on a Boolean column is rarely useful because the column has low selectivity. If one value is rare and frequently queried, a partial index (for example, `WHERE is_active = FALSE`) can be more effective.

## Resources

- [PostgreSQL Boolean Type documentation](https://www.postgresql.org/docs/current/datatype-boolean.html)

---

# Source: https://neon.com/llms/data-types-character.txt

# Postgres Character data types

> The document details the character data types available in PostgreSQL, explaining their usage and constraints, specifically for Neon users managing text data within their databases.

## Source

- [Postgres Character data types HTML](https://neon.com/docs/data-types/character): The original HTML version of this documentation

In Postgres, character data types are used to store strings. There are three primary character types: `CHAR(n)`, `VARCHAR(n)`, and `TEXT`.

`CHAR(n)` and `VARCHAR(n)` types are suitable for strings with known or limited length, such as usernames and email addresses, whereas `TEXT` is ideal for storing large variable-length strings, such as blog posts or product descriptions.

## Storage and syntax

- `VARCHAR(n)` allows storing any string up to `n` characters.
- `CHAR(n)` stores strings at a fixed length. If a string is shorter than `n`, it is padded with spaces.
- `TEXT` has no length limit, making it ideal for large texts.

Storing strings requires one or a few bytes of overhead over the actual string length. `CHAR` and `VARCHAR` columns need an extra check at input time to ensure the string length is within the specified limit.
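A minimal sketch of that input-time check (the table and values here are illustrative):

```sql
CREATE TABLE codes (code VARCHAR(3));

INSERT INTO codes VALUES ('abc');  -- OK: within the 3-character limit
INSERT INTO codes VALUES ('abcd'); -- ERROR: value too long for type character varying(3)
```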
Most Postgres string functions take and return `TEXT` values. String values are represented as literals in single quotes. For example, `'hello'` is a string literal.

## Example usage

Consider a database tracking data for a library. We have books with titles and optional descriptions. Titles are usually of a similar length, so they can be modeled with a `CHAR` type. However, descriptions can vary significantly in length, so they are assigned the `TEXT` type.

The query below creates a `books` table and inserts some sample data:

```sql
CREATE TABLE books (
  id SERIAL PRIMARY KEY,
  title CHAR(50),
  description TEXT
);

INSERT INTO books (title, description)
VALUES
  ('Postgres Guide', 'A comprehensive guide to PostgreSQL.'),
  ('Data Modeling Essentials', NULL),
  ('SQL for Professionals', 'An in-depth look at advanced SQL techniques.');
```

To find books with descriptions, you can use the following query:

```sql
SELECT title
FROM books
WHERE description IS NOT NULL;
```

This query returns the following:

```text
title
----------------------------------------------------
Postgres Guide
SQL for Professionals
```

## Other examples

### String functions and operators

Postgres provides various functions and operators for manipulating character data. For instance, the `||` operator concatenates strings. The query below joins the title and description columns together:

```sql
SELECT title || ' - ' || description AS full_description
FROM books;
```

This query returns the following:

```text
full_description
----------------------------------------------------------------------
Postgres Guide - A comprehensive guide to PostgreSQL.
SQL for Professionals - An in-depth look at advanced SQL techniques.
```

For more string functions and operators, see [PostgreSQL String Functions and Operators](https://www.postgresql.org/docs/current/functions-string.html).

### Pattern matching

With `VARCHAR` and `TEXT`, you can use pattern matching to find specific text. The `LIKE` operator is commonly used for this purpose.

```sql
SELECT id, title
FROM books
WHERE title LIKE 'Data%';
```

This returns books whose titles start with "Data".

```text
id | title
----+----------------------------------------------------
  2 | Data Modeling Essentials
```

## Additional considerations

- **Performance**: There are no significant performance differences between the three types. The length-constrained types `CHAR(n)` and `VARCHAR(n)` can be useful for data validation.
- **Function support**: All character types support a wide range of functions and operators for string manipulation and pattern matching.

## Resources

- [PostgreSQL Character Types documentation](https://www.postgresql.org/docs/current/datatype-character.html)

---

# Source: https://neon.com/llms/data-types-date-and-time.txt

# Postgres Date and Time data types

> The document outlines the various date and time data types available in PostgreSQL, detailing their syntax, usage, and functions, specifically tailored for Neon users to manage temporal data effectively.

## Source

- [Postgres Date and Time data types HTML](https://neon.com/docs/data-types/date-and-time): The original HTML version of this documentation

Postgres offers a rich set of native data types for storing date and time values. Both moment-in-time and interval data can be stored, and Postgres provides a variety of functions to query and manipulate them. Modeling date and time enables precise timestamping and duration calculations, and is essential in use cases related to finance, logistics, event logging, and so on.
## Storage and syntax

There are 5 primary date/time types in Postgres:

- `DATE` - represents a date value, stored as 4 bytes. Resolution is 1 day.
- `TIME` - represents a time-of-day value, stored as 8 bytes. Resolution is 1 microsecond.
- `TIMESTAMP` - represents a combined date and time value, stored as 8 bytes. Resolution is 1 microsecond.
- `TIMESTAMPTZ` - represents a combined date and time value, along with time zone information, stored as 8 bytes. Resolution is 1 microsecond. It is stored internally as a UTC value, but is displayed in the time zone set by the client.
- `INTERVAL` - represents a duration of time, stored as 16 bytes. Resolution is 1 microsecond. Optionally, you can restrict the set of values stored to a larger unit of time (e.g., `INTERVAL MONTH`).

Date/time values are specified as string literals. Postgres accepts most of the standard datetime formats. For example:

```sql
SELECT
  '2024-01-01'::DATE AS date_value,
  '09:00:00'::TIME AS time_value,
  '2024-01-01 09:00:00'::TIMESTAMP AS timestamp_value,
  '2024-01-01 09:00:00-05'::TIMESTAMPTZ AS timestamptz_value,
  '1 month'::INTERVAL AS interval_value;
```

There are also some special date/time literals that can be used in queries. Some of them are:

- `epoch` - represents the Unix epoch (1970-01-01 00:00:00 UTC)
- `infinity` - represents an infinite timestamp, greater than all other timestamps
- `-infinity` - represents an infinite timestamp, smaller than all other timestamps
- `now` - represents the current timestamp

A short sketch showing these literals in action follows the examples below.

## Example usage

Consider a conference event management system that tracks schedules for planned sessions. The query below creates a table to store all the sessions and inserts some sample data.

```sql
CREATE TABLE conference_sessions (
  session_id SERIAL PRIMARY KEY,
  session_title TEXT NOT NULL,
  session_date DATE NOT NULL,
  start_time TIMESTAMPTZ NOT NULL,
  planned_duration INTERVAL NOT NULL,
  finish_time TIMESTAMPTZ
);

INSERT INTO conference_sessions (session_title, session_date, start_time, planned_duration, finish_time)
VALUES
  ('Keynote Speech', '2024-05-15', '2024-05-15 09:00:00+00', '2 hours', '2024-05-15 11:30:00+00'),
  ('Data Science Workshop', '2024-05-16', '2024-05-16 11:00:00+00', '3 hours', '2024-05-16 14:00:00+00'),
  ('AI Panel Discussion', '2024-05-17', '2024-05-17 14:00:00+00', '1.5 hours', '2024-05-17 15:20:00+00');
```

**Filtering on date/time values**

You can find all sessions scheduled for a specific date using a query like this:

```sql
SELECT session_title, start_time
FROM conference_sessions
WHERE session_date = '2024-05-16';
```

The query returns the following values:

```text
     session_title     |       start_time
-----------------------+------------------------
 Data Science Workshop | 2024-05-16 11:00:00+00
```

**Arithmetic operations with date/time**

You can write a query like this to find sessions that went over the planned duration:

```sql
SELECT session_title, planned_duration, finish_time - start_time AS actual_duration
FROM conference_sessions
WHERE finish_time - start_time > planned_duration;
```

The query returns the following values:

```text
 session_title  | planned_duration | actual_duration
----------------+------------------+-----------------
 Keynote Speech | 02:00:00         | 02:30:00
```

**Aggregating date/time values**

You can write a query like this to find the average duration of all sessions:

```sql
SELECT AVG(finish_time - start_time) AS avg_duration
FROM conference_sessions;
```

The query returns the following value:

```text
 avg_duration
--------------
 02:16:40
```
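As promised, here is a small sketch of the special literals against the same table. Note that a literal like `'now'` is converted to a timestamp as soon as the constant is parsed, which can surprise you in prepared statements and column defaults; use the `now()` function when you want the time determined at execution:

```sql
-- Sessions that start after the current moment
SELECT session_title
FROM conference_sessions
WHERE start_time > 'now';

-- 'infinity' compares greater than every finite timestamp
SELECT 'infinity'::TIMESTAMPTZ > max(start_time) AS always_true
FROM conference_sessions;
```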
## Other examples

### Date and time functions

Postgres offers a variety of functions for manipulating date and time values, such as `EXTRACT`, `AGE`, `OVERLAPS`, and more. For example, you can run this query to see if the times for any two sessions overlapped:

```sql
SELECT a.session_title AS session_a,
       b.session_title AS session_b,
       a.start_time AS session_a_start,
       b.start_time AS session_b_start
FROM conference_sessions a, conference_sessions b
WHERE a.session_id < b.session_id
  AND (a.start_time, a.planned_duration) OVERLAPS (b.start_time, b.planned_duration);
```

This query returns no rows, indicating that there are no overlapping sessions.

### Handling time zones

Postgres supports adding time zone information to both time-of-day (`TIME WITH TIME ZONE`) and moment-in-time (`TIMESTAMP WITH TIME ZONE` / `TIMESTAMPTZ`) values.

- If you use a time-zone-unaware type (e.g., `TIME` or `TIMESTAMP`), Postgres ignores any time zone information provided in the input string.
- If you use a time-zone-aware type (e.g., `TIMETZ` or `TIMESTAMPTZ`), Postgres converts the input string to UTC and stores it internally. It then displays the value in the time zone set for the session.

To illustrate this, you can create a table with both time-zone-aware and -unaware columns, and insert a sample row:

```sql
CREATE TABLE time_example (
  ts TIMESTAMP,
  tstz_utc TIMESTAMPTZ,
  tstz_pst TIMESTAMPTZ
);

INSERT INTO time_example (ts, tstz_utc, tstz_pst)
VALUES ('2024-01-01 09:00:00-08', '2024-01-01 09:00:00+00', '2024-01-01 09:00:00-08');
```

You can then check the current time zone set for the session:

```sql
SHOW timezone;
-- Returns 'GMT' (same as UTC)
```

Now, if you query the table:

```sql
SELECT * FROM time_example;
```

This query returns the following:

```text
         ts          |        tstz_utc        |        tstz_pst
---------------------+------------------------+------------------------
 2024-01-01 09:00:00 | 2024-01-01 09:00:00+00 | 2024-01-01 17:00:00+00
```

Postgres ignores the time zone information for the first column and returns the second and third columns in the UTC time zone.

## Additional considerations

- **Indexing**: Date/time values often involve range queries and sorting. Indexing date/time columns can thus significantly improve query performance.
- **Daylight saving time**: Working with time zones can be tricky, especially around daylight saving time transitions. For additional details, refer to the [PostgreSQL Date/Time Types documentation](https://www.postgresql.org/docs/current/datatype-datetime.html).

## Resources

- [PostgreSQL documentation - Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html)
- [PostgreSQL documentation - Date/Time Functions](https://www.postgresql.org/docs/current/functions-datetime.html)

---

# Source: https://neon.com/llms/data-types-decimal.txt

# Postgres Decimal data types

> The document explains the use and implementation of Postgres Decimal data types within Neon, detailing their precision, scale, and appropriate use cases for handling exact numeric data.

## Source

- [Postgres Decimal data types HTML](https://neon.com/docs/data-types/decimal): The original HTML version of this documentation

In Postgres, decimal data types are used to represent numbers with arbitrarily high precision. They are crucial in financial applications and scientific computation, where exact precision is required for numerical calculations.

## Storage and syntax

Postgres provides a single decimal/numeric type referred to as `DECIMAL` or `NUMERIC`.
It offers user-defined precision and can represent numbers exactly up to a certain number of digits.

The syntax for defining a decimal column is `DECIMAL(precision, scale)` or `NUMERIC(precision, scale)`, where:

- `precision` is the total count of significant digits in the number (both to the left and right of the decimal point).
- `scale` is the count of decimal digits in the fractional part.

Declaring a column as `NUMERIC` without specifying precision and scale stores numbers of any precision exactly (up to the implementation limit).

We illustrate the behavior of `NUMERIC` with the following example:

```sql
SELECT
  1234.56::NUMERIC(10, 4) AS num_A,
  1234.56::NUMERIC(10, 1) AS num_B,
  1234.56789::NUMERIC AS num_C;
```

This query yields the following output:

```text
   num_a   | num_b  |   num_c
-----------+--------+------------
 1234.5600 | 1234.6 | 1234.56789
```

The `NUMERIC(10, 4)` cast pads the value with zeros to 4 decimal places and remains exact, while the `NUMERIC(10, 1)` cast rounds the value to 1 decimal place, losing precision. When no precision and scale are specified, the number is stored exactly as entered.

## Example usage

Consider a financial application managing user portfolios. Here, `DECIMAL` is ideal for storing currency values to avoid rounding errors. For example, representing the price of a stock or the total value of a portfolio.

The following SQL creates a `portfolios` table:

```sql
CREATE TABLE portfolios (
  portfolio_id SERIAL PRIMARY KEY,
  user_id INTEGER NOT NULL,
  stock_symbol TEXT NOT NULL,
  shares_owned DECIMAL(10, 4),
  price_per_share DECIMAL(10, 2)
);

INSERT INTO portfolios (user_id, stock_symbol, shares_owned, price_per_share)
VALUES
  (101, 'AAPL', 150.1234, 145.67),
  (102, 'MSFT', 200.000, 214.53);
```

## Other examples

### Arithmetic operations

Postgres allows various arithmetic operations on decimal types. These operations maintain precision and are critical in contexts where rounding errors could be costly. For example, the following query calculates the total value of each stock holding with exact decimal representation:

```sql
SELECT price_per_share * shares_owned AS total_value
FROM portfolios;
```

This query yields the following output:

```text
 total_value
--------------
 21868.475678
 42906.000000
```

## Differences from floating-point

It's important to differentiate `DECIMAL`/`NUMERIC` from floating-point types (`REAL`, `DOUBLE PRECISION`):

- **Precision**: `DECIMAL`/`NUMERIC` types maintain exact precision, while floating-point types are approximate and can introduce rounding errors.
- **Performance**: Operations on `DECIMAL`/`NUMERIC` types are generally slower than floating-point types due to the precision and complexity of calculations.

## Additional considerations

- **Range and precision**: Always define `DECIMAL`/`NUMERIC` with an appropriate range and precision based on the application's requirements. Overestimating precision can lead to unnecessary storage and performance overhead.

## Resources

- [PostgreSQL documentation - Numeric Types](https://www.postgresql.org/docs/current/datatype-numeric.html)

---

# Source: https://neon.com/llms/data-types-floating-point.txt

# Postgres Floating-point data types

> The document details the implementation and usage of floating-point data types in Neon, specifically focusing on the `float4` and `float8` types within PostgreSQL, including their precision, storage requirements, and applicable use cases.
## Source

- [Postgres Floating-point data types HTML](https://neon.com/docs/data-types/floating-point): The original HTML version of this documentation

In Postgres, floating point data types are used to represent numbers that might have a fractional part. They are useful when a wide range of values and fast arithmetic matter more than exact precision, such as in scientific measurements and statistical calculations.

## Storage and syntax

Postgres supports two primary floating-point types:

1. `REAL`: Also known as "single precision," `REAL` occupies 4 bytes of storage. It offers a precision of at least 6 decimal digits.
2. `DOUBLE PRECISION`: Known as "double precision," this type uses 8 bytes of storage and provides a precision of at least 15 decimal digits.

Both types are approximate numeric types, meaning they may have rounding errors and are not recommended for storing exact decimal values, like monetary data.

## Example usage

For a weather data application, `REAL` might be used for storing temperature readings, where extreme precision isn't critical, as in the following example:

```sql
CREATE TABLE weather_data (
  reading_id SERIAL PRIMARY KEY,
  temperature REAL NOT NULL,
  humidity REAL NOT NULL
);

INSERT INTO weather_data (temperature, humidity)
VALUES
  (23.5, 60.2),
  (20.1, 65.3),
  (22.8, 58.1);
```

For more complex scientific calculations involving extensive decimal data, `DOUBLE PRECISION` would be more appropriate, as in this example:

```sql
CREATE TABLE scientific_data (
  measurement_id SERIAL PRIMARY KEY,
  precise_temperature DOUBLE PRECISION NOT NULL,
  co2_levels DOUBLE PRECISION NOT NULL,
  measurement_time TIMESTAMP WITHOUT TIME ZONE NOT NULL
);

INSERT INTO scientific_data (precise_temperature, co2_levels, measurement_time)
VALUES
  (23.456789, 415.123456789, '2024-02-03 10:00:00'),
  (20.123456, 417.123789012, '2024-02-03 11:00:00'),
  (22.789012, 418.456123789, '2024-02-03 12:00:00');
```

## Other examples

### Arithmetic operations

Floating-point types support the standard arithmetic operations: addition, subtraction, multiplication, division, and modulus. However, operations like division might lead to potential rounding errors and precision loss.

```sql
SELECT 10.0 / 3.0;
```

This query yields `3.3333333333333333`, which does not represent the quantity `10 / 3` exactly, but rather rounds it to the nearest representable value. When performing a series of operations, these rounding errors can accumulate and lead to significant precision loss.

### Special floating-point values

Postgres floating-point types can represent special values like `'infinity'`, `'-infinity'`, and `'NaN'` (not a number). These values can be useful in certain mathematical or scientific computations.
Consider a table named `calculations`, which might store the results of various scientific computations, including temperature changes, pressure levels, and calculation errors that could potentially result in `'infinity'`, `'-infinity'`, or `'NaN'` values:

```sql
CREATE TABLE calculations (
  calculation_id SERIAL PRIMARY KEY,
  temperature_change DOUBLE PRECISION,
  pressure_level DOUBLE PRECISION,
  error_margin DOUBLE PRECISION
);

-- Inserting special floating-point values
INSERT INTO calculations (temperature_change, pressure_level, error_margin)
VALUES
  ('infinity', 101.325, 0.001),  -- An example where temperature change is beyond measurable scale
  ('-infinity', 0.0, 0.0001),    -- An example with a negative infinite value
  ('NaN', 101.325, 'NaN');       -- Examples of undefined results or unmeasurable quantities
```

Notice that you must use single quotes to wrap these values as shown above.

## Additional considerations

- **Accuracy and rounding**: Be aware of rounding errors. For applications requiring exact decimal representation (like financial calculations), consider using `NUMERIC` or `DECIMAL` types instead.
- **Performance**: While `DOUBLE PRECISION` offers more precision, it might not be as performant due to the larger storage size.

## Resources

- [PostgreSQL documentation - Numeric Types](https://www.postgresql.org/docs/current/datatype-numeric.html)

---

# Source: https://neon.com/llms/data-types-integer.txt

# Postgres Integer data types

> The document details the integer data types available in PostgreSQL, explaining their storage requirements and usage, specifically for Neon users managing database schemas.

## Source

- [Postgres Integer data types HTML](https://neon.com/docs/data-types/integer): The original HTML version of this documentation

In Postgres, integer data types are used for storing numerical values without a fractional component. They are useful as identifiers and counters, and in many other common data modeling tasks. Postgres offers multiple integer types, catering to different ranges of values and storage sizes.

## Storage and syntax

Postgres supports three primary integer types. Choosing the appropriate integer type depends on the range of data expected.

1. `SMALLINT`: A small-range integer, occupying 2 bytes of storage. It's useful for columns with a small range of values.
2. `INTEGER`: The standard integer type, using 4 bytes of storage. It's the most commonly used since it balances storage/performance efficiency and range capacity.
3. `BIGINT`: A large-range integer, taking up 8 bytes. It's used when the range of `INTEGER` is insufficient.

Note that Postgres doesn't support unsigned integers. All integer types can store both positive and negative values.

## Example usage

Consider a database for a small online bookstore. Here, `SMALLINT` could be used for storing the number of copies of a book in stock, while `INTEGER` would be appropriate for a unique identifier for each book. The query below creates a `books` table with these columns:

```sql
CREATE TABLE books (
  book_id INTEGER PRIMARY KEY,
  title TEXT NOT NULL,
  copies_in_stock SMALLINT
);

INSERT INTO books (book_id, title, copies_in_stock)
VALUES
  (1, 'War and Peace', 50),
  (2, 'The Great Gatsby', 20),
  (3, 'The Catcher in the Rye', 100);
```

## Other examples

### Integer operations

Postgres supports various arithmetic operations on integer types, including addition, subtraction, multiplication, and division. Note that the division of integers does not yield a fractional result; it truncates the result to an integer.
```sql
SELECT 10 / 4; -- Yields 2, not 2.5
```

### Sequences and auto-increment

Postgres also provides `SERIAL`, which is a pseudo-type for creating auto-incrementing integers, often used for primary keys. It's effectively an `INTEGER` that automatically increments with each new row insertion. There are also `BIGSERIAL` and `SMALLSERIAL` for auto-incrementing `BIGINT` and `SMALLINT` columns, respectively.

For example, we can create an `orders` table with an auto-incrementing `order_id` column:

```sql
CREATE TABLE orders (
  order_id SERIAL PRIMARY KEY,
  order_details TEXT
);

INSERT INTO orders (order_details)
VALUES
  ('Order 1'),
  ('Order 2'),
  ('Order 3')
RETURNING *;
```

This query returns the following:

```text
 order_id | order_details
----------+---------------
        1 | Order 1
        2 | Order 2
        3 | Order 3
```

The `order_id` column gets a unique integer value for each new order.

## Additional considerations

- **Data integrity**: Integer types strictly store numerical values. Attempting to insert non-numeric data, or a value outside the range of the particular type, will result in an error.
- **Performance**: Choosing the correct integer type (`SMALLINT`, `INTEGER`, `BIGINT`) based on the expected value range can optimize storage efficiency and performance.

## Resources

- [PostgreSQL documentation - Numeric Types](https://www.postgresql.org/docs/current/datatype-numeric.html)

---

# Source: https://neon.com/llms/data-types-introduction.txt

# Postgres data types

> The document outlines the various PostgreSQL data types supported by Neon, detailing their usage and characteristics to facilitate effective database management within the Neon platform.

## Source

- [Postgres data types HTML](https://neon.com/docs/data-types/introduction): The original HTML version of this documentation

Get started with commonly-used Postgres data types with Neon's data type guides. For other data types that Postgres supports, visit the official Postgres [Data Types](https://www.postgresql.org/docs/current/datatype.html) documentation.

- [Array](https://neon.com/docs/data-types/array): Manage collections of elements using arrays
- [Boolean](https://neon.com/docs/data-types/boolean): Represent truth values in Postgres
- [Date and time](https://neon.com/docs/data-types/date-and-time): Work with date and time values in Postgres
- [Character](https://neon.com/docs/data-types/character): Work with text data in Postgres
- [JSON](https://neon.com/docs/data-types/json): Model JSON data in Postgres
- [Decimal](https://neon.com/docs/data-types/decimal): Work with exact numerical values in Postgres
- [Floating point](https://neon.com/docs/data-types/floating-point): Work with float values in Postgres
- [Integer](https://neon.com/docs/data-types/integer): Work with integers in Postgres
- [Tsvector](https://neon.com/docs/data-types/tsvector): Optimize full-text search in Postgres with the tsvector data type
- [UUID](https://neon.com/docs/data-types/uuid): Work with UUIDs in Postgres

---

# Source: https://neon.com/llms/data-types-json.txt

# Postgres JSON data types

> The document outlines the use of JSON data types in PostgreSQL within the Neon platform, detailing how to store, query, and manipulate JSON data effectively in a Neon-managed PostgreSQL database.

## Source

- [Postgres JSON data types HTML](https://neon.com/docs/data-types/json): The original HTML version of this documentation

Postgres supports JSON (JavaScript Object Notation) data types, providing a flexible way to store and manipulate semi-structured data.
The two types are `JSON` and `JSONB`. They support largely the same functions and operators, but store data differently, with trade-offs related to data ingestion and querying performance.

`JSON` and `JSONB` are ideal for storing data that doesn't fit neatly into a traditional relational model, since new fields can be added without altering the database schema. Additionally, they can also be used to model document-like data typically stored in NoSQL databases.

## Storage and syntax

### JSON

- The `JSON` data type stores `JSON` data in text format.
- It preserves an exact copy of the original `JSON` input, including whitespace and ordering of object keys.
- An advantage over storing `JSON` data in a `TEXT` column is that Postgres validates the `JSON` data at ingestion time, ensuring it is well-formed.

### JSONB

- The `JSONB` (JSON Binary) data type stores `JSON` data in a decomposed binary format.
- Unlike `JSON`, `JSONB` does not preserve whitespace or the order of object keys. For duplicate keys, only the last value is stored.
- `JSONB` is more efficient for querying, as it doesn't require re-parsing the `JSON` data every time it is accessed.

`JSON` values can be created from string literals by casting. For example:

```sql
SELECT '{"name": "Alice", "age": 30}'::JSON AS col_json,
       '[1, 2, "foo", null]'::JSONB AS col_jsonb;
```

This query returns the following:

```text
           col_json           |      col_jsonb
------------------------------+---------------------
 {"name": "Alice", "age": 30} | [1, 2, "foo", null]
```

## Example usage

Consider the case of managing user profiles for a social media application. Profile data is semi-structured, with a set of fields common to all users, while other fields are optional and may vary across users. `JSONB` is a good fit for this use case.

Using the query below, we can create a table to store user profiles:

```sql
CREATE TABLE user_profiles (
  id SERIAL PRIMARY KEY,
  profile JSONB NOT NULL
);

INSERT INTO user_profiles (profile)
VALUES
  ('{"name": "Alice", "age": 30, "interests": ["music", "travel"], "settings": {"privacy": "public", "notifications": true, "theme": "light"}}'),
  ('{"name": "Bob", "age": 25, "interests": ["photography", "cooking"], "settings": {"privacy": "private", "notifications": false}, "city": "NYC"}'),
  ('{"name": "Charlie", "interests": ["music", "cooking"], "settings": {"privacy": "private", "notifications": true, "language": "English"}}');
```

With `JSONB`, we can directly query and manipulate elements within the `JSON` structure. For example, to find all the users interested in music, we can run the query:

```sql
SELECT id, profile -> 'name' AS name, profile -> 'interests' AS interests
FROM user_profiles
WHERE profile @> '{"interests":["music"]}'::JSONB;
```

The `@>` operator checks if the left `JSONB` operand contains the right `JSONB` operand as a subset, while the `->` operator extracts the value of a `JSON` key as a `JSON` value. This query returns the following:

```text
 id |   name    |      interests
----+-----------+----------------------
  1 | "Alice"   | ["music", "travel"]
  3 | "Charlie" | ["music", "cooking"]
```

Note that the `name` values returned are still in `JSON` format.
To extract the value as text, we can use the `->>` operator instead:

```sql
SELECT id, profile ->> 'name' AS name
FROM user_profiles;
```

This query returns the following:

```text
 id |  name
----+---------
  1 | Alice
  2 | Bob
  3 | Charlie
```

## JSON functions and operators

Postgres implements several functions and operators for querying and manipulating `JSON` data, including these functions described in the Neon documentation:

- [json_array_elements](https://neon.com/docs/functions/json_array_elements)
- [jsonb_array_elements](https://neon.com/docs/functions/jsonb_array_elements)
- [json_build_object](https://neon.com/docs/functions/json_build_object)
- [json_each](https://neon.com/docs/functions/json_each)
- [json_extract_path](https://neon.com/docs/functions/json_extract_path)
- [json_extract_path_text](https://neon.com/docs/functions/json_extract_path_text)
- [json_object](https://neon.com/docs/functions/json_object)
- [json_populate_record](https://neon.com/docs/functions/json_populate_record)
- [json_to_record](https://neon.com/docs/functions/json_to_record)

For additional `JSON` operators and functions, refer to the [official PostgreSQL documentation](https://www.postgresql.org/docs/current/functions-json.html).

### Nested data

Postgres supports storing nested `JSON` values. For example, in the user profile table, the `settings` field is a `JSON` object itself. The nested values can be extracted by chaining the `->` operator. For example, to access the `privacy` setting for all users, you can run the query:

```sql
SELECT id, profile -> 'name' AS name, profile -> 'settings' ->> 'privacy' AS privacy
FROM user_profiles;
```

This query returns the following:

```text
 id |   name    | privacy
----+-----------+---------
  1 | "Alice"   | public
  2 | "Bob"     | private
  3 | "Charlie" | private
```

### Modifying JSONB data

The `JSONB` type supports updating individual fields. For example, the query below sets the `privacy` setting for all public users to `friends-only`:

```sql
UPDATE user_profiles
SET profile = jsonb_set(profile, '{settings, privacy}', '"friends-only"')
WHERE profile -> 'settings' ->> 'privacy' = 'public';
```

`jsonb_set` is a Postgres function that takes a `JSONB` value, a path to the field to update, and the new value. The path is specified as an array of keys. In-place field updates are not supported for the `JSON` type.

### Indexing JSONB data

Postgres supports GIN (Generalized Inverted Index) indexes for `JSONB` data, which can improve query performance significantly.

```sql
CREATE INDEX idxgin ON user_profiles USING GIN (profile);
```

This makes evaluation of the key-exists (`?`) and containment (`@>`) operators efficient. For example, the query to fetch all users who have music as an interest can leverage this index.

```sql
SELECT *
FROM user_profiles
WHERE profile @> '{"interests":["music"]}';
```

## Additional considerations

### JSON vs JSONB

`JSONB` is the recommended data type for storing `JSON` data in Postgres for a few reasons.

- **Indexing**: `JSONB` allows for the creation of GIN (Generalized Inverted Index) indexes, which makes searching within `JSONB` columns faster.
- **Performance**: The `JSONB` binary format is more efficient to query and manipulate, as it doesn't require re-parsing the `JSON` data for each access. It also supports in-place updates to individual fields.
- **Data integrity**: `JSONB` ensures that keys in an object are unique.

There might be some legacy use cases where preserving the exact format of the `JSON` data is important.
In such cases, the `JSON` data type can be used. ## Resources - [PostgreSQL documentation - JSON Types](https://www.postgresql.org/docs/current/datatype-json.html) - [PostgreSQL documentation - JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html) --- # Source: https://neon.com/llms/data-types-tsvector.txt # Postgres tsvector data type > The document explains the Postgres `tsvector` data type, detailing its structure and usage within Neon for full-text search capabilities. ## Source - [Postgres tsvector data type HTML](https://neon.com/docs/data-types/tsvector): The original HTML version of this documentation `tsvector` is a specialized Postgres data type designed for full-text search operations. It represents a document in a form optimized for text search, where each word is reduced to its root form (lexeme) and stored with information about its position and importance. In Postgres, the `tsvector` data type is useful for implementing efficient full-text search capabilities, allowing for fast and flexible searching across large volumes of text data. ## Storage and syntax A `tsvector` value is a sorted list of distinct lexemes, which are words that have been normalized to merge different variants of the same word. Each lexeme can be followed by position(s) and/or weight(s). The general syntax for a `tsvector` is: ``` 'word1':1,3 'word2':2A ... ``` Where: - `word1`, `word2`, etc., are the lexemes - `1`, `3`, etc. are integers indicating the position of the word in the document - positions can sometimes be followed by a letter to indicate a weight ('A', 'B', 'C' or 'D'), like `2A`. The default weight is 'D'. For example: - `'a':1A 'cat':2 'sat':3 'on':4 'the':5 'mat':6` When a document is cast to `tsvector`, it doesn't perform any normalization and just splits the text into lexemes. To normalize the text, you can use the `to_tsvector` function with a specific text search configuration. For example: ```sql SELECT 'The quick brown fox jumps over the lazy dog.'::tsvector as colA, to_tsvector('english', 'The quick brown fox jumps over the lazy dog.') as colB; ``` This query produces the following output. The function `to_tsvector()` tokenizes the input document and computes the normalized lexemes based on the specified text search configuration (in this case, 'english'). The output is a `tsvector` with the normalized lexemes and their positions. ```text cola | colb ----------------------------------------------------------------+------------------------------------------------------- 'The' 'brown' 'dog.' 'fox' 'jumps' 'lazy' 'over' 'quick' 'the' | 'brown':3 'dog':9 'fox':4 'jump':5 'lazi':8 'quick':2 (1 row) ``` ## Example usage Consider a scenario where we're building a blog platform and want to implement full-text search for articles. We'll use `tsvector` to store the searchable content of each article. The query below creates a table and inserts some sample blog data: ```sql CREATE TABLE blog_posts ( id SERIAL PRIMARY KEY, title TEXT NOT NULL, content TEXT NOT NULL, search_vector tsvector ); INSERT INTO blog_posts (title, content) VALUES ('PostgreSQL Full-Text Search', 'PostgreSQL offers powerful full-text search capabilities using tsvector and tsquery.'), ('Indexing in Databases', 'Proper indexing is crucial for database performance. 
It can significantly speed up query execution.'), ('ACID Properties', 'ACID (Atomicity, Consistency, Isolation, Durability) properties ensure reliable processing of database transactions.'); UPDATE blog_posts SET search_vector = to_tsvector('english', title || ' ' || content); CREATE INDEX idx_search_vector ON blog_posts USING GIN (search_vector); ``` To search for blog posts containing specific words, we can use the match operator `@@`, with a `tsquery` search expression: ```sql SELECT title FROM blog_posts WHERE search_vector @@ to_tsquery('english', 'database & performance'); ``` This query returns the following output: ```text title ----------------------- Indexing in Databases (1 row) ``` ## Other examples ### Use different text search configurations with `tsvector` Postgres supports text search configurations for multiple languages. Here's an example using the 'spanish' configuration: ```sql CREATE TABLE product_reviews ( id SERIAL PRIMARY KEY, product_name TEXT NOT NULL, review TEXT NOT NULL, search_vector tsvector ); INSERT INTO product_reviews (product_name, review) VALUES ('Laptop XYZ', 'Este laptop es muy rápido y tiene una excelente batería.'), ('Smartphone ABC', 'La cámara del teléfono es increíble, pero la batería no dura mucho.'), ('Tablet 123', 'La tablet es ligera y fácil de usar, perfecta para leer libros.'); UPDATE product_reviews SET search_vector = to_tsvector('spanish', product_name || ' ' || review); SELECT product_name FROM product_reviews WHERE search_vector @@ to_tsquery('spanish', 'batería & (excelente | dura)'); ``` This query returns the following output: ```text product_name ---------------- Laptop XYZ Smartphone ABC (2 rows) ``` ### Rank the search results from a `tsvector` column We can use the `ts_rank` function to rank search results based on relevance: ```sql CREATE TABLE news_articles ( id SERIAL PRIMARY KEY, headline TEXT NOT NULL, body TEXT NOT NULL, search_vector tsvector ); INSERT INTO news_articles (headline, body) VALUES ('Climate Change Summit Concludes', 'World leaders agreed on new measures to combat global warming at the climate summit.'), ('New Study on Climate Change', 'Scientists publish groundbreaking research on the effects of climate change on biodiversity.'), ('Tech Giant Announces Green Initiative', 'Major tech company pledges to be carbon neutral by 2030 in fight against climate change.'); UPDATE news_articles SET search_vector = to_tsvector('english', headline || ' ' || body); SELECT headline, ts_rank(search_vector, query) AS rank FROM news_articles, to_tsquery('english', 'climate & change') query WHERE search_vector @@ query ORDER BY rank DESC; ``` This query returns the following output: ```text headline | rank ---------------------------------------+------------ New Study on Climate Change | 0.2532141 Climate Change Summit Concludes | 0.10645772 Tech Giant Announces Green Initiative | 0.09910322 (3 rows) ``` All the articles were related to climate change, but the first article was ranked higher due to the higher relevance for the search terms. ## Additional considerations - **Performance**: While `tsvector` enables fast full-text search, creating and updating `tsvector` columns can be computationally expensive. Consider using triggers or background jobs to update `tsvector` columns asynchronously. - **Storage**: `tsvector` columns can significantly increase the size of your database. Monitor your database size and consider using partial indexes if full-text search is only needed for a subset of your data. 
- **Language support**: PostgreSQL supports many languages out of the box, but you may need to install additional dictionaries for some languages. - **Stemming and stop words**: The text search configuration determines how words are stemmed and which words are ignored as stop words. Choose the appropriate configuration for your use case. ## Resources - [PostgreSQL Full Text Search documentation](https://www.postgresql.org/docs/current/textsearch.html) - [PostgreSQL tsvector data type documentation](https://www.postgresql.org/docs/current/datatype-textsearch.html) --- # Source: https://neon.com/llms/data-types-uuid.txt # Postgres UUID data type > The document explains the use and implementation of the UUID data type in PostgreSQL within Neon, detailing how it supports the storage and management of universally unique identifiers in databases. ## Source - [Postgres UUID data type HTML](https://neon.com/docs/data-types/uuid): The original HTML version of this documentation `UUID` stands for `Universally Unique Identifier`. A `UUID` is a 128-bit value used to ensure global uniqueness across tables and databases. In Postgres, the UUID data type is ideal for assigning unique identifiers to entities such as users, orders, or products. They are particularly useful in distributed scenarios, where the system is spread across different databases or services, and unique keys need to be generated independently. ## Storage and syntax UUIDs are stored as 128-bit values, represented as a sequence of hexadecimal digits. They are typically formatted in five groups, of sizes 8, 4, 4, 4 and 12, separated by hyphens. For example: - `123e4567-e89b-12d3-a456-426655440000`, or - `a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11` Postgres accepts UUID values in the above format, while also allowing uppercase letters and missing hyphen separators. You can also generate them using functions like `gen_random_uuid()` which is available natively in Postgres, or the `uuid_generate_v4()` function which requires the `uuid-ossp` extension. ## Example usage Consider a scenario where we track user sessions in a web application. UUIDs are commonly used to identify sessions due to their uniqueness. The query below creates a table and inserts some sample session data: ```sql CREATE TABLE sessions ( session_id UUID PRIMARY KEY DEFAULT gen_random_uuid(), user_id INT, activity TEXT ); INSERT INTO sessions (user_id, activity) VALUES (1, 'login'), (2, 'view'), (1, 'view'), (1, 'logout'), (3, 'write') RETURNING *; ``` This query returns the following: ```text | session_id | user_id | activity | |----------------------------------------|---------|----------| | b148aab2-5a03-4d96-a119-c32fc8a4bfaa | 1 | login | | 72be2042-0072-4858-b090-cb27c31e44b1 | 2 | view | | e817b187-aba3-4b0d-a34e-a1d82319627c | 1 | view | | a940a06a-a8d4-4e90-a90c-d8fa096e620f | 1 | logout | | df56fbf8-1fcd-408a-a1c6-4e18e35b8349 | 3 | write | ``` To retrieve a specific session, we can query by its UUID: ```sql SELECT * FROM sessions WHERE session_id = 'e817b187-aba3-4b0d-a34e-a1d82319627c'; ``` This query returns the following: ```text | session_id | user_id | activity | |----------------------------------------|---------|----------| | e817b187-aba3-4b0d-a34e-a1d82319627c | 1 | view | ``` ## Other examples ### Using UUID column as primary key Using UUIDs as primary keys is common since the likelihood of the same UUID value being generated twice is very small. This is helpful in distributed systems or when merging data from different sources. 
For example, we can create a table to store products and use a UUID column as the primary key. ```sql CREATE TABLE products ( product_id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name TEXT NOT NULL, price NUMERIC ); INSERT INTO products (name, price) VALUES ('Apple', 1.99), ('Banana', 2.99), ('Orange', 3.99) RETURNING *; ``` This query returns the following: ```text | product_id | name | price | |----------------------------------------|--------|-------| | ce3b39d8-1bae-4ed3-b4db-2a74658f0d85 | Apple | 1.99 | | 14c18af1-a352-45e6-976e-3c194bdc6ee8 | Banana | 2.99 | | f303866d-d08a-48a7-81c3-c30486149d87 | Orange | 3.99 | ``` ### Avoiding data leakage In systems where data security is a concern, using non-sequential IDs like UUIDs can help obscure the total number of records, preventing potential information leaks. This is in contrast to the sequential IDs provided by the `SERIAL` data type, which can inadvertently reveal information about the number of users, orders, etc. For example, the query below creates a table that tracks users of an API with some sample data: ```sql CREATE TABLE api_users ( serial_id SERIAL PRIMARY KEY, uuid_id UUID DEFAULT gen_random_uuid(), username TEXT NOT NULL ); INSERT INTO api_users (username) VALUES ('user1'), ('user2'), ('user3') RETURNING *; ``` This query returns the following: ```text | serial_id | uuid_id | username | |-----------|--------------------------------------|----------| | 1 | e5836695-f2d0-47f4-86e8-d0dbaae4031a | user1 | | 2 | d22ec671-806a-4db2-8c60-f0f8754f9b7b | user2 | | 3 | 108eb93a-071e-4407-8b78-a73aabd9e803 | user3 | ``` Notice that the `serial_id` column hints at the number of rows already present in the table. ## Additional considerations - **Randomness and uniqueness**: UUIDs are designed to be globally unique, but there's an extremely small probability of generating a duplicate UUID. If a duplicate UUID is generated for a column with a uniqueness constraint, the insertion will fail, so applications that generate UUIDs should implement a retry mechanism for this rare event. - **Performance and indexing**: UUIDs are larger than traditional integer IDs, requiring more storage space. Index structures on UUID columns therefore consume more storage as well. For read-heavy workloads that filter or sort on UUID columns, indexing them still significantly improves query performance, so you have to weigh the additional storage cost against the query speed gained. - **Readability**: UUIDs are not human-readable, which can make debugging or manual inspection of data more challenging. ## Resources - [PostgreSQL UUID Type documentation](https://www.postgresql.org/docs/current/datatype-uuid.html) --- # Source: https://neon.com/llms/extensions-btree_gin.txt # The btree_gin extension > The document details the btree_gin extension for Neon, explaining its functionality in enabling GIN index support for B-tree indexable data types, enhancing query performance and indexing capabilities. ## Source - [The btree_gin extension HTML](https://neon.com/docs/extensions/btree_gin): The original HTML version of this documentation The `btree_gin` extension for Postgres provides a specialized set of **GIN operator classes** that allow common, "B-tree-like" data types to be included in **GIN indexes**.
This is particularly useful for scenarios where you need to create **multicolumn GIN indexes** that combine complex data types (like arrays or JSONB) with simpler types such as integers, timestamps, or text. Ultimately, `btree_gin` helps you leverage the power of GIN for a broader range of indexing needs, optimizing queries across diverse data structures. Consider a scenario where an application needs to query blog posts based on a set of `tags` (an array) and a `publication_date` (a timestamp). The `btree_gin` extension allows for a single, optimized index to service both conditions, potentially offering significant performance gains over alternative indexing strategies. ## Enable the `btree_gin` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS btree_gin; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## `btree_gin`: Bridging index types A common challenge arises when queries require filtering on both B-tree friendly columns (e.g., `status TEXT`, `created_at TIMESTAMP`) and GIN-friendly columns (e.g., `attributes JSONB`, `tags TEXT[]`). While Postgres can use separate B-tree and GIN indexes and combine their results, this is not always the most performant approach. The `btree_gin` extension addresses this by providing GIN **operator classes** for many standard B-tree-indexable data types. These operator classes instruct the GIN indexing mechanism on how to handle these scalar types as if they were native GIN-indexable items. For instance, with `btree_gin`, a single GIN index can be defined on `(order_date TIMESTAMP, product_tags TEXT[])`. ```sql -- Create the table CREATE TABLE orders ( order_id SERIAL PRIMARY KEY, order_date TIMESTAMP, product_tags TEXT[] ); CREATE INDEX idx_orders_date_tags ON orders USING GIN (order_date, product_tags); ``` This composite index can then be leveraged by Postgres to optimize queries filtering on both `order_date` and `product_tags` simultaneously, such as: ```sql SELECT * FROM orders WHERE order_date >= '2025-04-01' AND order_date < '2025-05-01' AND product_tags @> ARRAY['electronics']; ``` Without `btree_gin`, `order_date` could not be directly included in a GIN index in this manner. ## Usage scenarios Let's explore some practical examples of how `btree_gin` can be applied to real-world scenarios, particularly in the context of filtering and querying data efficiently. ### Filtering posts by tags and publication date Consider a `posts` table where queries frequently target posts with specific tags published within a defined timeframe. 
#### Table schema ```sql CREATE TABLE posts ( post_id SERIAL PRIMARY KEY, title TEXT, content TEXT, tags TEXT[], -- GIN-friendly array published_at TIMESTAMPTZ -- B-tree friendly timestamp ); INSERT INTO posts (title, tags, published_at) VALUES ('Postgres Performance Tuning', '{"postgres", "performance", "database"}', '2025-03-15 10:30:00Z'), ('Advanced Indexing Strategies', '{"sql", "indexes", "optimization"}', '2025-04-02 14:00:00Z'), ('Working with JSONB in Postgres', '{"postgres", "jsonb", "nosql"}', '2025-04-20 09:15:00Z'); ``` #### `btree_gin` index creation A composite GIN index is created to cover both `tags` and `published_at`. ```sql CREATE INDEX idx_posts_tags_published ON posts USING GIN (tags, published_at); ``` #### Example query Retrieve posts tagged 'postgres' published in April 2025. ```sql SELECT title, tags, published_at FROM posts WHERE tags @> '{"postgres"}' AND published_at >= '2025-04-01 00:00:00Z' AND published_at < '2025-05-01 00:00:00Z'; ``` The `idx_posts_tags_published` index enables Postgres to efficiently process both the array containment (`@>`) and timestamp range conditions. ### E-commerce product filtering by attributes and price In an e-commerce context, users often filter products based on dynamic attributes (e.g., stored in `JSONB`) and price ranges. #### Table schema ```sql CREATE TABLE products ( product_id SERIAL PRIMARY KEY, name TEXT, attributes JSONB, -- GIN-friendly JSONB (e.g., {"color": "red", "material": "cotton"}) price NUMERIC(10, 2) -- B-tree friendly numeric ); INSERT INTO products (name, attributes, price) VALUES ('Men''s Cotton T-Shirt', '{"color": "blue", "size": "M", "material": "cotton"}', 29.99), ('Women''s Wool Sweater', '{"color": "red", "size": "S", "material": "wool"}', 89.50), ('Unisex Denim Jeans', '{"color": "black", "size": "32/30", "material": "denim"}', 59.95); ``` #### `btree_gin` index creation ```sql CREATE INDEX idx_products_attributes_price ON products USING GIN (attributes, price); ``` #### Example query Find products made of "cotton" with a price below $50. ```sql SELECT name, attributes, price FROM products WHERE attributes @> '{"material": "cotton"}' AND price < 50.00; ``` The `idx_products_attributes_price` index facilitates efficient resolution of both the JSONB containment check and the numeric inequality. ## Important considerations and Best practices - **Write performance impact:** GIN indexes, due to their structure, generally incur a higher cost for `INSERT`, `UPDATE`, and `DELETE` operations compared to B-tree indexes. This should be a consideration for write-intensive workloads. - **Index storage size:** GIN indexes can be larger on disk than their B-tree counterparts for equivalent data. - **Query selectivity:** The benefits of `btree_gin` are most pronounced when queries filter on multiple columns included in the index, and the combined predicate is reasonably selective. - **Dedicated B-tree indexes:** For queries filtering _solely_ on a B-tree-indexable column, a dedicated B-tree index on that column typically offers superior performance. `btree_gin` is primarily for _combined_ criteria. ## Conclusion The `btree_gin` extension provides a valuable mechanism for optimizing complex queries in Postgres that involve filters across both GIN-indexable and B-tree-indexable column types. By enabling the creation of unified multi-column GIN indexes, `btree_gin` can lead to more efficient query plans, reduced execution times, and a simplified indexing landscape for specific workloads. 
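To verify that the composite index is actually used, inspect the query plan with `EXPLAIN`. Below is a minimal sketch against the `posts` table from the first scenario above; the exact plan depends on table size and statistics, so the output shown in the comments is illustrative rather than guaranteed.

```sql
EXPLAIN (COSTS OFF)
SELECT title
FROM posts
WHERE tags @> '{"postgres"}'
  AND published_at >= '2025-04-01 00:00:00Z'
  AND published_at < '2025-05-01 00:00:00Z';
-- With enough rows, the plan should include a Bitmap Index Scan on
-- idx_posts_tags_published covering both conditions; on very small
-- tables the planner may simply fall back to a sequential scan.
```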
## Resources - [PostgreSQL `btree_gin` documentation](https://www.postgresql.org/docs/current/btree-gin.html) - [PostgreSQL Indexes](https://neon.com/postgresql/postgresql-indexes) - [PostgreSQL Index Types](https://neon.com/postgresql/postgresql-indexes/postgresql-index-types) --- # Source: https://neon.com/llms/extensions-btree_gist.txt # The btree_gist extension > The document details the btree_gist extension for Neon, enabling support for B-tree indexing within GiST, facilitating complex queries and indexing capabilities. ## Source - [The btree_gist extension HTML](https://neon.com/docs/extensions/btree_gist): The original HTML version of this documentation The `btree_gist` extension for Postgres provides a specialized set of **GiST operator classes**. These allow common, "B-tree-like" data types (such as integers, text, or timestamps) to be included in **GiST (Generalized Search Tree) indexes**. This is especially useful when you need to create **multicolumn GiST indexes** that combine GiST-native types (like geometric data or range types) with these simpler B-tree types. `btree_gist` also plays a key role in defining **exclusion constraints** involving standard data types. For example, if an application needs to query for events happening within a specific geographic area (a `geometry` type) _and_ within a certain `event_time` (a timestamp), `btree_gist` allows a single, optimized GiST index to cover both conditions. ## Enable the `btree_gist` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS btree_gist; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## `btree_gist`: Combining index strengths When working with geospatial data or range types, GiST indexes are often the go-to choice due to their ability to efficiently handle complex data structures. However, many applications also rely on standard B-tree-friendly columns for filtering and sorting. Queries frequently need to filter on both GiST-friendly columns (e.g., `location GEOMETRY`, `booking_period TSTZRANGE`) and B-tree-friendly columns (e.g., `status TEXT`, `created_at TIMESTAMPTZ`, `item_id INTEGER`). While Postgres can use separate indexes, a combined index can be more efficient. The `btree_gist` extension facilitates this by providing GiST **operator classes** for many standard B-tree-indexable data types. These operator classes tell the GiST indexing mechanism how to handle these scalar types within its framework. For instance, with `btree_gist` (and often `postgis` for geometry types), a single GiST index can be defined on `(event_location GEOMETRY, event_timestamp TIMESTAMPTZ)`.
**Example:** ```sql -- Ensure postgis extension is enabled CREATE EXTENSION IF NOT EXISTS postgis; -- For GEOMETRY type -- Create the table CREATE TABLE scheduled_events ( event_id SERIAL PRIMARY KEY, event_location GEOMETRY(Point, 4326), -- A GiST-friendly type event_timestamp TIMESTAMPTZ -- A B-tree-friendly type ); CREATE INDEX idx_events_location_time ON scheduled_events USING GIST (event_location, event_timestamp); ``` This composite index can then be used by Postgres to optimize queries filtering on both `event_location` and `event_timestamp` simultaneously: ```sql SELECT * FROM scheduled_events WHERE ST_DWithin(event_location, ST_SetSRID(ST_MakePoint(-73.985, 40.758), 4326)::geography, 1000) -- Within 1km AND event_timestamp >= '2025-03-01 00:00:00Z' AND event_timestamp < '2025-04-01 00:00:00Z'; ``` Without `btree_gist`, `event_timestamp` could not be directly included in the GiST index alongside `event_location` in this straightforward manner. ## Usage scenarios Let's explore practical examples where `btree_gist` is beneficial. ### Filtering events by location and time Consider a `map_events` table where queries often search for events in a specific geographical bounding box and within a particular date range. #### Table schema ```sql -- Ensure PostGIS is enabled -- CREATE EXTENSION IF NOT EXISTS postgis; CREATE TABLE map_events ( id SERIAL PRIMARY KEY, name TEXT, geom GEOMETRY(Point, 4326), -- GiST-friendly spatial data event_date DATE -- B-tree friendly date ); INSERT INTO map_events (name, geom, event_date) VALUES ('Music Festival', ST_SetSRID(ST_MakePoint(-0.1276, 51.5074), 4326), '2025-02-20'), ('Art Exhibition', ST_SetSRID(ST_MakePoint(-0.1200, 51.5000), 4326), '2025-02-22'), ('Tech Conference', ST_SetSRID(ST_MakePoint(2.3522, 48.8566), 4326), '2025-03-05'); ``` #### `btree_gist` index creation A composite GiST index covers both `geom` and `event_date`. ```sql CREATE INDEX idx_map_events_geom_date ON map_events USING GIST (geom, event_date); ``` #### Example query Find events in London (approximated by a bounding box) occurring in February 2025: ```sql SELECT name, event_date FROM map_events WHERE geom && ST_MakeEnvelope(-0.5, 51.25, 0.3, 51.7, 4326) -- Approximate bounding box for London AND event_date >= '2025-02-01' AND event_date < '2025-03-01'; ``` The `idx_map_events_geom_date` index allows Postgres to efficiently process both the spatial overlap (`&&`) and the date range conditions. ### Enforcing exclusion constraints for room bookings `btree_gist` is essential for creating exclusion constraints that involve B-tree types alongside GiST-native types like ranges. This is particularly useful in scenarios like room bookings, where you want to ensure that no two bookings overlap for the same room. #### Table schema ```sql CREATE TABLE room_bookings ( booking_id SERIAL PRIMARY KEY, room_id INTEGER, -- B-tree friendly integer booking_period TSTZRANGE -- GiST-friendly range type ); ``` #### `btree_gist` index creation for exclusion constraint The exclusion constraint uses a GiST index. `room_id WITH =` will use `btree_gist`. ```sql ALTER TABLE room_bookings ADD CONSTRAINT no_overlapping_bookings EXCLUDE USING GIST (room_id WITH =, booking_period WITH &&); ``` The `WITH =` operator for `room_id` leverages `btree_gist`, and `WITH &&` (overlap) is native to range types with GiST. 
#### Example operations ```sql -- Successful booking INSERT INTO room_bookings (room_id, booking_period) VALUES (101, '[2025-04-10 14:00, 2025-04-10 16:00)'); -- Attempting to book the same room for an overlapping period INSERT INTO room_bookings (room_id, booking_period) VALUES (101, '[2025-04-10 15:00, 2025-04-10 17:00)'); -- This will fail: ERROR: conflicting key value violates exclusion constraint "no_overlapping_bookings" -- Booking a different room for an overlapping period is fine INSERT INTO room_bookings (room_id, booking_period) VALUES (102, '[2025-04-10 15:00, 2025-04-10 17:00)'); ``` ## Important considerations and Best practices - **Use case specificity:** `btree_gist` is not a general replacement for B-tree indexes. It excels when combining B-tree types with GiST-specific types/features in one index or for exclusion constraints. - **Performance:** For queries filtering _solely_ on a B-tree-indexable column (e.g., `WHERE status = 'active'`), a dedicated B-tree index is typically faster and more space-efficient. - **Index size and write overhead:** GiST indexes can be larger and have slightly higher write overhead (for `INSERT`/`UPDATE`/`DELETE`) than B-tree indexes. ## Conclusion The `btree_gist` extension provides a vital bridge, allowing standard B-tree-indexable data types to be included in GiST indexes. This facilitates efficient multi-column queries across diverse data types (e.g., spatial and temporal) and enables the creation of sophisticated exclusion constraints. ## Resources - [PostgreSQL `btree_gist` documentation](https://www.postgresql.org/docs/current/btree-gist.html) - [PostgreSQL Indexes](https://neon.com/postgresql/postgresql-indexes) - [How and when to use btree_gist](https://neon.com/blog/btree_gist) - [PostgreSQL Index Types](https://neon.com/postgresql/postgresql-indexes/postgresql-index-types) - [`postgis` extension](https://neon.com/docs/extensions/postgis) --- # Source: https://neon.com/llms/extensions-citext.txt # The citext Extension > The document details the citext extension for Neon, which enables case-insensitive text handling in PostgreSQL databases by treating text values as case-insensitive for comparison and sorting operations. ## Source - [The citext Extension HTML](https://neon.com/docs/extensions/citext): The original HTML version of this documentation The `citext` extension in Postgres provides a case-insensitive data type for text. This is particularly useful in scenarios where the case of text data should not affect queries, such as usernames or email addresses, or any form of textual data where case-insensitivity is desired. This guide covers the `citext` extension — its setup, usage, and practical examples in Postgres. For datasets where consistent text formatting isn't guaranteed, case-insensitive queries can streamline operations. **Note**: The `citext` extension is an open-source module for Postgres. It can be easily installed and used in any Postgres database. This guide provides steps for installation and usage, with further details available in the [Postgres Documentation](https://postgresql.org/docs/current/citext.html). ## Enable the `citext` extension You can enable `citext` by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon. ```sql CREATE EXTENSION IF NOT EXISTS citext; ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). 
For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). ## Example usage **Creating a table with citext** Consider a user registration system where the user's email should be unique, regardless of case. ```sql CREATE TABLE users ( id SERIAL PRIMARY KEY, username VARCHAR(255) UNIQUE, email CITEXT UNIQUE ); ``` In this table, the `email` field is of type `citext`, ensuring that email addresses are treated case-insensitively. **Inserting data** Insert data as you would normally. The `citext` type automatically handles case-insensitivity. ```sql INSERT INTO users (username, email) VALUES ('johnsmith', 'JohnSmith@email.com'), ('AliceSmith', 'ALICE@example.com'), ('BobJohnson', 'Bob@example.com'), ('EveAnderson', 'eve@example.com'); ``` **Case-insensitive querying** Queries against `citext` columns are inherently case-insensitive. Effectively, Postgres compares the two values as if `lower()` had been applied to each string. ```sql SELECT * FROM users WHERE email = 'johnsmith@email.com'; ``` This query returns the following: ```text | id | username | email | |----|------------|------------------------| | 1 | johnsmith | JohnSmith@email.com | ``` The email address matched even though the case was different. ## More examples **Using citext with regex functions** The `citext` extension can be used with regular expressions and other string-matching functions, which perform string matching in a case-insensitive manner. For example, the query below finds users whose email addresses start with 'AL'. ```sql SELECT * FROM users WHERE regexp_match(email, '^AL', 'i') IS NOT NULL; ``` This query returns the following: ```text | id | username | email | |----|-------------|--------------------| | 2 | AliceSmith | ALICE@example.com | ``` **Using citext data as TEXT** If you do want case-sensitive behavior, you can cast `citext` data to `text` and use it as shown here: ```sql SELECT * FROM users WHERE email::text LIKE '%EVE%'; ``` This query will only return results if it finds a user with an email address containing 'EVE'. ## Benefits of using citext - **Query simplicity**: No need for functions like `lower()` or `upper()` to perform case-insensitive comparisons. - **Data integrity**: Helps maintain data consistency, especially in user input scenarios. ## Performance considerations ### Indexing with citext Indexing `citext` fields is similar to indexing regular text fields. However, it's important to note that the index will be case-insensitive. ```sql CREATE INDEX idx_email ON users USING btree(email); ``` This index will improve the performance of queries involving the `email` field. Depending on whether the more frequent use case is case-sensitive or case-insensitive, you can choose to index the `citext` field or cast it to `text` and index that. ### Comparison with `lower()` function `citext` internally performs an operation similar to `lower()` on both sides of the comparison, so there is no significant performance difference between the two approaches. However, using `citext` ensures consistent case-insensitive behavior across queries without the need for repeatedly applying the `lower()` function, which makes errors less likely. ## Conclusion The `citext` extension helps manage case-insensitivity in text data within Postgres. It simplifies queries and ensures consistency in data handling. This guide provides an overview of using `citext`, including creating and querying case-insensitive fields.
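Because the `email` column in the `users` table above is declared `CITEXT UNIQUE`, uniqueness is also enforced case-insensitively. A quick sketch (the constraint name shown follows Postgres's default naming and may differ in your schema):

```sql
-- 'JOHNSMITH@EMAIL.COM' differs from the stored 'JohnSmith@email.com'
-- only by case, so the case-insensitive unique index rejects it.
INSERT INTO users (username, email)
VALUES ('johnsmith2', 'JOHNSMITH@EMAIL.COM');
-- ERROR:  duplicate key value violates unique constraint "users_email_key"
```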
## Resources - [PostgreSQL citext documentation](https://www.postgresql.org/docs/current/citext.html) --- # Source: https://neon.com/llms/extensions-cube.txt # The cube extension > The document details the cube extension for Neon, explaining its functionality for multidimensional data storage and querying within the Neon database environment. ## Source - [The cube extension HTML](https://neon.com/docs/extensions/cube): The original HTML version of this documentation The `cube` extension for Postgres provides a specialized data type for representing multidimensional "cubes", which are, more generally, n-dimensional boxes or points. This makes it useful for applications dealing with multidimensional data, such as geographic information systems (GIS) storing coordinates (latitude, longitude, altitude), business intelligence (BI) applications analyzing data across various dimensions, or scientific computing tasks involving vector operations. The `cube` extension allows you to define points and hyperrectangles in n-dimensional space and perform various operations like distance calculations, containment checks, and overlap detection. ## Enable the `cube` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS cube; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## Understanding `cube` data The `cube` extension primarily revolves around the `cube` data type and a set of operators and functions to work with it. The `cube` data type can represent both n-dimensional points (cubes with zero volume) and n-dimensional cubes (defined by two opposite corners). ### Multidimensional points A point is a cube where the "lower-left" and "upper-right" corners are identical. Syntax: - `cube(ARRAY[x1, x2, ..., xn])` - `'(x1, x2, ..., xn)'::cube` ```sql -- A 3-dimensional point SELECT cube(ARRAY[1.0, 2.5, 3.0]) AS point_from_array; -- Result: (1,2.5,3) SELECT '(1.0, 2.5, 3.0)'::cube AS point_from_string; -- Result: (1,2.5,3) ``` ### Multidimensional cubes (ranges/boxes) A cube is defined by two diagonally opposite corner points. The order of corners doesn't matter on input; `cube` internally stores them in a canonical "lower-left" to "upper-right" form. Syntax: - `cube(ARRAY[ll_x1, ..., ll_xn], ARRAY[ur_x1, ..., ur_xn])` - `'(ll_x1, ..., ll_xn), (ur_x1, ..., ur_xn)'::cube` ```sql -- A 2-dimensional cube (a rectangle) SELECT cube(ARRAY[1.0, 1.0], ARRAY[5.0, 5.0]) AS cube_from_arrays; -- Result: (1,1),(5,5) SELECT ' (1.0, 1.0), (5.0, 5.0) '::cube AS cube_from_string; -- Whitespace is ignored -- Result: (1,1),(5,5) -- A 3-dimensional cube SELECT cube(ARRAY[0,0,0], ARRAY[1,1,1]) AS unit_cube_3d; -- Result: (0,0,0),(1,1,1) ``` Cube values are stored internally as 64-bit floating-point numbers. ## Example usage Let's consider a table to store information about various items, including their spatial bounding boxes or specific point locations.
### Creating a table with a `cube` column ```sql CREATE TABLE items ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, location_or_bounds CUBE ); ``` ### Inserting `cube` data ```sql INSERT INTO items (name, location_or_bounds) VALUES ('Sensor A', '(10.0, 20.5, 5.0)'), -- A 3D point location ('Warehouse Zone 1', '(0,0),(50,100)'), -- A 2D rectangular area ('Temperature Range', '(-10.0), (40.0)'), -- A 1D cube/interval ('Sensor B', '(-10.0, 40.0)'), -- A 2D point ('Pressure Sensor', cube(array[15.2, 30.1, 2.3])), -- Another 3D point ('Shipping Box X', cube(array[0,0,0], array[1,2,1.5])); -- A 3D box ``` ### Querying `cube` data The `cube` extension provides a rich set of operators and functions for querying. **Direct creation in SELECT statements** You can create `cube` values on the fly: ```sql SELECT cube(array[1,2,3]) AS point_3d, cube(0,10) AS interval_1d; ``` Output: ```text | point_3d | interval_1d | |-------------|---------------| | (1, 2, 3) | (0),(10) | ``` **Containment and overlap operators** - `@>`: Contains (does the left cube contain the right cube?) - `<@`: Is contained by (is the left cube contained by the right cube?) - `&&`: Overlaps (do the two cubes have any common points?) ```sql -- Find items within a specific 2D region '(5,5),(60,120)' SELECT name, location_or_bounds FROM items WHERE location_or_bounds <@ '(5,5),(60,120)'; -- No rows returned (as no item is fully contained in this region) -- Find 2D/3D regions that contain the point (12.0, 25.0) SELECT name, location_or_bounds FROM items WHERE location_or_bounds @> '(12.0, 25.0)'; -- Warehouse Zone 1 | (0,0),(50,100) -- Find items whose bounds overlap with the 3D cube '(0,0,0),(10,10,10)' SELECT name, location_or_bounds FROM items WHERE location_or_bounds && '(0,0,0),(10,10,10)'; -- Warehouse Zone 1 | (0,0),(50,100) -- Temperature Range | (-10),(40) -- Shipping Box X | (0,0,0),(1,2,1.5) ``` > Notice that 'Sensor B' is excluded from the last query's results, while 'Temperature Range' (a 1D interval) is included. When cubes of different dimensions are compared, the lower-dimensional cube is assumed to have zeroes in the missing coordinates. 'Sensor B' (-10, 40) fails on its first coordinate (-10 lies outside 0 to 10), while 'Temperature Range', treated as (-10,0,0),(40,0,0), does intersect the query cube. **Distance operators** The `cube` extension provides several distance metrics: - `<->`: Euclidean distance - `<#>`: Taxicab (Manhattan or L-1) distance - `<=>`: Chebyshev (L-infinity or maximum coordinate) distance **Note** Distance metrics: | Distance | Description | |----------|-------------| | Euclidean | The straight-line distance between two points in n-dimensional space. | | Taxicab | The sum of the absolute differences of their coordinates. This is the distance a taxi would travel on a grid-like street layout. | | Chebyshev | The maximum absolute difference between the coordinates of the two points. This is useful in chess-like movements where diagonal moves are allowed. | The `cube_distance(cube1, cube2)` function is equivalent to the `<->` operator. ```sql SELECT cube_distance('(0,0)'::cube, '(3,4)'::cube) AS euclidean_dist; -- Output: 5 (sqrt(3^2 + 4^2)) SELECT '(0,0)'::cube <-> '(3,4)'::cube AS euclidean_dist; -- Output: 5 (sqrt(3^2 + 4^2)) SELECT '(0,0,0)'::cube <#> '(1,2,3)'::cube AS taxicab_dist; -- Output: 6 (1+2+3) SELECT '(0,0)'::cube <=> '(3,-4)'::cube AS chebyshev_dist; -- Output: 4 (max(|3-0|, |-4-0|)) ``` **Note** Cube creation (string vs. function): Be mindful of how you create `cube` values, as it impacts their dimensionality: * `'(x,y)'::cube` (string casting) creates a **2D point** `(x,y)`.
* `cube(x,y)` (function call) creates a **1D interval** from `x` to `y`, effectively `(x),(y)`. This difference will affect functions like `cube_distance`. For example: * `cube_distance('(0,0)'::cube, '(3,4)'::cube)` is 5 (distance between 2D points). * `cube_distance(cube(0,0), cube(3,4))` is 3 (distance between 1D point `(0)` and 1D interval `(3),(4)`). * `cube(0,0)` is a 1D point as it has both lower and upper bounds at 0. To create an n-dimensional point using the `cube()` function, pass an array: `cube(array[x,y,...])`. **Coordinate extraction operators** - `-> integer`: Extracts the N-th coordinate of a point. Returns `NULL` if the cube is not a point or has fewer than N dimensions. - `~> integer`: Extracts coordinate from a cube's representation. - `N = 2*k - 1`: Lower bound of the k-th dimension. - `N = 2*k`: Upper bound of the k-th dimension. ```sql -- Extract coordinates from Sensor A's location (a point) SELECT location_or_bounds -> 1 AS x, location_or_bounds -> 2 AS y, location_or_bounds -> 3 AS z FROM items WHERE name = 'Sensor A'; -- x | y | z -- 10 | 20.5 | 5 -- Extract bounds of Warehouse Zone 1 (a 2D cube) -- x_low (dim 1, lower): ~> 1 -- x_high (dim 1, upper): ~> 2 -- y_low (dim 2, lower): ~> 3 -- y_high (dim 2, upper): ~> 4 SELECT location_or_bounds ~> 1 AS x_low, location_or_bounds ~> 2 AS x_high, location_or_bounds ~> 3 AS y_low, location_or_bounds ~> 4 AS y_high FROM items WHERE name = 'Warehouse Zone 1'; -- x_low | x_high | y_low | y_high -- 0 | 50 | 0 | 100 ``` ## Functions and operators ### Utility functions - `cube_dim(cube)`: Returns the number of dimensions of the cube. - `cube_is_point(cube)`: Returns `true` if the cube is a point (zero volume), `false` otherwise. ```sql SELECT cube_dim('(1,2,3)'::cube); -- Result: 3 SELECT cube_is_point('(1,2,3)'::cube); -- Result: true SELECT cube_is_point('(1,2)'::cube); -- Result: true SELECT cube_is_point(cube(1,2)); -- Result: false (cube function creates a 1D interval) SELECT cube_dim(cube(1,2)); -- Result: 1 (1D interval) SELECT cube_is_point(cube(ARRAY[1,2])); -- Result: true (array creates a 2D point) SELECT cube_dim(cube(ARRAY[1,2])); -- Result: 2 (2D point) SELECT cube_is_point('(1),(2)'::cube); -- Result: false ``` ### Coordinate functions - `cube_ll_coord(cube, N)`: Returns the N-th coordinate of the lower-left corner. - `cube_ur_coord(cube, N)`: Returns the N-th coordinate of the upper-right corner. ```sql -- Get y-coordinate of lower-left corner for Warehouse Zone 1 SELECT cube_ll_coord(location_or_bounds, 2) AS y_ll FROM items WHERE name = 'Warehouse Zone 1'; -- Result: 0 -- Get x-coordinate of upper-right corner for Shipping Box X SELECT cube_ur_coord(location_or_bounds, 1) AS x_ur FROM items WHERE name = 'Shipping Box X'; -- Result: 1 ``` ### Union and Intersection - `cube_union(cube1, cube2)`: Returns the smallest cube enclosing both input cubes. - `cube_inter(cube1, cube2)`: Returns the intersection of two cubes. Returns `NULL` if they don't intersect. ```sql SELECT cube_union('(0,0),(2,2)', '(1,1),(3,3)') AS union_result; -- Output: (0,0),(3,3) SELECT cube_inter('(0,0),(2,2)', '(1,1),(3,3)') AS intersection_result; -- Output: (1,1),(2,2) ``` ### Enlarging cubes `cube_enlarge(c_in cube, r double precision, n_dims integer)`: Enlarges (or shrinks if `r` is negative) the input cube `c_in` by radius `r` in its first `n_dims` dimensions. If `n_dims` is greater than `c_in`'s dimensions and `r > 0`, new dimensions are added with `(-r, r)` ranges. 
```sql -- Enlarge a 2D point (0,0) by radius 1 in 2 dimensions SELECT cube_enlarge('(0,0)', 1.0, 2); -- Output: (-1,-1),(1,1) SELECT cube_enlarge('(0,0)', 1.0, 3); -- Output: (-1,-1,-1),(1,1,1) -- Enlarge a 1D cube (0),(2) by 0.5, extending to 2 dimensions SELECT cube_enlarge('(0),(2)'::cube, 0.5, 2); -- Output: (-0.5,-0.5),(2.5,0.5) ``` ### Creating cubes from subsets of dimensions `cube_subset(target_cube cube, dim_indices integer[])`: Creates a new cube using only the dimensions specified by `dim_indices` from the `target_cube`. ```sql SELECT cube_subset('(1,2,3),(4,5,6)', ARRAY[1,3]) AS subset_cube; -- Output: (1,3),(4,6) (extracts 1st and 3rd dimensions) SELECT cube_subset('(1,2,3),(4,5,6)', ARRAY[3]) AS subset_cube; -- Output: (3),(6) (extracts only the 3rd dimension) ``` ## Indexing `cube` data For efficient querying of `cube` data, especially on large tables, [GiST indexes](https://neon.com/postgresql/postgresql-indexes/postgresql-index-types#gist-indexes) are highly recommended. They can make queries faster when using operators like `&&`, `@>`, `<@`, and the distance operators. ```sql CREATE INDEX idx_items_location_bounds_gist ON items USING GIST (location_or_bounds); ``` **Nearest neighbor searches** GiST indexes enable efficient nearest neighbor searches using the distance operators in an `ORDER BY` clause: ```sql -- Find the 3 items closest to the point (5,5,5) SELECT name, location_or_bounds, location_or_bounds <-> '(5,5,5)'::cube AS distance FROM items ORDER BY location_or_bounds <-> '(5,5,5)'::cube LIMIT 3; -- Warehouse Zone 1 | (0,0),(50,100) | 5.0 -- Shipping Box X | (0,0,0),(1,2,1.5) | 6.103277807866851 -- Temperature Range | (-10),(40) | 7.0710678118654755 ``` ## Practical applications 1. **Geographic Information systems (GIS)**: - Storing latitude/longitude/altitude points. - Defining bounding boxes for map features. 2. **Business Intelligence (BI) / OLAP**: - Representing data points in a multidimensional space (e.g., sales by `product_category_id`, `region_id`, `time_id`). - Filtering data based on ranges in multiple dimensions. 3. **Scientific computing**: Storing points or regions in n-dimensional parameter spaces for experiments or simulations. 4. **Time-series data with multidimensional attributes**: Storing sensor readings where each reading has multiple values (e.g., temperature, humidity, pressure) at a specific time. **Example:** ```sql CREATE TABLE sensor_log ( ts TIMESTAMPTZ NOT NULL, device_id INT, metrics CUBE -- e.g., (temperature, humidity, pressure) ); INSERT INTO sensor_log (ts, device_id, metrics) VALUES (NOW(), 101, '(22.5, 55.2, 1013.1)'); -- Find logs where temperature (1st dim) was between 20-25 -- and humidity (2nd dim) was between 50-60 SELECT * FROM sensor_log WHERE metrics <@ cube(array[20,50,-1e6], array[25,60,1e6]); -- We keep 3rd dim a large range ``` ## Conclusion The `cube` extension provides a powerful and versatile data type for handling multidimensional data within Postgres. Its specialized operators and functions, combined with GiST indexing, enable efficient storage, querying, and analysis of n-dimensional points and regions. This makes it a valuable tool for a wide range of applications, from GIS to scientific computing and beyond. 
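A related pattern worth knowing: `cube_enlarge` combines with the overlap operator to implement an index-assisted radius search. The sketch below (using the `items` table from earlier) first prunes candidates with a bounding cube that the GiST index can serve, then applies the exact distance check:

```sql
-- Coarse filter: a bounding cube of radius 10 around (5,5,5), which the
-- GiST index can use. Fine filter: the exact Euclidean distance.
SELECT name, location_or_bounds
FROM items
WHERE location_or_bounds && cube_enlarge('(5,5,5)'::cube, 10.0, 3)
  AND location_or_bounds <-> '(5,5,5)'::cube <= 10.0;
```

With the sample data, this returns the same three items as the nearest-neighbor query above, since all of them lie within distance 10 of the query point.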
## Resources - [PostgreSQL `cube` documentation](https://www.postgresql.org/docs/current/cube.html) - Distances: - [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) - [Taxicab/Manhattan geometry](https://en.wikipedia.org/wiki/Taxicab_geometry) - [Chebyshev distance](https://en.wikipedia.org/wiki/Chebyshev_distance) --- # Source: https://neon.com/llms/extensions-dblink.txt # The dblink extension > The document details the dblink extension for Neon, enabling users to connect and execute queries across different PostgreSQL databases from within a Neon database environment. ## Source - [The dblink extension HTML](https://neon.com/docs/extensions/dblink): The original HTML version of this documentation The `dblink` extension provides the ability to connect to other Postgres databases from within your current database. This is invaluable for tasks such as data integration, cross-database querying, and building applications that span multiple database instances. `dblink` allows you to execute queries on these remote databases and retrieve the results directly into your Neon project. This guide will walk you through the fundamentals of using the `dblink` extension in your Neon project. You'll learn how to enable the extension, establish connections to remote Postgres databases, execute queries against them, and retrieve the results. We'll explore different connection methods and discuss important considerations for using `dblink` effectively. **Note**: `dblink` is a core Postgres extension and can be enabled on any Neon project. It allows direct connections to other Postgres databases. For a more structured and potentially more secure way to access data in external data sources (including non-Postgres databases), consider using [Foreign Data Wrappers](https://neon.com/docs/extensions/postgres_fdw). **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## Enable the `dblink` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS dblink; ``` ## Connecting to a remote database The `dblink` extension provides the `dblink_connect` function to establish connections to remote Postgres databases. You can connect by providing the connection details directly in the function call or by using a named connection that you can reference in subsequent queries. The most direct way to connect is by providing a connection string. This string includes all the necessary information to connect to the remote database. ### Named connections To establish a named connection using `dblink_connect`, use the following syntax: ```sql SELECT dblink_connect('my_remote_db', 'host=my_remote_host port=5432 dbname=my_remote_database user=my_remote_user password=my_remote_password sslmode=require channel_binding=require'); ``` In this example: - `'my_remote_db'` is a name you assign to this connection for later use. - The connection string specifies the host, port, database name, user, password, and SSL mode of the remote Postgres instance. **Replace these placeholders with your actual remote database credentials.** Note that in keyword/value connection strings, parameters are separated by spaces. - `sslmode=require` is recommended for security to ensure an encrypted connection.
You should receive a response like: ```text dblink_connect ---------------- OK (1 row) ``` ### Unnamed connections You can also connect without naming the connection. This is useful for one-off queries or when you don't need to reference the connection in subsequent queries. ```sql SELECT dblink_connect('host=my_remote_host port=5432 dbname=my_remote_database user=my_remote_user password=my_remote_password sslmode=require channel_binding=require'); ``` **Tip** Did you know?: Multiple named connections can be open at once, but only one unnamed connection is permitted at a time. The connection will persist until closed or until the database session is ended. ## Executing queries on the remote database Once a connection is established, you can use the `dblink` function to execute queries on the remote database. ### With named connections ```sql SELECT * FROM dblink('my_remote_db', 'SELECT table_name FROM information_schema.tables WHERE table_schema = ''public''') AS remote_tables(table_name TEXT); ``` In this example: - `'my_remote_db'` refers to the connection name established earlier. - `'SELECT table_name FROM information_schema.tables WHERE table_schema = ''public'''` is the SQL query you want to execute on the remote database. - `AS remote_tables(table_name TEXT)` defines the structure of the returned data, specifying the column name (`table_name`) and its data type (`TEXT`). **This is crucial as `dblink` needs to know the expected structure of the results.** You should receive a list of tables from the `public` schema of the remote database. ### With unnamed connections When using an unnamed connection, you can execute queries directly without referencing a named connection. ```sql SELECT * FROM dblink('host=my_remote_host port=5432 dbname=my_remote_database user=my_remote_user password=my_remote_password sslmode=require channel_binding=require', 'SELECT table_name FROM information_schema.tables WHERE table_schema = ''public''') AS remote_tables(table_name TEXT); ``` ## Retrieving data from the remote database The results of the remote query are returned as a set of rows. You can use standard SQL to further process or integrate this data within your Neon database. ```sql SELECT rt.table_name FROM dblink('my_remote_db', 'SELECT table_name FROM information_schema.tables WHERE table_schema = ''public''') AS rt(table_name TEXT) WHERE rt.table_name LIKE 'user%'; ``` This query retrieves the names of tables in the remote database that start with "user". ```sql SELECT * FROM dblink('my_remote_db', 'SELECT id, user_id, task, is_complete, inserted_at FROM todos') AS remote_todos(id INT, user_id TEXT, task TEXT, is_complete BOOLEAN, inserted_at TEXT); ``` This query retrieves the rows from a `todos` table in the remote database. ## Closing connections It's good practice to close connections when you're finished with them to free up resources. Use the `dblink_disconnect` function. ```sql SELECT dblink_disconnect('my_remote_db'); ``` To disconnect from an unnamed connection, you can use the following: ```sql SELECT dblink_disconnect(); ``` ## Using named connections for convenience Naming your connections with `dblink_connect` can simplify your queries, especially if you frequently access the same remote database.
```sql -- Connect with a name SELECT dblink_connect('production_db', 'host=prod_host port=5432 dbname=prod_data user=reporter password=securepass sslmode=require channel_binding=require'); -- Execute queries using the named connection SELECT * FROM dblink('production_db', 'SELECT count(*) FROM orders') AS order_count(count int); -- Disconnect SELECT dblink_disconnect('production_db'); ``` ## Practical examples ### Data synchronization You can use `dblink` to periodically pull data from a remote database into your Neon project for reporting or analysis. ```sql -- Using dblink to insert data from a remote table INSERT INTO local_staging_table (col1, col2) SELECT remote_col1, remote_col2 FROM dblink('remote_db', 'SELECT col1, col2 FROM remote_table') AS rt(remote_col1 INTEGER, remote_col2 TEXT); ``` ### Cross-database reporting Generate reports that combine data from your Neon database and one or more remote Postgres databases. ```sql SELECT l.customer_name, r.order_total FROM customers l JOIN dblink('orders_db', 'SELECT customer_id, sum(amount) AS order_total FROM orders GROUP BY customer_id') AS r(customer_id INTEGER, order_total NUMERIC) ON l.customer_id = r.customer_id; ``` ## Advanced `dblink` functions The `dblink` extension provides additional functions to help manage and interact with remote databases: - **`dblink_get_connections()`:** This function is helpful for monitoring and managing your `dblink` connections. It returns a list of the names of all currently open, named `dblink` connections in the current session. This can be useful for troubleshooting or ensuring connections are being managed correctly. ```sql SELECT * FROM dblink_get_connections(); ``` ```text dblink_get_connections ------------------ {my_remote_db} ``` - **`dblink_error_message(text connname)`:** When working with remote databases, errors can occur. This function allows you to retrieve the last error message associated with a specific named `dblink` connection. This is invaluable for debugging issues that arise during remote queries. - **`dblink_send_query(text connname, text sql)`:** This function sends a query to a named `dblink` connection without waiting for the result. This is useful for executing long-running queries on the remote database without blocking the current session. The return value is 1 if the query was successfully dispatched, or 0 otherwise. - **`dblink_get_result(text connname)`:** This function retrieves the result of a query that was previously sent using `dblink_send_query`. It returns the result set as a set of rows, allowing you to process the data as needed. - **`dblink_cancel_query(text connname)`:** This function tries to cancel the currently executing query on a named `dblink` connection. This can be useful if you need to stop a long-running query that is consuming resources on the remote database. The return value is 'OK' if the query was successfully canceled, or the error message text otherwise. ## Security considerations - **Credentials:** Using `dblink` is inherently less secure than other methods of accessing remote data, as it requires embedding credentials in the connection strings. For this reason, it may be preferable to use Foreign Data Wrappers or other secure methods. - **Network Security:** Ensure that network access is properly configured to allow connections between your Neon project and the remote database server. Firewalls and security groups might need adjustments.
- **`sslmode`:** Always use `sslmode=require channel_binding=require` in your connection strings to encrypt communication and ensure enhanced security against man-in-the-middle attacks. - **Principle of Least Privilege:** Grant only the necessary permissions to the `dblink` connecting user on the remote database. ## Better alternatives: Foreign Data Wrappers While `dblink` provides direct connectivity, Postgres' Foreign Data Wrappers (FDW) offer a more integrated and often more manageable approach for accessing external data. The `postgres_fdw` allows you to define a _foreign server_ and _foreign tables_ that represent tables in the remote database. You can learn more about FDWs in our [postgres_fdw](https://neon.com/docs/extensions/postgres_fdw) guide. ## Conclusion The `dblink` extension provides a powerful mechanism for connecting to and querying remote Postgres databases from your Neon project. Whether you need to perform one-off data pulls or build complex cross-database applications, `dblink` offers the flexibility to execute arbitrary queries on remote instances. Remember to prioritize security when managing connections and credentials. For more structured and potentially more secure access, consider exploring the capabilities of Foreign Data Wrappers. ## Reference - [PostgreSQL `dblink` Documentation](https://www.postgresql.org/docs/current/dblink.html) - [PostgreSQL Foreign Data Wrappers](https://www.postgresql.org/docs/current/postgres-fdw.html) --- # Source: https://neon.com/llms/extensions-dict_int.txt # The dict_int extension > The dict_int extension documentation explains how to use the dict_int extension in Neon to control how integers are tokenized and indexed for full-text search in PostgreSQL databases. ## Source - [The dict_int extension HTML](https://neon.com/docs/extensions/dict_int): The original HTML version of this documentation [Postgres Full-Text Search (FTS)](https://neon.com/postgresql/postgresql-indexes/postgresql-full-text-search) is a powerful tool for searching through textual data. However, when this data includes a significant number of integers like product IDs, serial numbers, or document codes, default FTS behavior can sometimes lead to inefficient indexes and slower search performance. The `dict_int` extension is designed to address this issue by providing a specialized dictionary template that optimizes how integers are tokenized and indexed. This can lead to more compact indexes and faster, more relevant searches. Imagine searching a vast product catalog for "ID 1234567890". Without `dict_int`, FTS might break this number down in various ways, or index the entire long string, potentially creating many unique terms that aren't always useful for searching and can bloat the index. `dict_int` allows you to define rules for how these numbers are processed, ensuring they are handled efficiently. ## Enable the `dict_int` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS dict_int; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.
## Understanding `dict_int`

The `dict_int` dictionary template offers precise control over how integer strings are tokenized for full-text search. This is achieved through three key parameters that you can tailor when creating a dictionary to optimize index size and ensure that searches for numbers are both efficient and relevant to your needs. These settings dictate how numbers are processed before they even make it into the search index.

### `maxlen`

This parameter sets the maximum number of digits an integer token is allowed to have. Its default value when creating a dictionary is `6`.

- When an integer token processed by the dictionary has more digits than `maxlen`, it will either be shortened or discarded entirely, depending on the `rejectlong` setting.
- The primary purpose of `maxlen` is to help prevent extremely long, and often less significant, numerical sequences from consuming valuable index space and potentially slowing down searches.

For example, if `maxlen` is `5` in your custom dictionary and it encounters the number `1234567` (and `rejectlong` is `false`), it will be processed as `12345`.

### `rejectlong`

This parameter determines what happens to an integer token that exceeds the number of digits specified by `maxlen`. The default for `rejectlong` is `false`.

- If `rejectlong` is `false`, integers longer than `maxlen` are truncated. This means only their initial `maxlen` digits are kept and indexed.
- If `rejectlong` is set to `true`, these overlength integers are instead treated as "stop words." This effectively means they are entirely discarded by the dictionary, will not be indexed, and therefore cannot be found via full-text search.

Choosing `rejectlong = true` can be beneficial if very long numbers in your dataset are generally considered noise or are irrelevant to your typical search use cases, as it helps keep the index leaner. Conversely, if the leading portion of a long number is still important for searching, keeping `rejectlong = false` is the appropriate choice.

### `absval`

The `absval` parameter controls the handling of leading positive (`+`) or negative (`-`) signs in integer tokens. By default, `absval` is `false`.

- When `absval` is `false`, any leading signs are typically preserved as part of the token.
- If `absval` is set to `true`, any leading `+` or `-` signs are stripped from the integer _before_ the `maxlen` logic is applied.

For example, if `absval` is `true` in your custom dictionary, both `-12345` and `+12345` would be normalized to `12345`. This feature is very useful when the sign of a number isn't relevant for your search criteria, allowing, for instance, a search for `ID 789` to successfully match entries like `ID: -789` or `REF: +789` without needing to account for the sign explicitly in the search query.

## Using `dict_int`

The `dict_int` extension provides a template for creating custom integer dictionaries. This allows you to define how integers are processed during full-text search indexing and querying. A default dictionary named `intdict` is provided, with default parameters of `maxlen = 6`, `rejectlong = false`, and `absval = false`.

**Important** Modifying the default intdict dictionary on Neon: The default `intdict` dictionary is owned by a superuser. On Neon, you do not have permissions to directly `ALTER` this default dictionary, which can result in an "ERROR: must be owner of text search dictionary intdict". The recommended approach is to **create your own custom dictionary** from the `intdict_template`, which gives you full control over its parameters.

### Creating and configuring a custom integer dictionary

You can create new dictionaries from the `intdict_template` and specify your desired parameters.

#### Example: Create a dictionary named `my_custom_intdict` with `maxlen` set to `4`, `rejectlong` to `true`, and `absval` to `true`:

```sql
CREATE TEXT SEARCH DICTIONARY my_custom_intdict (
    TEMPLATE = intdict_template,
    MAXLEN = 4,
    REJECTLONG = true,
    ABSVAL = true
);
```

If you need to change its parameters later, you can `ALTER` the dictionary:

```sql
ALTER TEXT SEARCH DICTIONARY my_custom_intdict (
    MAXLEN = 3,
    REJECTLONG = false,
    ABSVAL = false
);
```

### Utilizing with `ts_lexize`

The `ts_lexize` function is used for testing how a dictionary processes input tokens. It shows what lexemes (if any) are produced. To test the behavior of a custom dictionary, use `ts_lexize` with the dictionary name and an integer string.

```sql
CREATE TEXT SEARCH DICTIONARY intdict_for_testing (
    TEMPLATE = intdict_template,
    MAXLEN = 3,
    REJECTLONG = false,
    ABSVAL = true
);
```

Now, test this dictionary with various integer inputs:

```sql
SELECT ts_lexize('intdict_for_testing', '123');    -- Result: {123}
SELECT ts_lexize('intdict_for_testing', '12345');  -- Result: {123} (truncated)
SELECT ts_lexize('intdict_for_testing', '-98765'); -- Result: {987} (absval applied)
SELECT ts_lexize('intdict_for_testing', '+12');    -- Result: {12} (absval applied)
```

Test with `rejectlong` set to `true`:

```sql
ALTER TEXT SEARCH DICTIONARY intdict_for_testing ( REJECTLONG = true );

SELECT ts_lexize('intdict_for_testing', '1234567'); -- Result: {} (empty, rejected)
SELECT ts_lexize('intdict_for_testing', '987');     -- Result: {987} (within limit)
```

## Integrating `dict_int` into a text search configuration

For a custom integer dictionary to be used during indexing and searching, it must be associated with specific token types in a text search configuration.

**Important** Modifying default text search configurations on Neon: Altering default text search configurations (like `english`) requires superuser privileges on Neon. If you encounter an "ERROR: must be owner of text search configuration english", you will need to first **create a copy of an existing configuration** (e.g., `english`) and then modify your own copy.

Here's the recommended approach:

1. Create your custom integer dictionary (if you haven't already):

   ```sql
   CREATE TEXT SEARCH DICTIONARY my_custom_intdict (
       TEMPLATE = intdict_template,
       MAXLEN = 8,         -- Example: Max 8 digits
       REJECTLONG = false, -- Example: Truncate long numbers
       ABSVAL = true       -- Example: Ignore signs
   );
   ```

2. Create a copy of an existing text search configuration (e.g., `english`):

   ```sql
   CREATE TEXT SEARCH CONFIGURATION public.my_app_search_config (COPY = pg_catalog.english);
   ```

   The above SQL creates a new configuration named `my_app_search_config` that inherits the settings of the `english` configuration.

3. Alter the copied configuration to use your custom dictionary for integer token types (`int` and `uint`):

   ```sql
   ALTER TEXT SEARCH CONFIGURATION public.my_app_search_config
       ALTER MAPPING FOR int, uint WITH my_custom_intdict;
   ```

Now `public.my_app_search_config` is set up to use `my_custom_intdict` for processing integers, and it can be used in `to_tsvector` and `to_tsquery` calls to process integer tokens according to the rules defined in `my_custom_intdict`.

## Example

Let's consider a scenario where we have a `documents` table with a `version_code` field stored as text.
These codes can be like "v1", "V2.0", "Rev 003", or purely numeric like "1001", "005". We want to run full-text searches over these, focusing on the numeric parts, using a custom integer dictionary and a custom text search configuration.

### Sample table and data

```sql
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    title TEXT,
    content TEXT,
    version_code TEXT
);

INSERT INTO documents (title, content, version_code) VALUES
  ('Intro Guide', 'Content of version 1...', '1'),
  ('Advanced Manual', 'More content...', '0042'),
  ('Internal Spec', 'Spec details...', '7654321'),
  ('Internal Spec v2', 'Updated spec...', '+7654321'),
  ('Draft Notes', 'Preliminary ideas...', 'ver003');
```

### Create custom dictionary and text search configuration

1. Create a custom integer dictionary, `doc_version_intdict`, with `maxlen` set to 4, `rejectlong` to `true`, and `absval` to `true`.

   ```sql
   CREATE TEXT SEARCH DICTIONARY doc_version_intdict (
       TEMPLATE = intdict_template,
       MAXLEN = 4,
       REJECTLONG = true,
       ABSVAL = true
   );
   ```

2. Create a copy of the `english` text search configuration, naming it `doc_search_config`.

   ```sql
   CREATE TEXT SEARCH CONFIGURATION public.doc_search_config (COPY = pg_catalog.english);
   ```

3. Alter `doc_search_config` to use `doc_version_intdict` for integers.

   ```sql
   ALTER TEXT SEARCH CONFIGURATION public.doc_search_config
       ALTER MAPPING FOR int, uint WITH doc_version_intdict;
   ```

### Add a `tsvector` column and index the data

Populate a `tsvector` column for `version_code` using the custom configuration. First, add the column to the `documents` table, then fill it:

```sql
ALTER TABLE documents ADD COLUMN version_tsv TSVECTOR;

UPDATE documents SET version_tsv = to_tsvector('public.doc_search_config', version_code); -- Use custom config
```

### Examine the indexed tokens

To see how the `version_code` values are indexed, you can query the `documents` table:

```sql
SELECT id, title, version_code, version_tsv FROM documents;
```

```text
 id |      title       | version_code | version_tsv
----+------------------+--------------+-------------
  1 | Intro Guide      | 1            | '1':1
  2 | Advanced Manual  | 0042         | '0042':1
  3 | Internal Spec    | 7654321      |
  4 | Internal Spec v2 | +7654321     |
  5 | Draft Notes      | ver003       | 'ver003':1
(5 rows)
```

In this example:

- The `version_code` "1" is indexed as `'1':1`.
- The `version_code` "0042" is indexed as `'0042':1`.
- The long version codes "7654321" and "+7654321" are not indexed at all: after `absval` strips the sign, both exceed `maxlen = 4`, and `rejectlong = true` discards them.
- The version code "ver003" is indexed as `'ver003':1` because it is tokenized as a word rather than an integer, so the integer dictionary's rules don't apply to it.

### Searching

Using the custom configuration, you can now search for specific version codes:

```sql
-- Find documents with version code '0042'
SELECT title, version_code FROM documents
WHERE version_tsv @@ to_tsquery('public.doc_search_config', '0042');
-- (Advanced Manual, '0042')

-- Try to find the long version code
SELECT title, version_code FROM documents
WHERE version_tsv @@ to_tsquery('public.doc_search_config', '7654321');
-- No rows returned: '7654321' was rejected at indexing time and is
-- rejected again when the query itself is parsed
```

## Limitations

- **Integer-specific:** `dict_int` is designed for whole numbers (integers). It does not process floating-point numbers (e.g., `3.14159`). Standard FTS tokenizers will handle floating-point numbers, but `dict_int`'s logic won't apply to them.
- **Text representation:** It operates on the textual representation of numbers as tokenized by the FTS parser. If your column is of type `INTEGER` and you cast it to `TEXT` for `to_tsvector`, `dict_int` will then process that text (see the sketch below).
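For illustration, here is a minimal sketch of that casting pattern. The `orders` table and its `order_number` column are hypothetical, introduced only for this example:

```sql
-- Hypothetical table where the searchable value is a true INTEGER
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    order_number INTEGER
);

-- Casting to TEXT makes the FTS parser emit an integer token,
-- which the custom dictionary then truncates or rejects per its rules
SELECT to_tsvector('public.doc_search_config', order_number::TEXT)
FROM orders;
```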
## Conclusion The `dict_int` dictionary template is a valuable tool in Postgres for fine-tuning how integer values are handled in Full-Text Search. By customizing the way integers are indexed, you can achieve several benefits: - **Reduced index size:** Custom integer dictionaries help prevent the proliferation of unique numeric lexemes by truncating or rejecting overly long numbers and normalizing signed ones. This keeps your FTS indexes smaller and more manageable. - **Improved search performance:** As a general rule, smaller, more optimized indexes lead to faster search query execution. - **More relevant search results:** By tailoring how numbers are processed, you can ensure that searches for numeric data are more aligned with user expectations and less susceptible to noise from irrelevant number formats. ## Resources - [PostgreSQL `dict_int` documentation](https://www.postgresql.org/docs/current/dict-int.html) - [Dictionary Testing with `ts_lexize`](https://www.postgresql.org/docs/current/textsearch-debugging.html#TEXTSEARCH-DICTIONARY-TESTING) - [PostgreSQL Full Text Search](https://neon.com/postgresql/postgresql-indexes/postgresql-full-text-search) - [Full Text Search using tsvector with Neon Postgres](https://neon.com/guides/full-text-search) - [Postgres tsvector data type](https://neon.com/docs/data-types/tsvector) --- # Source: https://neon.com/llms/extensions-earthdistance.txt # The earthdistance extension > The document details the installation and usage of the earthdistance extension in Neon, enabling users to calculate great-circle distances between two points on Earth using latitude and longitude coordinates. ## Source - [The earthdistance extension HTML](https://neon.com/docs/extensions/earthdistance): The original HTML version of this documentation The `earthdistance` extension for Postgres provides functions to calculate great-circle distances between points on the Earth's surface. This is essential for applications requiring geospatial distance calculations, such as location-based services, mapping applications, logistics, and any system that needs to find nearby points or calculate travel distances. **Important** Accuracy and assumptions: The `earthdistance` extension primarily assumes a spherical Earth model for its calculations, which provides good approximations for many use cases. It relies on the [`cube`](https://neon.com/docs/extensions/cube) extension for some of its underlying operations. You may consider using the [`postgis` extension](https://neon.com/docs/extensions/postgis) if accurate geospatial calculations are critical for your application. ## Enable the `earthdistance` extension To use `earthdistance`, you first need to enable it and its dependency, the [`cube` extension](https://neon.com/docs/extensions/cube). You can do this by running the following `CREATE EXTENSION` statements in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client like [psql](https://neon.com/docs/connect/query-with-psql-editor): ```sql CREATE EXTENSION IF NOT EXISTS cube; CREATE EXTENSION IF NOT EXISTS earthdistance; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## Core concepts The `earthdistance` extension offers two main ways to represent geographic points and calculate distances: 1. 
**Using the `earth` type:** This approach involves converting latitude and longitude coordinates into a special `earth` data type (which is a domain over `cube`, representing a point in 3D Cartesian coordinates based on a spherical earth model). Distances are calculated in meters. 2. **Using the native `point` type:** This approach uses the built-in `point` type in Postgres, where the first component is longitude and the second is latitude. It provides a specific operator for distance calculation, which returns results in statute miles. ### The `earth` data type and associated functions - `earth` data type Represents a point on the Earth's surface. It's internally a `cube` point representing a 3D Cartesian coordinate. You don't usually interact with its internal representation directly but use helper functions. - `ll_to_earth(latitude double precision, longitude double precision)` returns `earth` Converts latitude and longitude (in degrees) to an `earth` data type value. - `earth_distance(p1 earth, p2 earth)` returns double precision Calculates the great-circle distance in **meters** between two `earth` points. ```sql -- Distance between London and Paris SELECT earth_distance( ll_to_earth(51.5074, -0.1278), -- London ll_to_earth(48.8566, 2.3522) -- Paris ) AS distance_meters; -- Output: 343942.5946120387 ``` - `earth_box(location earth, radius_meters double precision)` returns `cube` Computes a bounding box (as a `cube` type) that encloses all points within the specified `radius_meters` from the given `location`. This is primarily used for optimizing radius searches with [GiST indexes](https://neon.com/postgresql/postgresql-indexes/postgresql-index-types#gist-indexes). ```sql -- Create a bounding box for a 10km radius around London SELECT earth_box(ll_to_earth(51.5074, -0.1278), 10000) AS search_box; ``` When used in queries, you typically use the `<@` operator from the `cube` extension. The `<@` operator means "is contained by". The expression `ll_to_earth(lat, lon) <@ earth_box(center_point_earth, search_radius_meters)` checks if the specific geographic point (represented as an `earth` type, which is a `cube` point) is contained within the square bounding `earth_box` (also a `cube`). For instance, if `point_A` is `ll_to_earth(51.5, -0.1)` (a point in London) and `london_box` is `earth_box(ll_to_earth(51.5074, -0.1278), 10000)`, then `point_A <@ london_box` would be `true`. ```sql SELECT ll_to_earth(51.5, -0.1) <@ earth_box(ll_to_earth(51.5074, -0.1278), 10000) AS is_within_box; -- Output: true ``` ### Using the `point` data type - `point` data type A built-in Postgres type representing a 2D point in Cartesian coordinates. In the context of `earthdistance`, the first component is longitude and the second is latitude. - `point1 <@> point2` returns double precision Calculates the great-circle distance in **statute miles** between two points. ```sql -- Distance between San Francisco (-122.4194 lon, 37.7749 lat) -- and New York (-74.0060 lon, 40.7128 lat) SELECT point '(-122.4194, 37.7749)' <@> point '(-74.0060, 40.7128)' AS distance_miles; -- Output: 2565.6899113306895 ``` ## Example usage Now that we've seen the core functions, let's create and populate a sample table to demonstrate practical usage scenarios. This table will store location data with latitude and longitude. 
```sql CREATE TABLE locations ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, latitude DOUBLE PRECISION NOT NULL, longitude DOUBLE PRECISION NOT NULL ); INSERT INTO locations (name, latitude, longitude) VALUES ('San Francisco', 37.7749, -122.4194), ('New York', 40.7128, -74.0060), ('Los Angeles', 34.0522, -118.2437), ('Chicago', 41.8781, -87.6298), ('London', 51.5074, -0.1278), ('Tokyo', 35.6895, 139.6917), ('Sydney', -33.8688, 151.2093); ``` ## Practical usage scenarios With our sample `locations` table, we can now explore common geospatial queries. ### Calculating distance between two specific points Using `ll_to_earth()` and `earth_distance()`: ```sql SELECT a.name AS location_a, b.name AS location_b, earth_distance( ll_to_earth(a.latitude, a.longitude), ll_to_earth(b.latitude, b.longitude) ) AS distance_meters FROM locations a, locations b WHERE a.name = 'San Francisco' AND b.name = 'New York'; ``` Output: ```text | location_a | location_b | distance_meters | |---------------|------------|---------------------| | San Francisco | New York | 4133731.792059527 | ``` ### Finding locations within a given radius Find all locations within 8000 kilometers of London using the `earth` type functions. ```sql SELECT name, earth_distance( ll_to_earth(latitude, longitude), ll_to_earth(51.5074, -0.1278) -- London's coordinates ) / 1000.0 AS distance_from_london_km -- Convert meters to km FROM locations WHERE earth_distance( ll_to_earth(latitude, longitude), ll_to_earth(51.5074, -0.1278) ) < 8000 * 1000 -- Radius in meters ORDER BY distance_from_london_km; ``` Output: ```text | name | distance_from_london_km | |---------------|-------------------------| | London | 0.0 | | New York | 5576.4892261332425 | | Chicago | 6360.125481207209 | ``` ## Indexing for performance For applications with many locations that require frequent radius searches or nearest-neighbor queries, indexing is crucial. GiST indexes are used with the `earth` type functions (`ll_to_earth`, `earth_box`). 1. **Create a GiST index on the `earth` representation of your coordinates:** This index will be on the result of the `ll_to_earth()` function applied to your latitude and longitude columns. ```sql CREATE INDEX locations_earth_coords_idx ON locations USING GIST (ll_to_earth(latitude, longitude)); ``` 2. **Perform an indexed radius search:** Let's find locations within 1000 km of San Francisco `(37.7749° N, -122.4194° W)`. ```sql SELECT name, earth_distance( ll_to_earth(latitude, longitude), ll_to_earth(37.7749, -122.4194) ) / 1000.0 AS distance_from_sf_km FROM locations WHERE -- This part uses the GiST index for a fast coarse filter ll_to_earth(latitude, longitude) <@ earth_box(ll_to_earth(37.7749, -122.4194), 1000 * 1000) -- Radius in meters -- This part is the exact distance check for refinement (necessary as earth_box is a square) AND earth_distance( ll_to_earth(latitude, longitude), ll_to_earth(37.7749, -122.4194) ) < 1000 * 1000 -- Radius in meters ORDER BY distance_from_sf_km; ``` **Explanation of the indexed query:** - The `ll_to_earth(latitude, longitude) <@ earth_box(...)` condition uses the GiST index. The `earth_box` function creates a square bounding box. The index quickly finds points whose `earth` representation falls within this box. - The second condition, `earth_distance(...) < radius`, is crucial. It performs the precise great-circle distance calculation for the candidate rows selected by the index, filtering them to the exact circular radius. 
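The same two-step pattern extends naturally to "nearest N" style queries: let the bounding box prune candidates via the index, then order the survivors by exact distance. The following is a sketch; the 5,000 km box is an arbitrary cutoff you would tune to your data:

```sql
-- The 3 locations nearest to San Francisco, using the GiST index
-- for the coarse bounding-box filter
SELECT
  name,
  earth_distance(
    ll_to_earth(latitude, longitude),
    ll_to_earth(37.7749, -122.4194)
  ) / 1000.0 AS distance_km
FROM locations
WHERE ll_to_earth(latitude, longitude) <@
      earth_box(ll_to_earth(37.7749, -122.4194), 5000 * 1000) -- 5,000 km box
ORDER BY distance_km
LIMIT 3;
```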
In both queries above, `earth_box` provides the rough (square) filter and `earth_distance` provides the exact (circular) filter.

## Conclusion

The `earthdistance` extension is a powerful and convenient tool in Postgres for applications dealing with geographic locations. It simplifies the calculation of great-circle distances, enabling features like location-based searching and distance filtering directly within your database. By understanding its core functions, data representations, and how to leverage GiST indexing, you can build efficient and effective geospatial queries.

## Resources

- PostgreSQL official documentation:
  - [earthdistance](https://www.postgresql.org/docs/current/earthdistance.html)
  - [cube](https://www.postgresql.org/docs/current/cube.html)
  - [point](https://www.postgresql.org/docs/current/datatype-geometric.html#DATATYPE-GEOMETRIC-POINTS)
- [Cube extension](https://neon.com/docs/extensions/cube)
- [Great-circle distance](https://en.wikipedia.org/wiki/Great-circle_distance)

---

# Source: https://neon.com/llms/extensions-fuzzystrmatch.txt

# The fuzzystrmatch extension

> The document details the installation and usage of the fuzzystrmatch extension in Neon, enabling users to perform string similarity and phonetic matching operations within their databases.

## Source

- [The fuzzystrmatch extension HTML](https://neon.com/docs/extensions/fuzzystrmatch): The original HTML version of this documentation

The `fuzzystrmatch` extension for Postgres provides a suite of functions to determine similarities and distances between strings. This is useful for applications that need to handle misspellings, phonetic variations, or simply find "close enough" matches in text data. Whether you're building a search engine, cleaning customer data, or trying to identify duplicate records, `fuzzystrmatch` offers powerful tools to compare strings beyond exact equality.

Imagine a user searching for "John Doe" but typing "Jon Dow", or needing to match "Smith" with "Smythe". `fuzzystrmatch` provides algorithms like [Soundex](https://en.wikipedia.org/wiki/Soundex), [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance), [Metaphone](https://en.wikipedia.org/wiki/Metaphone), and [Daitch-Mokotoff Soundex](https://en.wikipedia.org/wiki/Daitch%E2%80%93Mokotoff_Soundex) to tackle these challenges.

## Enable the `fuzzystrmatch` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Core functions and usage

`fuzzystrmatch` offers several algorithms, each with its strengths for different types of string comparisons.

### 1. Soundex

The Soundex system is a phonetic algorithm for indexing names by sound, as pronounced in English. It converts a string into a four-character code, where the first character is the first letter of the string, and the remaining three digits encode the consonants.

**Functions:**

- `soundex(text)` returns `text`: Computes the Soundex code of a string.
- `difference(text, text)` returns `int`: Computes the difference between the Soundex codes of two strings. The result ranges from 0 (no match) to 4 (exact match on Soundex codes).
**Examples:**

The `soundex` function generates the phonetic code, and `difference` measures how similar these codes are. For instance, names that sound similar often share Soundex codes or have very similar ones:

- Pairs like ("Smith"/"Smythe") and ("John"/"Jon") yield the same Soundex codes (S530 and J500 respectively), indicating they sound very similar. The `difference` function confirms this with a score of 4 (an exact match on the Soundex code).
- Similarly, ("Robert"/"Rupert") both produce the Soundex code R163 and thus also have a `difference` score of 4.
- In contrast, a pair like ("Anne"/"Andrew") yields different Soundex codes (A500 vs A536) and a `difference` score of 2, reflecting a lesser degree of phonetic similarity according to Soundex.

Let's see these in action with SQL:

```sql
SELECT soundex('Smith'), soundex('Smythe'); -- S530, S530
SELECT difference('Smith', 'Smythe'); -- 4

SELECT soundex('John'), soundex('Jon'); -- J500, J500
SELECT difference('John', 'Jon'); -- 4

SELECT soundex('Robert'), soundex('Rupert'); -- R163, R163
SELECT difference('Anne', 'Andrew'); -- 2 (A500 vs A536)
```

**Use case:** Useful for matching English names that sound similar but are spelled differently. Note that Soundex is not very effective for non-English names.

### 2. Levenshtein distance

The Levenshtein distance measures the similarity between two strings by counting the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one string into the other. A lower distance indicates greater similarity.

**Functions:**

- `levenshtein(source text, target text)` returns `int`: Calculates Levenshtein distance with a cost of 1 for each insertion, deletion, or substitution.
- `levenshtein(source text, target text, ins_cost int, del_cost int, sub_cost int)` returns `int`: Calculates Levenshtein distance with specified costs for operations.
- `levenshtein_less_equal(source text, target text, max_d int)` returns `int`: An accelerated version. If the actual distance is less than or equal to `max_d`, it returns the correct distance; otherwise, it returns a value greater than `max_d`. This is faster if you only care about small distances.
- `levenshtein_less_equal(source text, target text, ins_cost int, del_cost int, sub_cost int, max_d int)` returns `int`: Accelerated version with custom costs.

Both `source` and `target` must be non-`NULL` strings of up to 255 characters.

**Examples:**

The Levenshtein distance quantifies the "edit effort" between strings. Consider these transformations:

1. To change "kitten" to "sitting":
   - Substitute 'k' with 's' (kitten -> sitten)
   - Substitute 'e' with 'i' (sitten -> sittin)
   - Insert 'g' at the end (sittin -> sitting)

   This requires 3 edits, so the Levenshtein distance is 3.

2. To change "apple" to "apply":
   - Substitute 'e' with 'y' (apple -> apply)

   This is 1 edit, giving a distance of 1.

3. If comparing "book" and "back":
   - Substitute 'o' with 'a' (book -> baok)
   - Substitute 'o' with 'c' (baok -> back)

   This requires 2 edits, resulting in a distance of 2.

The function can also take custom costs for insertion, deletion, and substitution, which can be useful for domain-specific needs.
Let's see these in action with SQL:

```sql
-- kitten to sitting (default costs: 1 for ins, del, sub)
SELECT levenshtein('kitten', 'sitting'); -- 3

-- apple to apply
SELECT levenshtein('apple', 'apply'); -- 1

-- book to back
SELECT levenshtein('book', 'back'); -- 2

-- Levenshtein distance is case-sensitive
SELECT levenshtein('book', 'Book'); -- 1

-- Example with custom costs: 1 for ins, 2 for del, 3 for sub
SELECT levenshtein('book', 'back', 1, 2, 3); -- 6
-- Two possible minimum-cost paths:
--   2 substitutions (o -> a, o -> c): cost = 2 x 3 = 6
--   delete both o's and insert 'a' and 'c': cost = 2 x 2 + 2 x 1 = 6

-- Using levenshtein_less_equal for efficiency when only small distances matter
SELECT levenshtein_less_equal('banana', 'bandana', 1);
-- Returns 1 (correct, as only one insertion 'd' is needed)

SELECT levenshtein_less_equal('longstringexample', 'short', 2);
-- Returns a value > 2 (actual distance is much higher, so it stops early)
```

**Use case:** Excellent for general typo correction, finding strings with minor differences, and when character-level edit distance is important. Works well with various languages, including those with multibyte encodings. Remember that Levenshtein is case-sensitive.

### 3. Metaphone and Double Metaphone

Metaphone algorithms, like Soundex, generate phonetic codes for strings. They are generally more accurate than Soundex for English words. Double Metaphone provides primary and alternate encodings, offering better support for non-English words.

**Functions:**

- `metaphone(text, max_output_length int)` returns `text`: Computes the Metaphone code for a string, up to a specified maximum length.
- `dmetaphone(text)` returns `text`: Computes the primary Double Metaphone code.
- `dmetaphone_alt(text)` returns `text`: Computes the alternate Double Metaphone code (this matches the primary code when there is no distinct alternate).

**Examples:**

```sql
SELECT metaphone('Michael', 8); -- MXL
SELECT metaphone('algorithm', 10); -- ALKR0M

SELECT dmetaphone('Smith'), dmetaphone_alt('Smith'); -- SM0, XMT
SELECT dmetaphone('Schmidt'); -- XMT
SELECT dmetaphone_alt('Schmidt'); -- SMT

-- Primary and alternate for a name with multiple pronunciations
SELECT dmetaphone('Joan'), dmetaphone_alt('Joan'); -- JN, AN (as in Spanish 'Joan Miró')
```

**Use case:** Good for matching English words phonetically. Double Metaphone is an improvement, especially with its alternate codes for handling variations in pronunciation and non-English names.

### 4. Daitch-Mokotoff Soundex

Daitch-Mokotoff (DM) Soundex is another phonetic algorithm, significantly more useful for non-English names than the original Soundex.

**Key improvements over original Soundex:**

- Codes are based on the first six meaningful letters (not four).
- Maps letters/combinations to ten possible codes (not seven).
- Multiple codes can be emitted if a letter/combination has different sounds.

**Function:**

`daitch_mokotoff(source text) returns text[]`: Generates an array of Daitch-Mokotoff Soundex codes for the input string. The result is an array because a name can have multiple plausible pronunciations. DM codes are 6 digits long. `source` should preferably be a single word or name.
**Examples:** ```sql SELECT daitch_mokotoff('George'); -- {595000} SELECT daitch_mokotoff('John'); -- {160000,460000} (Reflects 'J' vs 'Y' sound possibilities) SELECT daitch_mokotoff('Bierschbach'); -- {794575,794574,794750,794740,745750,745740,747500,747400} ``` **Matching Daitch-Mokotoff codes:** Since `daitch_mokotoff` returns an array, you can use the array overlap operator `&&` for matching: ```sql CREATE TABLE surnames (name TEXT); INSERT INTO surnames VALUES ('Peterson'), ('Petersen'), ('Pietersen'); SELECT name FROM surnames WHERE daitch_mokotoff(name) && daitch_mokotoff('Petterson'); ``` ``` name ----------- Peterson Petersen Pietersen (3 rows) ``` **Use case:** Best for phonetic matching of European names, particularly when Soundex is insufficient. Works with multibyte encodings. ## Practical usage scenarios Let's see how these functions can be applied in common scenarios. ### Finding misspelled names in a customer database Suppose you have a `customers` table and want to find customers whose names are similar to "Jon Smithe". ```sql CREATE TABLE customers (id INT, name TEXT); INSERT INTO customers VALUES (1, 'John Smith'), (2, 'Jon Smythe'), (3, 'Jane Doe'), (4, 'Jonathan Smithson'); -- Using Levenshtein distance SELECT * FROM customers WHERE levenshtein(lower(name), lower('Jon Smithe')) <= 3; ``` ``` id | name ----+----------------- 1 | John Smith 2 | Jon Smythe (2 rows) ``` ```sql -- Using Soundex difference SELECT * FROM customers WHERE difference(name, 'Jon Smithe') >= 3; ``` ``` id | name ----+------------ 1 | John Smith 2 | Jon Smythe 4 | Jonathan Smithson (3 rows) ``` ### Typo correction in search input It can be useful to suggest corrections for user input. For example, if a user types "Portgeasql", you can suggest "PostgreSQL": ```sql WITH potential_matches AS ( SELECT 'PostgreSQL' AS term UNION ALL SELECT 'MySQL' UNION ALL SELECT 'SQLite' ) SELECT term, levenshtein(lower(term), 'portgeasql') AS distance FROM potential_matches ORDER BY distance LIMIT 1; ``` ``` term | distance -----------+---------- PostgreSQL | 3 (1 row) ``` Let's say the user types "sequelite" instead of "SQLite": ```sql WITH potential_matches AS ( SELECT 'PostgreSQL' AS term UNION ALL SELECT 'MySQL' UNION ALL SELECT 'SQLite' ) SELECT term, levenshtein(lower(term), 'sequelite') AS distance FROM potential_matches ORDER BY distance LIMIT 1; ``` ``` term | distance -----------+---------- SQLite | 3 (1 row) ``` The values can then be used to suggest corrections or alternatives to the user. ## Limitations and considerations - **Multibyte encodings:** `soundex`, `metaphone`, `dmetaphone`, and `dmetaphone_alt` are not reliable for UTF-8 or other multibyte encodings. Use `daitch_mokotoff` or `levenshtein` for such cases. - **Phonetic nuances:** Phonetic algorithms simplify pronunciation. They might not always align perfectly with intended pronunciations or capture all linguistic subtleties, potentially leading to false positives or negatives. - **Computational cost:** Levenshtein distance can be resource-intensive on large datasets without `levenshtein_less_equal` or proper indexing strategies. ## Conclusion The `fuzzystrmatch` extension is useful for tackling the common problem of inexact string matching. By understanding the strengths and weaknesses of functions like `soundex`, `levenshtein`, `metaphone`, `dmetaphone`, and `daitch_mokotoff`, you can enhance your application's ability to handle typos, phonetic variations, and find similar text data effectively. 
Always consider the nature of your data (especially language and encoding) and your specific matching requirements when choosing the right function.

## Resources

- [PostgreSQL `fuzzystrmatch` documentation](https://www.postgresql.org/docs/current/fuzzystrmatch.html)
- Algorithms:
  - [Soundex](https://en.wikipedia.org/wiki/Soundex)
  - [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance)
  - [Metaphone](https://en.wikipedia.org/wiki/Metaphone)
  - [Daitch-Mokotoff Soundex](https://en.wikipedia.org/wiki/Daitch%E2%80%93Mokotoff_Soundex)

---

# Source: https://neon.com/llms/extensions-hstore.txt

# The hstore extension

> The document details the hstore extension for Neon, enabling users to store sets of key-value pairs within a single PostgreSQL value, facilitating semi-structured data management.

## Source

- [The hstore extension HTML](https://neon.com/docs/extensions/hstore): The original HTML version of this documentation

The `hstore` extension is a flexible way to store and manipulate sets of key-value pairs within a single Postgres value. It is particularly useful for semi-structured data or data that does not have a rigid schema. This guide covers the basics of the `hstore` extension - how to enable it, store and query key-value pairs, and perform operations on `hstore` data, with examples.

`hstore` is valuable in scenarios where schema-less data needs to be stored efficiently, such as in configurations, application settings, or any situation where the data structure may evolve over time.

**Note**: `hstore` is an open-source extension for Postgres that can be installed on any compatible Postgres instance. Detailed installation instructions and compatibility information can be found at [PostgreSQL Extensions](https://www.postgresql.org/docs/current/contrib.html).

**Version availability** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date information.

## Enable the `hstore` extension

Enable the extension by running the following SQL statement in your Postgres client:

```sql
CREATE EXTENSION IF NOT EXISTS hstore;
```

For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Example usage

**Creating a table with hstore column**

Consider a table that stores the product catalog for an electronics shop. Each product has a name and a set of attributes that describe it. The attributes for each product are not fixed and may change over time. This makes `hstore` a good choice for storing this data.

```sql
CREATE TABLE product (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255),
    attributes HSTORE
);
```
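An `hstore` literal is written as a comma-separated list of `key => value` pairs, with double quotes around any key or value that contains spaces or special characters. A quick way to get a feel for the format is to cast a string directly (a minimal sketch; the output is shown as a comment):

```sql
-- Keys and values are stored as strings; output is canonically quoted
SELECT 'brand => HP, model => "Pavilion 15"'::hstore;
-- "brand"=>"HP", "model"=>"Pavilion 15"
```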
**Inserting data**

You insert data into an `hstore` column by supplying a string of key-value pairs:

```sql
INSERT INTO product (name, attributes)
VALUES
  ('Desktop', 'brand => HP, price => 900, processor => "Intel Core i5", storage => "1TB HDD"'),
  ('Tablet', 'brand => Apple, price => 500, os => iOS, screen_size => 10.5'),
  ('Smartwatch', 'brand => Garmin, price => 250, water_resistant => true, battery_life => "7 days"'),
  ('Camera', 'brand => Nikon, price => 1200, megapixels => 24, video_resolution => "4K"'),
  ('Laptop', 'brand => Dell, price => 1200, screen_size => 15.6'),
  ('Smartphone', 'brand => Samsung, price => 800, os => Android'),
  ('Headphones', 'brand => Sony, price => 150, wireless => true, color => "Black"');
```

`hstore` stores both keys and values as strings (values can also be `NULL`). Numeric attributes like price and megapixels are therefore cast to strings when inserted into the table.

**Querying `hstore` data**

`hstore` columns can be referenced as regular columns in a query. To access the attributes in an `hstore` column, we use the `->` operator. For example, to retrieve the name and brand for all products with price less than 1000, we can run the following query:

```sql
SELECT name, attributes->'brand' AS brand
FROM product
WHERE (attributes->'price')::INT < 1000;
```

Since the `price` attribute is stored as a string, we need to cast it to an integer before comparing it to 1000. This query returns the following:

```text
| name       | brand   |
|------------|---------|
| Desktop    | HP      |
| Tablet     | Apple   |
| Smartwatch | Garmin  |
| Smartphone | Samsung |
| Headphones | Sony    |
```

## Operators for `hstore` data

`hstore` offers a variety of operators for manipulating and querying key-value pairs. We go over some examples below.

**Check if a key exists**

The `?` operator is used to check if an `hstore` contains a specific key.

```sql
SELECT id, name
FROM product
WHERE attributes ? 'os';
```

This query returns the following:

```text
| id | name       |
|----|------------|
| 2  | Tablet     |
| 6  | Smartphone |
```

**Check if an hstore contains another hstore**

The `@>` operator is used to check if the `hstore` on the left contains the right operand. For example, the query below looks for products that have a `brand` attribute of `Apple`.

```sql
SELECT id, name
FROM product
WHERE attributes @> 'brand => "Apple"';
```

This query returns the following:

```text
| id | name   |
|----|--------|
| 2  | Tablet |
```

**Concatenating two hstore values**

The `||` operator is used to concatenate two `hstore` values. For example, the query below updates the attributes for the product with name `Laptop`.

```sql
UPDATE product
SET attributes = attributes || 'weight => 2.5'
WHERE name = 'Laptop' AND attributes -> 'brand' = 'Dell';
```

To verify, we can run the query below.

```sql
SELECT id, name, attributes -> 'weight' AS weight
FROM product
WHERE name = 'Laptop' AND attributes -> 'brand' = 'Dell';
```

This query returns the following:

```text
| id | name   | weight |
|----|--------|--------|
| 5  | Laptop | 2.5    |
```

**Check if an hstore contains any of the specified keys**

The `?|` operator is used to check if an `hstore` contains any of the keys specified in the right operand. For example, the query below returns all products that have either a `screen_size` or `megapixels` attribute.

```sql
SELECT id, name
FROM product
WHERE attributes ?| ARRAY['screen_size', 'megapixels'];
```

This query returns the following:

```text
| id | name   |
|----|--------|
| 2  | Tablet |
| 4  | Camera |
| 5  | Laptop |
```

## `hstore` functions

The `hstore` extension also adds functions for manipulating `hstore` data.
We go over some examples below.

**Retrieve all keys**

The `akeys` function returns an array of all the keys in an `hstore` value. For example, the query below returns all the keys for Dell laptop products.

```sql
SELECT id, name, akeys(attributes) AS keys
FROM product
WHERE name = 'Laptop' AND attributes -> 'brand' = 'Dell';
```

This query returns the following:

```text
| id | name   | keys                             |
|----|--------|----------------------------------|
| 5  | Laptop | {brand,price,weight,screen_size} |
```

**Convert hstore to JSON**

The `hstore_to_json` function converts an `hstore` value to `JSON`. For example, the query below converts the `attributes` column to `JSON` for all products with a `brand` attribute of `Apple`.

```sql
SELECT hstore_to_json(attributes) AS attributes
FROM product
WHERE attributes -> 'brand' = 'Apple';
```

**Extract all keys and values**

The `each` function returns the set of key-value pairs for an `hstore` value. For example, the query below returns each attribute of the Nikon Camera as a separate row.

```sql
SELECT id, (each(attributes)).*
FROM product
WHERE name = 'Camera' AND attributes -> 'brand' = 'Nikon';
```

This query returns the following (the `id` is the Camera's row id, repeated for each key-value pair):

```text
| id | key              | value |
|----|------------------|-------|
| 4  | brand            | Nikon |
| 4  | price            | 1200  |
| 4  | megapixels       | 24    |
| 4  | video_resolution | 4K    |
```

## Comparing `hstore` with `JSON`

The `hstore` and `JSON` data types can both be used to store semi-structured data. `hstore` has a flat data model — both keys and values must be strings. This makes it more efficient for simple key-value data. In contrast, `JSON` supports a variety of data types, and can also store nested data structures. This makes it more flexible, but trades off some performance.

## Indexing and performance

Indexing can improve the performance of queries involving `hstore` data, particularly for large datasets. `hstore` supports the regular `btree` and `hash` indexes. However, these are only useful for equality comparisons of the entire `hstore` value, since they have no knowledge of its substructure.

```sql
CREATE INDEX btree_idx_attributes ON product USING btree (attributes);
```

For queries that involve key-level filtering, like the `@>` or the `?` operators, the `GIN` and `GiST` indexes are more useful. These indexes can be created as shown in this example:

```sql
CREATE INDEX gin_idx_attributes ON product USING gin (attributes);
```

## Conclusion

The `hstore` extension offers a powerful and flexible way to handle semi-structured data in Postgres. This guide provides an overview of using `hstore`, including creating records and querying on its attributes. It also covers some of the common operators and functions available for `hstore` data.

## Resources

- [PostgreSQL hstore documentation](https://www.postgresql.org/docs/current/hstore.html)

---

# Source: https://neon.com/llms/extensions-intarray.txt

# The intarray extension

> The document details the intarray extension for Neon, which enhances PostgreSQL by enabling operations on integer arrays, such as searching, sorting, and performing set operations, to optimize database performance and functionality.

## Source

- [The intarray extension HTML](https://neon.com/docs/extensions/intarray): The original HTML version of this documentation

The `intarray` extension for Postgres provides functions and operators for handling arrays of integers.
It's particularly optimized for arrays that do not contain any `NULL` values, offering significant performance advantages for certain operations compared to Postgres's built-in array functions. This extension is useful when you need to perform set-like operations (unions, intersections), check for containment or overlap, or conduct indexed searches on integer arrays, common in applications like tagging systems, access control lists, or product categorization.

## Enable the `intarray` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS intarray;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## `intarray` functions

The `intarray` extension provides several useful functions for array manipulation:

- `icount(integer[]) → integer`: Returns the number of elements in the array.

  ```sql
  SELECT icount('{1,2,3,2}'::integer[]); -- Result: 4
  ```

- `sort(integer[], dir text) → integer[]`: Sorts the array. `dir` can be 'asc' (ascending) or 'desc' (descending).

  ```sql
  SELECT sort('{1,3,2}'::integer[], 'desc'); -- Result: {3,2,1}
  ```

- `sort_asc(integer[]) → integer[]`: Sorts the array in ascending order. (Equivalent to `sort(arr, 'asc')`.)

  ```sql
  SELECT sort_asc('{11,77,44}'::integer[]); -- Result: {11,44,77}
  ```

- `sort_desc(integer[]) → integer[]`: Sorts the array in descending order. (Equivalent to `sort(arr, 'desc')`.)

  ```sql
  SELECT sort_desc('{11,77,44}'::integer[]); -- Result: {77,44,11}
  ```

- `uniq(integer[]) → integer[]`: Removes _adjacent_ duplicate values from the array. To remove all duplicates, sort the array first.

  ```sql
  SELECT uniq('{1,2,2,3,1,1}'::integer[]); -- Result: {1,2,3,1}
  SELECT uniq(sort('{1,2,2,3,1,1}'::integer[])); -- Result: {1,2,3}
  ```

- `idx(integer[], item integer) → integer`: Returns the 1-based index of the first occurrence of `item` in the array, or 0 if not found.

  ```sql
  SELECT idx(array[11,22,33,22,11], 22); -- Result: 2
  ```

- `subarray(integer[], start_idx integer, len integer) → integer[]`: Extracts a subarray of `len` elements starting from `start_idx` (1-based).

  ```sql
  SELECT subarray('{1,2,3,4,5}'::integer[], 2, 3); -- Result: {2,3,4}
  ```

- `subarray(integer[], start_idx integer) → integer[]`: Extracts a subarray from `start_idx` to the end of the array.

  ```sql
  SELECT subarray('{1,2,3,4,5}'::integer[], 3); -- Result: {3,4,5}
  ```

- `intset(integer) → integer[]`: Creates a single-element integer array.

  ```sql
  SELECT intset(42); -- Result: {42}
  ```

## `intarray` operators

`intarray` offers a set of operators for comparing and manipulating integer arrays:

| Operator      | Description                                                                                          | Example                                 | Result      |
| ------------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------- | ----------- |
| `&&`          | Overlap: Do arrays have at least one element in common?                                              | `'{1,2,3}'::int[] && '{3,4,5}'::int[]`  | `true`      |
| `@>`          | Contains: Does the left array contain all elements of the right array?                               | `'{1,2,3,4}'::int[] @> '{2,3}'::int[]`  | `true`      |
| `<@`          | Is contained by: Is the left array contained within the right array?                                 | `'{2,3}'::int[] <@ '{1,2,3,4}'::int[]`  | `true`      |
| `+ integer`   | Add element: Adds an integer to the end of the array.                                                | `'{1,2}'::int[] + 3`                    | `{1,2,3}`   |
| `+ integer[]` | Concatenate arrays.                                                                                  | `'{1,2}'::int[] + '{3,4}'::int[]`       | `{1,2,3,4}` |
| `- integer`   | Remove element: Removes all occurrences of the integer from the array.                               | `'{1,2,3,2}'::int[] - 2`                | `{1,3}`     |
| `- integer[]` | Remove elements: Removes elements of the right array from the left array.                            | `'{1,2,3,4}'::int[] - '{2,4,5}'::int[]` | `{1,3}`     |
| `\|`          | Union: Computes the union of the two arrays; the result is sorted and deduplicated (equivalent to `uniq(sort(a + b))`). | `'{1,2}'::int[] \| '{2,3}'::int[]`      | `{1,2,3}`   |
| `&`           | Intersection: Computes the intersection of the two arrays; the result is sorted and deduplicated.    | `'{1,2,3}'::int[] & '{2,3,4}'::int[]`   | `{2,3}`     |
| `#` (prefix)  | Number of elements. (Same as the `icount` function.)                                                 | `#'{1,2,3,4}'::int[]`                   | `4`         |
| `#` (infix)   | Index of element, 1-based. (Same as the `idx` function.)                                             | `'{10,20,30}'::int[] # 20`              | `2`         |

### `query_int` operators

`intarray` introduces a special data type `query_int` for constructing complex search queries against integer arrays.

- `array @@ query_int → boolean`: Does the array satisfy the `query_int`?
- `query_int ~~ array → boolean`: Commutator for `@@`. Does the array satisfy the `query_int`?

A `query_int` consists of integer values combined with operators:

- `&` (AND)
- `|` (OR)
- `!` (NOT)

Parentheses `()` can be used for grouping. Example: `1&(2|3)` matches arrays that contain `1` AND (either `2` OR `3`).

```sql
SELECT '{1,2,7}'::integer[] @@ '1 & (2|3)'::query_int; -- true (1 is present, 2 is present)
SELECT '{1,3,8}'::integer[] @@ '1 & (2|3)'::query_int; -- true (1 is present, 3 is present)
SELECT '1 & (2|3)'::query_int ~~ '{1,3,8}'::integer[]; -- commutator version of the above
SELECT '{1,4,9}'::integer[] @@ '1 & (2|3)'::query_int; -- false (1 is present, but neither 2 nor 3)
SELECT '{2,3,5}'::integer[] @@ '1 & (2|3)'::query_int; -- false (1 is not present)
```

## Example usage

Let's create a table to store an example dataset of articles with tags represented as integer arrays.
```sql CREATE TABLE articles ( id SERIAL PRIMARY KEY, title TEXT NOT NULL, tag_ids INTEGER[] -- This will store an array of integer tag IDs ); INSERT INTO articles (title, tag_ids) VALUES ('Postgres Performance Tips', '{1,2,3}'), ('Introduction to SQL', '{2,4}'), ('Advanced intarray Usage', '{1,3,5}'), ('Database Normalization', '{4,6}'); ``` ### Basic set operations Find articles tagged with either tag 2 `OR` tag 5 (overlap): ```sql SELECT title, tag_ids FROM articles WHERE tag_ids && '{2,5}'::integer[]; ``` Output: ```text | title | tag_ids | |---------------------------|---------| | Postgres Performance Tips | {1,2,3} | | Introduction to SQL | {2,4} | | Advanced intarray Usage | {1,3,5} | ``` Find articles tagged with both tag `1` AND tag `2` (contains): ```sql SELECT title, tag_ids FROM articles WHERE tag_ids @> '{1,2}'::integer[]; ``` Output: ```text | title | tag_ids | |---------------------------|---------| | Postgres Performance Tips | {1,2,3} | ``` Find articles whose tags are fully contained within `{1,2,3,5}` (is contained by): ```sql SELECT title, tag_ids FROM articles WHERE tag_ids <@ '{1,2,3,5}'::integer[]; ``` Output: ```text | title | tag_ids | |---------------------------|---------| | Postgres Performance Tips | {1,2,3} | | Advanced intarray Usage | {1,3,5} | ``` ### Array manipulation and combining Get all unique tags used across articles "Postgres Performance Tips" and "Introduction to SQL": ```sql SELECT uniq(sort(a1.tag_ids + a2.tag_ids)) AS combined_unique_tags FROM articles a1, articles a2 WHERE a1.title = 'Postgres Performance Tips' AND a2.title = 'Introduction to SQL'; ``` Output: ```text | combined_unique_tags | |----------------------| | {1,2,3,4} | ``` Find common tags between "Postgres Performance Tips" and "Advanced intarray Usage" (intersection): ```sql SELECT a1.tag_ids & a2.tag_ids AS common_tags FROM articles a1 CROSS JOIN articles a2 WHERE a1.title = 'Postgres Performance Tips' AND a2.title = 'Advanced intarray Usage'; ``` Output: ```text | common_tags | |-------------| | {1,3} | ``` Add a new tag `7` to "Introduction to SQL": ```sql UPDATE articles SET tag_ids = tag_ids + 7 WHERE title = 'Introduction to SQL' RETURNING title, tag_ids; ``` Output: ```text | title | tag_ids | |-----------------------|-----------| | Introduction to SQL | {2,4,7} | ``` Remove tag `2` from "Postgres Performance Tips": ```sql UPDATE articles SET tag_ids = tag_ids - 2 WHERE title = 'Postgres Performance Tips' RETURNING title, tag_ids; ``` Output: ```text | title | tag_ids | |---------------------------|---------| | Postgres Performance Tips | {1,3} | ``` ### Using `query_int` for complex searches Find articles tagged with `1` AND (either `3` OR `4`): ```sql SELECT title, tag_ids FROM articles WHERE tag_ids @@ '1 & (3|4)'::query_int; ``` ```text | title | tag_ids | |---------------------------|---------| | Advanced intarray Usage | {1,3,5} | | Postgres Performance Tips | {1,3} | ``` ### Using `intarray` functions Find the index of tag `3` in "Postgres Performance Tips": ```sql SELECT title, idx(tag_ids, 3) AS index_of_tag_3 FROM articles WHERE title = 'Postgres Performance Tips'; ``` Output: ```text | title | index_of_tag_3 | |---------------------------|----------------| | Postgres Performance Tips | 2 | ``` ## Indexing with `intarray` `intarray` provides excellent indexing capabilities for its operators, which is crucial for performance on large datasets. It supports both GiST and GIN indexes. 
These indexes can accelerate queries using `&&`, `@>`, `@@`, and array equality; the GIN operator class additionally supports `<@`.

### GiST Index operator classes

- `gist__int_ops`: Suitable for small to medium-sized datasets. It approximates an integer set as an array of integer ranges.
  - Optional parameter: `numranges` (default 100, range 1-253). Defines the maximum number of ranges in an index key. Larger values lead to more precise (faster) searches but larger indexes.
- `gist__intbig_ops`: Better for large datasets (columns with many distinct array values). It approximates an integer set as a bitmap signature.
  - Optional parameter: `siglen` (default 16 bytes, range 1-2024 bytes). Defines the signature length. Longer signatures mean more precise searches but larger indexes.

> Note that the GiST operator classes do not support the `<@` operator; use a GIN index if you need indexed "is contained by" searches.

**Example GiST Index:**

To create a GiST index on `tag_ids` using `gist__intbig_ops` with a signature length of 32 bytes:

```sql
CREATE INDEX idx_articles_tag_ids_gist ON articles USING GIST (tag_ids gist__intbig_ops (siglen = 32));
```

To use the `gist__int_ops` operator class:

```sql
CREATE INDEX idx_articles_tag_ids_gist_default ON articles USING GIST (tag_ids gist__int_ops);
```

You can also specify parameters for `gist__int_ops`, for example:

```sql
CREATE INDEX idx_articles_tag_ids_gist_custom_ranges ON articles USING GIST (tag_ids gist__int_ops (numranges = 50));
```

### GIN Index operator class

`gin__int_ops`: This is a non-default GIN operator class. It supports `&&`, `@>`, `@@`, and also `<@`.

**Example GIN Index:**

```sql
CREATE INDEX idx_articles_tag_ids_gin ON articles USING GIN (tag_ids gin__int_ops);
```

## Practical applications

- **Tagging systems:** Efficiently find items associated with specific tags, combinations of tags, or overlapping tag sets.
- **Access Control Lists (ACLs):** Store group memberships or resource permissions as integer arrays and quickly check if a user (belonging to certain groups) has access to a resource.
- **Product categorization:** Manage products belonging to multiple categories and find products based on category inclusion/exclusion criteria.
- **Recommendation engines:** Identify items with similar properties by checking for overlaps in their feature ID arrays.

## Conclusion

The `intarray` extension provides a powerful set of tools within Postgres for efficiently managing and querying integer arrays. Its rich functions and operators are designed to significantly improve performance, particularly during complex array operations.

## Resources

- [PostgreSQL `intarray` documentation](https://www.postgresql.org/docs/current/intarray.html)

---

# Source: https://neon.com/llms/extensions-ltree.txt

# The ltree extension

> The document details the ltree extension for Neon, which enables users to store and query hierarchical data structures using labels and paths within a PostgreSQL database.

## Source

- [The ltree extension HTML](https://neon.com/docs/extensions/ltree): The original HTML version of this documentation

The `ltree` extension provides a data type for representing labels of data stored in a hierarchical tree-like structure. It offers specialized functions and operators for efficiently traversing and searching through these tree structures, making it ideal for modeling hierarchical relationships in your data.

This guide covers the basics of the `ltree` extension - how to enable it, create hierarchical data structures, and query tree data with examples.
The `ltree` extension is valuable for scenarios like organizational charts, file systems, category hierarchies, or any data that naturally fits into a parent-child relationship model. **Note**: `ltree` is an open-source extension for Postgres that can be installed on any compatible Postgres instance. Detailed information about the extension is available in the [PostgreSQL Documentation](https://www.postgresql.org/docs/current/ltree.html). **Version availability** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date information. ## Enable the `ltree` extension Enable the extension by running the following SQL statement in your Postgres client: ```sql CREATE EXTENSION IF NOT EXISTS ltree; ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). ## Understanding ltree data The `ltree` data type represents a path of labels separated by dots, similar to file system paths. Each label consists of alphanumeric characters and underscores, with a maximum length of 255 characters. Here are some examples of valid `ltree` values: ``` world world.europe.uk world.europe.uk.london tech.database.postgres.extensions ``` The dots in these paths represent hierarchical relationships, with each segment being a label in the tree. This structure allows for efficient traversal and querying of hierarchical data. ## Example usage Let's explore how to use the `ltree` extension with a practical example of a product category hierarchy for an e-commerce platform. **Creating a table with ltree column** First, let's create a table to store our product categories: ```sql CREATE TABLE product_categories ( id SERIAL PRIMARY KEY, name VARCHAR(100) NOT NULL, path ltree NOT NULL ); ``` **Inserting hierarchical data** Now, let's insert some sample data representing a product category hierarchy: ```sql INSERT INTO product_categories (name, path) VALUES ('Electronics', 'electronics'), ('Computers', 'electronics.computers'), ('Laptops', 'electronics.computers.laptops'), ('Gaming Laptops', 'electronics.computers.laptops.gaming'), ('Business Laptops', 'electronics.computers.laptops.business'), ('Desktop Computers', 'electronics.computers.desktops'), ('Smartphones', 'electronics.smartphones'), ('Android Phones', 'electronics.smartphones.android'), ('iOS Phones', 'electronics.smartphones.ios'), ('Clothing', 'clothing'), ('Men''s Clothing', 'clothing.mens'), ('Women''s Clothing', 'clothing.womens'), ('Children''s Clothing', 'clothing.childrens'); ``` ## Querying ltree data The `ltree` extension provides several operators and functions for querying hierarchical data. 
Let's explore some common query patterns: **Finding all descendants of a node** To find all subcategories under "Computers", we can use the `<@` operator, which checks if the path on the right is an ancestor of the path on the left: ```sql SELECT id, name, path FROM product_categories WHERE path <@ 'electronics.computers'; ``` This query returns: ```text | id | name | path | |----|--------------------|----------------------------------------| | 2 | Computers | electronics.computers | | 3 | Laptops | electronics.computers.laptops | | 4 | Gaming Laptops | electronics.computers.laptops.gaming | | 5 | Business Laptops | electronics.computers.laptops.business | | 6 | Desktop Computers | electronics.computers.desktops | ``` **Finding all ancestors of a node** To find all parent categories of "Gaming Laptops", we can use the `@>` operator, which checks if the path on the left is an ancestor of the path on the right: ```sql SELECT id, name, path FROM product_categories WHERE path @> 'electronics.computers.laptops.gaming'; ``` This query returns: ```text | id | name | path | |----|----------------|------------------------------------------| | 1 | Electronics | electronics | | 2 | Computers | electronics.computers | | 3 | Laptops | electronics.computers.laptops | | 4 | Gaming Laptops | electronics.computers.laptops.gaming | ``` **Finding nodes at a specific level** To find all categories at the second level of the hierarchy, we can use the `nlevel()` function, which returns the number of labels in an `ltree` path: ```sql SELECT id, name, path FROM product_categories WHERE nlevel(path) = 2; ``` This query returns: ```text | id | name | path | |----|--------------------|------------------------| | 2 | Computers | electronics.computers | | 7 | Smartphones | electronics.smartphones| | 11 | Men's Clothing | clothing.mens | | 12 | Women's Clothing | clothing.womens | | 13 | Children's Clothing| clothing.childrens | ``` **Pattern matching with wildcards** The `ltree` extension supports pattern matching using the `~` operator with a `lquery` pattern. The `lquery` syntax allows for wildcards and other pattern matching features: ```sql -- Find all laptop categories (using * wildcard) SELECT id, name, path FROM product_categories WHERE path ~ 'electronics.computers.laptops.*'; ``` This query returns: ```text | id | name | path | |----|------------------|----------------------------------------| | 4 | Gaming Laptops | electronics.computers.laptops.gaming | | 5 | Business Laptops | electronics.computers.laptops.business | ``` You can also use more complex patterns: ```sql -- Find categories that match a specific pattern -- * matches zero or more labels SELECT id, name, path FROM product_categories WHERE path ~ '*.*.ios' ``` This would match paths like `electronics.smartphones.ios`. 
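`lquery` also supports quantifiers, which constrain how many labels a `*` can match. As a sketch, `*{1}` matches exactly one label, so the following finds the categories exactly one level below `clothing`:

```sql
-- Direct children of the 'clothing' category
SELECT id, name, path
FROM product_categories
WHERE path ~ 'clothing.*{1}';
```

This would match `clothing.mens`, `clothing.womens`, and `clothing.childrens`, but not `clothing` itself or any deeper descendants.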
## Advanced ltree operations The `ltree` extension provides several advanced operations for working with hierarchical data: **Extracting subpaths** You can extract specific parts of an `ltree` path using the `subpath()` function: ```sql -- Extract the first two labels from the path SELECT id, name, subpath(path, 0, 2) AS subpath FROM product_categories WHERE path = 'electronics.computers.laptops.gaming'; ``` This query returns: ```text | id | name | subpath | |----|----------------|-----------------------| | 4 | Gaming Laptops | electronics.computers | ``` **Finding the least common ancestor** The `lca()` function finds the least common ancestor of a set of paths: ```sql -- Find the least common ancestor of gaming laptops and business laptops SELECT lca( 'electronics.computers.laptops.gaming'::ltree, 'electronics.computers.laptops.business'::ltree ) AS common_ancestor; ``` This query returns: ```text | common_ancestor | |-------------------------------| | electronics.computers.laptops | ``` **Calculating the distance between nodes** You can calculate the "distance" between two nodes in the tree: ```sql -- Calculate the distance between two categories SELECT nlevel('electronics.computers.laptops.gaming'::ltree) + nlevel('electronics.smartphones.android'::ltree) - 2 * nlevel(lca( 'electronics.computers.laptops.gaming'::ltree, 'electronics.smartphones.android'::ltree )) AS distance; ``` This query returns: ```text | distance | |----------| | 5 | ``` The distance is calculated as the sum of the levels of both paths minus twice the level of their least common ancestor. ## Indexing ltree data For efficient querying of `ltree` data, especially in large datasets, you should create appropriate indexes: ```sql -- Create a GiST index for ancestor/descendant queries CREATE INDEX idx_path_gist ON product_categories USING GIST (path); -- Create a B-tree index for equality queries CREATE INDEX idx_path_btree ON product_categories USING BTREE (path); ``` The GiST index is particularly useful for ancestor/descendant queries using the `@>` and `<@` operators, while the B-tree index is better for equality comparisons. ## Practical applications The `ltree` extension is useful in many real-world scenarios: 1. **Organization charts**: Representing company hierarchies with departments, teams, and employees 2. **File systems**: Modeling directory structures 3. **E-commerce categories**: As demonstrated in our example 4. **Taxonomies**: Biological classifications, knowledge categorization 5. **Menu structures**: Website navigation hierarchies 6. **Geographic hierarchies**: Continent > Country > State > City ## Conclusion The `ltree` extension provides a powerful way to store and query hierarchical data in Postgres. Its specialized data type and operators make it efficient to work with tree-like structures, offering significant advantages over traditional recursive queries or adjacency list models. By using `ltree`, you can simplify complex hierarchical data operations, improve query performance, and create more maintainable code for applications that deal with nested structures. 
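To make the comparison with recursive queries concrete: without `ltree`, the "all descendants" query from earlier typically requires a recursive CTE over an adjacency-list schema. A sketch of the equivalent, assuming a hypothetical `categories` table with a `parent_id` column:

```sql
-- Adjacency-list equivalent of: WHERE path <@ 'electronics.computers'
WITH RECURSIVE subtree AS (
    SELECT id, name
    FROM categories
    WHERE name = 'Computers'        -- start at the subtree root
    UNION ALL
    SELECT c.id, c.name
    FROM categories c
    JOIN subtree s ON c.parent_id = s.id  -- walk down one level per iteration
)
SELECT * FROM subtree;
```

The `ltree` version is a single indexable predicate, while the recursive form re-walks the tree on every query.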
## Resources

- [PostgreSQL ltree documentation](https://www.postgresql.org/docs/current/ltree.html)
- [Indexing strategies for ltree](https://www.postgresql.org/docs/current/ltree.html#id-1.11.7.31.7)

---

# Source: https://neon.com/llms/extensions-neon-utils.txt

# The neon_utils extension

> The document details the neon_utils extension, which enhances Neon's functionality by offering additional utilities for managing and optimizing database operations within the Neon environment.

## Source

- [The neon_utils extension HTML](https://neon.com/docs/extensions/neon-utils): The original HTML version of this documentation

The `neon_utils` extension provides a `num_cpus()` function you can use to monitor how Neon's _Autoscaling_ feature allocates vCPU in response to workload. The function returns the current number of allocated vCPUs.

For information about Neon's _Autoscaling_ feature, see [Autoscaling](https://neon.com/docs/introduction/autoscaling).

## Install the `neon_utils` extension

Install the `neon_utils` extension by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon.

```sql
CREATE EXTENSION neon_utils;
```

For information about using the Neon **SQL Editor**, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Use the `num_cpus()` function

In Neon, computing capacity is measured in _Compute Units (CU)_. One CU is 1 vCPU and 4 GB of RAM, 2 CU is 2 vCPU and 8 GB of RAM, and so on. The amount of RAM in GB is always 4 times the number of vCPU. A Neon compute can have anywhere from .25 to 56 CU, but _Autoscaling_ is only supported up to 16 CU. Defining a minimum and maximum compute size enables autoscaling. As your workload changes, computing capacity scales dynamically between the minimum and maximum settings defined in your compute configuration.

To retrieve the number of allocated vCPU at any point in time, you can run the following query:

```sql
SELECT num_cpus();
```

For autoscaling configuration instructions, see [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration).

## Limitations

The following limitations apply:

- The `num_cpus()` function does not return fractional vCPU sizes. The _Autoscaling_ feature can scale by fractional vCPU, but the `num_cpus()` function reports the next whole number. For example, if the current number of allocated vCPU is `.25` or `.5`, the `num_cpus()` function returns `1`.
- The `num_cpus()` function only works on computes that have the _Autoscaling_ feature enabled. Running the function on a fixed-size compute does not return a correct value.

## Observe autoscaling with `neon_utils` and `pgbench`

The following instructions demonstrate how you can use the `num_cpus()` function with `pgbench` to observe how Neon's _Autoscaling_ feature responds to workload.

### Prerequisites

- Ensure that autoscaling is enabled for your compute. For instructions, see [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration). The following example uses a minimum setting of 0.25 Compute Units (CU) and a maximum of 4.
- The [pgbench](https://www.postgresql.org/docs/current/pgbench.html) utility.

### Run the test

1.
Install the `neon_utils` extension: ```sql CREATE EXTENSION IF NOT EXISTS neon_utils; ``` 2. Create a `test.sql` file with the following queries: ```sql SELECT LOG(factorial(5000)) / LOG(factorial(2500)); SELECT txid_current(); ``` 3. To avoid errors when running `pgbench`, initialize your database with the tables used by `pgbench`. This can be done using the `pgbench -i` command, specifying the connection string for your Neon database. You can obtain a connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. ```bash pgbench -i postgresql://[user]:[password]@[neon_hostname]/[dbname] ``` 4. Run a `pgbench` test with your `test.sql` file, specifying your connection string: ```bash pgbench -f test.sql -c 15 -T 1000 -P 1 postgresql://[user]:[password]@[neon_hostname]/[dbname] ``` The test produces output similar to the following on a compute set to scale from 0.25 to 4 CUs. ```bash pgbench (15.3) starting vacuum...end. progress: 8.4 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed progress: 9.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed progress: 10.0 s, 4.0 tps, lat 1246.290 ms stddev 3.253, 0 failed progress: 11.0 s, 6.0 tps, lat 1892.455 ms stddev 446.686, 0 failed progress: 12.0 s, 9.0 tps, lat 2091.352 ms stddev 1068.303, 0 failed progress: 13.0 s, 5.0 tps, lat 1881.682 ms stddev 700.852, 0 failed progress: 14.0 s, 6.0 tps, lat 2660.009 ms stddev 1404.672, 0 failed progress: 15.0 s, 9.0 tps, lat 2354.776 ms stddev 1248.686, 0 failed progress: 16.0 s, 8.0 tps, lat 1770.870 ms stddev 776.465, 0 failed progress: 17.0 s, 7.0 tps, lat 1800.686 ms stddev 611.749, 0 failed progress: 18.0 s, 18.0 tps, lat 1681.841 ms stddev 1187.918, 0 failed progress: 19.0 s, 29.0 tps, lat 561.201 ms stddev 139.565, 0 failed progress: 20.0 s, 27.0 tps, lat 507.782 ms stddev 153.889, 0 failed progress: 21.0 s, 30.0 tps, lat 493.312 ms stddev 121.688, 0 failed progress: 22.0 s, 32.0 tps, lat 513.444 ms stddev 185.033, 0 failed progress: 23.0 s, 32.0 tps, lat 503.135 ms stddev 199.435, 0 failed progress: 24.0 s, 28.0 tps, lat 492.913 ms stddev 124.019, 0 failed progress: 25.0 s, 43.0 tps, lat 366.719 ms stddev 123.547, 0 failed progress: 26.0 s, 49.0 tps, lat 334.276 ms stddev 79.043, 0 failed progress: 27.0 s, 40.0 tps, lat 354.922 ms stddev 83.560, 0 failed progress: 28.0 s, 31.0 tps, lat 400.645 ms stddev 29.236, 0 failed progress: 29.0 s, 48.0 tps, lat 373.522 ms stddev 64.446, 0 failed progress: 30.0 s, 44.0 tps, lat 333.343 ms stddev 86.497, 0 failed progress: 31.0 s, 44.0 tps, lat 326.754 ms stddev 82.990, 0 failed progress: 32.0 s, 44.0 tps, lat 329.317 ms stddev 76.728, 0 failed progress: 33.0 s, 53.0 tps, lat 321.572 ms stddev 76.427, 0 failed progress: 34.0 s, 57.0 tps, lat 254.500 ms stddev 33.013, 0 failed progress: 35.0 s, 60.0 tps, lat 251.035 ms stddev 37.574, 0 failed progress: 36.0 s, 58.0 tps, lat 256.846 ms stddev 36.390, 0 failed progress: 37.0 s, 60.0 tps, lat 249.165 ms stddev 36.764, 0 failed progress: 38.0 s, 57.0 tps, lat 263.885 ms stddev 31.351, 0 failed progress: 39.0 s, 56.0 tps, lat 262.529 ms stddev 43.900, 0 failed progress: 40.0 s, 58.0 tps, lat 259.052 ms stddev 39.737, 0 failed ... ``` 5. Call the `num_cpus()` function to retrieve the current number of allocated vCPU. 
```sql
neondb=> SELECT num_cpus();
 num_cpus
----------
        4
(1 row)
```

---

# Source: https://neon.com/llms/extensions-neon.txt

# The neon extension

> The document details the installation and usage of the Neon extension, which enhances PostgreSQL with features like branching and time travel, specifically designed for Neon database users.

## Source

- [The neon extension HTML](https://neon.com/docs/extensions/neon): The original HTML version of this documentation

The `neon` extension provides functions and views designed to gather Neon-specific metrics.

- [The `neon_stat_file_cache` view](https://neon.com/docs/extensions/neon#the-neon_stat_file_cache-view)
- [Views for Neon internal use](https://neon.com/docs/extensions/neon#views-for-neon-internal-use)

## The neon_stat_file_cache view

The `neon_stat_file_cache` view provides insights into how effectively your Neon compute's Local File Cache (LFC) is being used.

## What is the Local File Cache?

Neon computes have a Local File Cache (LFC), which is a layer of caching that stores frequently accessed data in the local memory of the Neon compute. Like Postgres [shared buffers](https://neon.com/docs/reference/glossary#shared-buffers), the LFC reduces latency and improves query performance by minimizing the need to fetch data from Neon storage. The LFC acts as an add-on or extension of Postgres shared buffers. In Neon computes, the `shared_buffers` parameter [scales with compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). The LFC extends the cache memory to approximately 75% of your compute's RAM. To view the LFC size for each Neon compute size, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute). When data is requested, Postgres checks shared buffers first, then the LFC. If the requested data is not found in the LFC, it is read from Neon storage. Shared buffers and the LFC both cache your most recently accessed data, but they may not cache exactly the same data due to different cache eviction patterns. The LFC is also much larger than shared buffers, so it stores significantly more data.

## Monitoring Local File Cache usage

You can monitor Local File Cache (LFC) usage by installing the `neon` extension on your database and querying the [neon_stat_file_cache](https://neon.com/docs/) view or [using EXPLAIN ANALYZE](https://neon.com/docs/extensions/neon#view-lfc-metrics-with-explain-analyze). Additionally, you can monitor the [Local file cache hit rate](https://neon.com/docs/introduction/monitoring-page#local-file-cache-hit-rate) graph on the **Monitoring** page in the Neon console.

## neon_stat_file_cache view

The `neon_stat_file_cache` view includes the following metrics:

- `file_cache_misses`: The number of times the requested page block is not found in Postgres shared buffers or the LFC. In this case, the page block is retrieved from Neon storage.
- `file_cache_hits`: The number of times the requested page block was not found in Postgres shared buffers but was found in the LFC.
- `file_cache_used`: The number of times the LFC was accessed.
- `file_cache_writes`: The number of writes to the LFC. A write occurs when a requested page block is not found in Postgres shared buffers or the LFC. In this case, the data is retrieved from Neon storage and then written to shared buffers and the LFC.
- `file_cache_hit_ratio`: The percentage of database requests that are served from the LFC rather than Neon storage.
This is a measure of cache efficiency, indicating how often requested data is found in the cache. A higher cache hit ratio suggests better performance, as accessing data from memory is faster than accessing data from storage. The ratio is calculated using the following formula:

```
file_cache_hit_ratio = (file_cache_hits / (file_cache_hits + file_cache_misses)) * 100
```

For OLTP workloads, you should aim for a cache hit ratio of 99% or better. However, the ideal cache hit ratio depends on your specific workload and data access patterns. In some cases, a slightly lower ratio might still be acceptable, especially if the workload involves a lot of sequential scanning of large tables, where caching might be less effective. If you find that your cache hit ratio is quite low, your working set may not fit fully in memory. In this case, consider using a larger compute with more memory. Please keep in mind that the statistics are for the entire compute, not specific databases or tables.

### Using the neon_stat_file_cache view

To use the `neon_stat_file_cache` view, install the `neon` extension on your database:

```sql
CREATE EXTENSION neon;
```

Next, connect to your database. You can find a connection string for your database on the Neon Dashboard.

```bash
psql 'postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require'
```

Issue the following query to view LFC usage data for your compute:

```sql
SELECT * FROM neon_stat_file_cache;

 file_cache_misses | file_cache_hits | file_cache_used | file_cache_writes | file_cache_hit_ratio
-------------------+-----------------+-----------------+-------------------+----------------------
           2133643 |       108999742 |             607 |          10767410 |                98.08
```

**Note**: Local File Cache statistics represent the lifetime of your compute, from the last time the compute started until the time you ran the query. Be aware that statistics are lost when your compute stops and gathered again from scratch when your compute restarts. You'll only want to run the cache hit ratio query after a representative workload has been run. For example, say that you increased your compute size after seeing a cache hit ratio below 99%. Changing the compute size restarts your compute, so you lose all of your current usage statistics. In this case, you should run your workload before you try the cache hit ratio query again to see if your cache hit ratio improved. Remember that Postgres checks shared buffers first before it checks your compute's Local File Cache. If you are only working with a small amount of data, queries may be served entirely from the shared buffers, resulting in no LFC hits.

## View LFC metrics with EXPLAIN ANALYZE

You can also use `EXPLAIN ANALYZE` with the `FILECACHE` and `PREFETCH` options to view LFC cache hit and miss data, as well as prefetch statistics. Installing the `neon` extension is not required.
For example:

```sql
EXPLAIN (ANALYZE,BUFFERS,PREFETCH,FILECACHE) SELECT COUNT(*) FROM pgbench_accounts;

 Finalize Aggregate (cost=214486.94..214486.95 rows=1 width=8) (actual time=5195.378..5196.034 rows=1 loops=1)
   Buffers: shared hit=178875 read=143691 dirtied=128597 written=127346
   Prefetch: hits=0 misses=1865 expired=0 duplicates=0
   File cache: hits=141826 misses=1865
   -> Gather (cost=214486.73..214486.94 rows=2 width=8) (actual time=5195.366..5196.025 rows=3 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        Buffers: shared hit=178875 read=143691 dirtied=128597 written=127346
        Prefetch: hits=0 misses=1865 expired=0 duplicates=0
        File cache: hits=141826 misses=1865
        -> Partial Aggregate (cost=213486.73..213486.74 rows=1 width=8) (actual time=5187.670..5187.670 rows=1 loops=3)
             Buffers: shared hit=178875 read=143691 dirtied=128597 written=127346
             Prefetch: hits=0 misses=1865 expired=0 duplicates=0
             File cache: hits=141826 misses=1865
             -> Parallel Index Only Scan using pgbench_accounts_pkey on pgbench_accounts (cost=0.43..203003.02 rows=4193481 width=0) (actual time=0.574..4928.995 rows=3333333 loops=3)
                  Heap Fetches: 3675286
                  Buffers: shared hit=178875 read=143691 dirtied=128597 written=127346
                  Prefetch: hits=0 misses=1865 expired=0 duplicates=0
                  File cache: hits=141826 misses=1865
```

### PREFETCH option

The `PREFETCH` option provides information about Neon's prefetching mechanism, which predicts which pages will be needed soon and sends prefetch requests to the page server before the page is actually requested by the executor. This helps reduce latency by having data ready when it's needed. The PREFETCH option includes the following metrics:

- `hits` - Number of pages received from the page server before actually requested by the executor. Prefetch distance is controlled by the `effective_io_concurrency` parameter. The larger this value, the more likely the page server will complete the request before it's needed. However, it should not be larger than `neon.prefetch_buffer_size`.
- `misses` - Number of accessed pages that were not prefetched. Prefetch is not implemented for all plan nodes, and even for supported nodes (like sequential scan), some mispredictions can occur.
- `expired` - Pages that were updated since the prefetch request was sent, or results that weren't used because the executor didn't need the page (for example, due to a `LIMIT` clause in the query).
- `duplicates` - Multiple prefetch requests for the same page. For some nodes like sequential scan, predicting next pages is straightforward. However, for index scans that prefetch referenced heap pages, index entries can have multiple references to the same heap page, resulting in duplicate prefetch requests.

### FILECACHE option

The `FILECACHE` option provides information about the Local File Cache (LFC) usage during query execution:

- `hits` - Number of accessed pages found in the LFC.
- `misses` - Number of accessed pages not found in the LFC.

## Views for Neon internal use

The `neon` extension is installed by default to a system-owned `postgres` database in each Neon project. The `postgres` database includes functions and views owned by the Neon system role (`cloud_admin`) that are used to collect statistics. This data helps the Neon team enhance the Neon service.
**Views**: ```sql postgres=> \dv List of relations Schema | Name | Type | Owner --------+----------------------------+------+------------- public | local_cache | view | cloud_admin public | neon_backend_perf_counters | view | cloud_admin public | neon_lfc_stats | view | cloud_admin public | neon_perf_counters | view | cloud_admin public | neon_stat_file_cache | view | cloud_admin ``` **Functions**: ```sql postgres=> \df List of functions Schema | Name | Result data type | Argument data types | Type --------+--------------------------------------+------------------+-------------------------------------------------------------------------------------------------------------+------ public | approximate_working_set_size | integer | reset boolean | func public | approximate_working_set_size_seconds | integer | duration integer DEFAULT NULL::integer | func public | backpressure_lsns | record | OUT received_lsn pg_lsn, OUT disk_consistent_lsn pg_lsn, OUT remote_consistent_lsn pg_lsn | func public | backpressure_throttling_time | bigint | | func public | get_backend_perf_counters | SETOF record | | func public | get_local_cache_state | bytea | max_chunks integer DEFAULT NULL::integer | func public | get_perf_counters | SETOF record | | func public | get_prewarm_info | record | OUT total_pages integer, OUT prewarmed_pages integer, OUT skipped_pages integer, OUT active_workers integer | func public | local_cache_pages | SETOF record | | func public | neon_get_lfc_stats | SETOF record | | func public | pg_cluster_size | bigint | | func public | prewarm_local_cache | void | state bytea, n_workers integer DEFAULT 1 | func ``` --- # Source: https://neon.com/llms/extensions-online_advisor.txt # The online_advisor extension > The document details the "online_advisor" extension for Neon, which assists users in optimizing database performance by providing real-time recommendations and insights. ## Source - [The online_advisor extension HTML](https://neon.com/docs/extensions/online_advisor): The original HTML version of this documentation The `online_advisor` extension recommends **indexes**, **extended statistics**, and **prepared statements** based on your actual query workload. It uses the same executor hook mechanism as [`auto_explain`](https://www.postgresql.org/docs/current/auto-explain.html) to collect and analyze execution data. ## What it does - Suggests **indexes** when queries filter many rows - Suggests **extended statistics** when the planner's row estimates are far off from actuals - Identifies queries that could benefit from **prepared statements** when planning time is high **Note**: `online_advisor` only makes recommendations. It does not create indexes or statistics for you. ## Requirements - Supported on Postgres **17** - Create the extension in every database you want to inspect - Activate it by calling any provided function (for example, `get_executor_stats()`) ### Version availability Please refer to the [Supported Postgres extensions](https://neon.com/docs/extensions/pg-extensions) page for the latest supported version of `online_advisor` on Neon. ## Enable the online_advisor extension You can create the extension in each target database using `CREATE EXTENSION`: ```sql CREATE EXTENSION online_advisor; ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). 
## Start collecting recommendations 1. Activate the extension by calling any function: ```sql SELECT get_executor_stats(); ``` 2. Run your workload to collect data. 3. View recommendations: ```sql -- Proposed indexes SELECT * FROM proposed_indexes ORDER BY elapsed_sec DESC; -- Proposed extended statistics SELECT * FROM proposed_statistics ORDER BY elapsed_sec DESC; ``` ## Apply accepted recommendations Run the `create_index` or `create_statistics` statement from the views, then analyze the table: ```sql CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_date ON orders(customer_id, order_date); VACUUM (ANALYZE) orders; ``` ## Configure thresholds You can tune `online_advisor` with these settings: | Setting | Default | Description | | ---------------------------------------- | ------- | ---------------------------------------------------------------------------------------- | | `online_advisor.filtered_threshold` | `1000` | Minimum filtered rows in a node to suggest an index. | | `online_advisor.misestimation_threshold` | `10` | Minimum actual/estimated row ratio to flag misestimation. | | `online_advisor.min_rows` | `1000` | Minimum returned rows before misestimation is considered. | | `online_advisor.max_index_proposals` | `1000` | Max tracked clauses for index proposals (system-level, read-only on Neon). | | `online_advisor.max_stat_proposals` | `1000` | Max tracked clauses for extended statistics proposals (system-level, read-only on Neon). | | `online_advisor.do_instrumentation` | `on` | Toggle data collection. | | `online_advisor.log_duration` | `off` | Log planning/execution time for each query. | | `online_advisor.prepare_threshold` | `1.0` | Planning/execution time ratio above which to suggest prepared statements. | **Note**: On Neon, you can only modify session-level settings using `SET`. System-level settings like `online_advisor.max_index_proposals` and `online_advisor.max_stat_proposals` use default values and cannot be changed. If you need different system-level settings, reach out to Neon Support. Change a setting for the current session: ```sql SET online_advisor.filtered_threshold = 2000; ``` ## Check planning and execution stats Use `get_executor_stats()` to see planning and execution times and whether prepared statements might help: ```sql SELECT * FROM get_executor_stats(false); -- false = do not reset counters ``` Look at `avg_planning_overhead`. Values greater than `1` suggest that some queries would benefit from prepared statements. ## Combine or separate index proposals By default, `online_advisor` tries to combine related predicates into a single compound index. To view separate recommendations for each predicate: ```sql SELECT * FROM propose_indexes(false); ``` ## Limitations - Does not check operator ordering for compound indexes - Does not suggest indexes for joins or `ORDER BY` clauses - Does not estimate the benefit of adding an index — pair with [HypoPG](https://github.com/HypoPG/hypopg#) if you want to simulate usage - Recommendations are per database ## Remove the extension ```sql DROP EXTENSION IF EXISTS online_advisor; ``` If you're not using it anywhere, remove it from `shared_preload_libraries` and restart. 
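Picking up the HypoPG suggestion from the limitations above: since `online_advisor` doesn't estimate how much a proposed index would help, one option is to test a proposal hypothetically before building it. A minimal sketch, assuming the `hypopg` extension is installed and reusing the `orders` example from earlier:

```sql
-- Register a hypothetical (metadata-only) version of a proposed index
CREATE EXTENSION IF NOT EXISTS hypopg;

SELECT * FROM hypopg_create_index(
  'CREATE INDEX ON orders (customer_id, order_date)'
);

-- Plain EXPLAIN (without ANALYZE) shows whether the planner would use it
EXPLAIN SELECT * FROM orders
WHERE customer_id = 42
  AND order_date > now() - interval '30 days';

-- Discard all hypothetical indexes when done
SELECT hypopg_reset();
```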
## Example workflow ```sql -- Activate and run workload SELECT get_executor_stats(); -- View index proposals SELECT create_index, n_filtered, n_called, elapsed_sec FROM proposed_indexes ORDER BY elapsed_sec DESC LIMIT 10; -- View extended statistics proposals SELECT create_statistics, misestimation, n_called, elapsed_sec FROM proposed_statistics ORDER BY misestimation DESC LIMIT 10; -- Apply a recommendation CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_date ON orders(customer_id, order_date); VACUUM (ANALYZE) orders; -- Check planning/execution times SELECT * FROM get_executor_stats(true); -- reset after reading ``` ## Resources - [online_advisor GitHub repository](https://github.com/knizhnik/online_advisor) - [PostgreSQL auto_explain documentation](https://www.postgresql.org/docs/current/auto-explain.html) --- # Source: https://neon.com/llms/extensions-pg-extensions.txt # Supported Postgres extensions > The document lists and describes the PostgreSQL extensions supported by Neon, detailing compatibility and usage specifics for each extension within Neon's database environment. ## Source - [Supported Postgres extensions HTML](https://neon.com/docs/extensions/pg-extensions): The original HTML version of this documentation Neon supports the Postgres extensions shown in the following table. The supported version of the extension sometimes differs by Postgres version. A dash (`-`) indicates that an extension is not yet supported. **Need an extension we don't have?** 📩 [Request an extension](https://neon.com/docs/extensions/pg-extensions#request-an-extension) | Extension | PG14 | PG15 | PG16 | PG17 | PG18 | Notes | | :----------------------------------------------------------------------------------------------- | ------: | ------: | ------: | ------: | -----: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | [address_standardizer](https://postgis.net/docs/Extras.html#Address_Standardizer) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | | | [address_standardizer_data_us](https://postgis.net/docs/Extras.html#Address_Standardizer_Tables) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | | | [anon](https://neon.com/docs/extensions/postgresql-anonymizer) | 2.4.1 | 2.4.1 | 2.4.1 | 2.4.1 | 2.4.1 | | | [autoinc (spi)](https://www.postgresql.org/docs/current/contrib-spi.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | | | [bloom](https://www.postgresql.org/docs/16/bloom.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | | | [btree_gin](https://neon.com/docs/extensions/btree_gin) | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | | | [btree_gist](https://neon.com/docs/extensions/btree_gist) | 1.6 | 1.7 | 1.7 | 1.7 | 1.8 | | | [citext](https://neon.com/docs/extensions/citext) | 1.6 | 1.6 | 1.6 | 1.6 | 1.8 | | | [cube](https://neon.com/docs/extensions/cube) | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | | | [dblink](https://neon.com/docs/extensions/dblink) | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | | | [dict_int](https://neon.com/docs/extensions/dict_int) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | | | [earthdistance](https://neon.com/docs/extensions/earthdistance) | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | To use `earthdistance`, you first need to install its dependency, the [`cube` 
extension](https://neon.com/docs/extensions/cube). Run: `CREATE EXTENSION IF NOT EXISTS cube; CREATE EXTENSION IF NOT EXISTS earthdistance;` |
| [fuzzystrmatch](https://neon.com/docs/extensions/fuzzystrmatch) | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | |
| [h3](https://neon.com/docs/extensions/postgis-related-extensions#h3-and-h3-postgis) | 4.1.3 | 4.1.3 | 4.1.3 | 4.1.3 | 4.2.3 | Some components have been split out into the `h3_postgis` extension. Install both the `h3` and `h3_postgis` extensions. |
| [h3_postgis](https://neon.com/docs/extensions/postgis-related-extensions#h3-and-h3-postgis) | 4.1.3 | 4.1.3 | 4.1.3 | 4.1.3 | 4.2.3 | Install with `CREATE EXTENSION h3_postgis CASCADE;` (requires `postgis` and `postgis_raster`) |
| [hll](https://github.com/citusdata/postgresql-hll) | 2.19 | 2.19 | 2.19 | 2.19 | 2.19 | |
| [hstore](https://neon.com/docs/extensions/hstore) | 1.8 | 1.8 | 1.8 | 1.8 | 1.8 | |
| [hypopg](https://hypopg.readthedocs.io/en/rel1_stable/) | 1.4.2 | 1.4.2 | 1.4.2 | 1.4.2 | 1.4.2 | |
| [insert_username (spi)](https://www.postgresql.org/docs/current/contrib-spi.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [intagg](https://www.postgresql.org/docs/16/intagg.html) | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | |
| [intarray](https://neon.com/docs/extensions/intarray) | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | |
| [ip4r](https://github.com/RhodiumToad/ip4r) | 2.4 | 2.4 | 2.4 | 2.4 | 2.4 | |
| [isn](https://www.postgresql.org/docs/16/isn.html) | 1.2 | 1.2 | 1.2 | 1.2 | 1.3 | |
| [lo](https://www.postgresql.org/docs/16/lo.html) | 1.1 | 1.1 | 1.1 | 1.1 | 1.2 | |
| [ltree](https://neon.com/docs/extensions/ltree) | 1.2 | 1.2 | 1.2 | 1.3 | 1.3 | |
| [moddatetime (spi)](https://www.postgresql.org/docs/current/contrib-spi.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [neon](https://neon.com/docs/extensions/neon) | 1.9 | 1.9 | 1.9 | 1.9 | 1.9 | |
| [neon_utils](https://neon.com/docs/extensions/neon-utils) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [online_advisor](https://neon.com/docs/extensions/online_advisor) | - | - | - | 1.0 | - | |
| [pg_cron](https://neon.com/docs/extensions/pg_cron) | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 | To install `pg_cron`, it must first be enabled. See [Enable the pg_cron extension](https://neon.com/docs/extensions/pg_cron#enable-the-pgcron-extension) for instructions. Please note that `pg_cron` jobs will only run when your compute is active. We therefore recommend only using `pg_cron` on computes that run 24/7 or where you have disabled [scale to zero](https://neon.com/docs/introduction/scale-to-zero). |
| [pg_graphql](https://neon.com/docs/extensions/pg_graphql) | 1.5.11 | 1.5.11 | 1.5.11 | 1.5.11 | - | |
| [pg_hashids](https://github.com/iCyberon/pg_hashids) | 1.2.1 | 1.2.1 | 1.2.1 | 1.2.1 | 1.2.1 | |
| [pg_hint_plan](https://github.com/ossc-db/pg_hint_plan) | 1.4.1 | 1.5.0 | 1.6.0 | 1.7.0 | 1.8.0 | |
| [pg_ivm](https://github.com/sraoss/pg_ivm) | 1.9 | 1.9 | 1.9 | 1.9 | 1.12 | The `create_immv` function is created in the Postgres `public` schema by default, not the `pg_ivm` schema. In this case, run `SELECT create_immv()` instead of `SELECT pg_ivm.create_immv()`. |
| [pg_jsonschema](https://github.com/supabase/pg_jsonschema) | 0.3.3 | 0.3.3 | 0.3.3 | 0.3.3 | - | |
| [pg_mooncake](https://neon.com/docs/extensions/pg_mooncake) | 0.1.2 | 0.1.2 | 0.1.2 | 0.1.2 | - | This extension is **experimental**. Using a separate, dedicated Neon project is recommended. Run `SET neon.allow_unstable_extensions='true';` before installing.
See the [YouTube demo](https://youtu.be/QDNsxw_3ris?feature=shared&t=2048) and the [pg_mooncake documentation](https://pgmooncake.com/docs). |
| [pg_partman](https://github.com/pgpartman/pg_partman) | 5.1.0 | 5.1.0 | 5.1.0 | 5.1.0 | 5.1.0 | |
| [pg_prewarm](https://neon.com/docs/extensions/pg_prewarm) | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | |
| [pg_repack](https://neon.com/docs/extensions/pg_repack) | 1.5.2 | 1.5.2 | 1.5.2 | 1.5.2 | 1.5.2 | Available only on paid Neon plans. To install `pg_repack`, it must first be enabled by Neon Support. [Open a support ticket](https://console.neon.tech/app/projects?modal=support) with your endpoint ID and database name to request it. After it's enabled, you'll need to restart your compute before running `CREATE EXTENSION pg_repack;`. To use `pg_repack`, you will need to [install the pg_repack CLI](https://reorg.github.io/pg_repack/#download). |
| [pg_roaringbitmap](https://github.com/ChenHuajun/pg_roaringbitmap) | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | Install with `CREATE EXTENSION roaringbitmap;` |
| [pg_session_jwt](https://neon.com/docs/data-api/get-started) | 0.3.1 | 0.3.1 | 0.3.1 | 0.3.1 | 0.3.1 | This extension provides JWT session management functionality used by the [Data API](https://neon.com/docs/data-api/get-started). |
| [pg_stat_statements](https://neon.com/docs/extensions/pg_stat_statements) | 1.9 | 1.10 | 1.10 | 1.11 | 1.12 | The [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role has `EXECUTE` privilege on the `pg_stat_statements_reset()` function. |
| [pg_tiktoken](https://neon.com/docs/extensions/pg_tiktoken) | 0.0.1 | 0.0.1 | 0.0.1 | 0.0.1 | 0.0.1 | |
| [pg_trgm](https://neon.com/docs/extensions/pg_trgm) | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 | |
| [pg_uuidv7](https://neon.com/docs/extensions/pg_uuidv7) | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 | |
| [pgcrypto](https://neon.com/docs/extensions/pgcrypto) | 1.3 | 1.3 | 1.3 | 1.3 | 1.4 | |
| [pgjwt](https://github.com/michelp/pgjwt) | 0.2.0 | 0.2.0 | 0.2.0 | 0.2.0 | 0.2.0 | |
| [pgrag](https://neon.com/docs/extensions/pgrag) | 0.0.0 | 0.0.0 | 0.0.0 | 0.0.0 | 0.0.0 | This extension is **experimental**. Using a separate, dedicated Neon project is recommended. Run `SET neon.allow_unstable_extensions='true';` before installing. |
| [pgrouting](https://neon.com/docs/extensions/postgis-related-extensions#pgrouting) | 3.4.2 | 3.4.2 | 3.4.2 | 3.6.2 | 3.8.0 | The PostGIS extension must be installed first. |
| [pgrowlocks](https://neon.com/docs/extensions/pgrowlocks) | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | |
| [pgstattuple](https://neon.com/docs/extensions/pgstattuple) | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | |
| [pgtap](https://pgtap.org/documentation.html) | 1.3.3 | 1.3.3 | 1.3.3 | 1.3.3 | 1.3.3 | |
| [pgvector](https://neon.com/docs/extensions/pgvector) | 0.8.0 | 0.8.0 | 0.8.0 | 0.8.0 | 0.8.1 | Install with `CREATE EXTENSION vector;` |
| [pg_search](https://neon.com/docs/extensions/pg_search) | 0.15.26 | 0.15.26 | 0.15.26 | 0.15.26 | - | Install with `CREATE EXTENSION pg_search;` on Postgres 17. |
| [pgx_ulid](https://github.com/pksunkara/pgx_ulid) | 0.1.5 | 0.1.5 | 0.1.5 | 0.2.0 | - | Install with `CREATE EXTENSION ulid;` on Postgres 14, 15, 16. Install with `CREATE EXTENSION pgx_ulid;` on Postgres 17. |
| [plcoffee](https://coffeescript.org/) | 3.1.10 | 3.1.10 | 3.1.10 | - | - | |
| [plls](https://livescript.net/) | 3.1.10 | 3.1.10 | 3.1.10 | - | - | |
| [plpgsql](https://www.postgresql.org/docs/16/plpgsql.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | Pre-installed with Postgres. |
| [plpgsql_check](https://pgxn.org/dist/plpgsql_check/) | 2.8.2 | 2.8.2 | 2.8.2 | 2.8.2 | 2.8.2 | |
| [plv8](https://github.com/plv8/plv8) | 3.1.10 | 3.1.10 | 3.1.10 | 3.2.3 | - | |
| [postgis](https://neon.com/docs/extensions/postgis) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | |
| [postgis_raster](https://postgis.net/docs/RT_reference.html) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | |
| [postgis_sfcgal](https://neon.com/docs/extensions/postgis-related-extensions#postgis-sfcgal) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | |
| [postgis_tiger_geocoder](https://neon.com/docs/extensions/postgis-related-extensions#postgis-tiger-geocoder) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | Cannot be installed using the Neon SQL Editor. Use your `psql` user credentials to install this extension. |
| [postgis_topology](https://www.postgis.net/docs/Topology.html) | 3.3.3 | 3.3.3 | 3.3.3 | 3.5.0 | 3.6.0 | |
| [postgres_fdw](https://neon.com/docs/extensions/postgres_fdw) | 1.1 | 1.1 | 1.1 | 1.1 | 1.2 | |
| [prefix](https://github.com/dimitri/prefix) | 1.2.0 | 1.2.0 | 1.2.0 | 1.2.0 | 1.2.0 | |
| [rdkit](https://github.com/rdkit/rdkit) | 4.3.0 | 4.3.0 | 4.3.0 | 4.6.0 | 4.8.0 | |
| [refint (spi)](https://www.postgresql.org/docs/current/contrib-spi.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [rum](https://github.com/postgrespro/rum) | 1.3 | 1.3 | 1.3 | 1.3 | - | |
| [seg](https://www.postgresql.org/docs/16/seg.html) | 1.4 | 1.4 | 1.4 | 1.4 | 1.4 | |
| [semver](https://pgxn.org/dist/semver) | 0.32.1 | 0.32.1 | 0.32.1 | 0.40.0 | 0.40.0 | |
| [tablefunc](https://neon.com/docs/extensions/tablefunc) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [tcn](https://www.postgresql.org/docs/16/tcn.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [timescaledb](https://neon.com/docs/extensions/timescaledb) | 2.10.1 | 2.10.1 | 2.13.0 | 2.17.1 | - | Only Apache-2 licensed features are supported. Compression is not supported. |
| [tsm_system_rows](https://www.postgresql.org/docs/16/tsm-system-rows.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [tsm_system_time](https://www.postgresql.org/docs/16/tsm-system-time.html) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | |
| [unaccent](https://neon.com/docs/extensions/unaccent) | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | |
| [unit](https://github.com/df7cb/postgresql-unit) | 7 | 7 | 7 | 7 | - | |
| [uuid-ossp](https://neon.com/docs/extensions/uuid-ossp) | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | Double-quote the extension name when installing: `CREATE EXTENSION "uuid-ossp"` |
| [wal2json](https://neon.com/docs/extensions/wal2json) | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 | `CREATE EXTENSION` not required. This decoder plugin is available by default but requires enabling [logical replication](https://neon.com/docs/guides/logical-replication-guide) in Neon. |
| [xml2](https://neon.com/docs/extensions/xml2) | 1.1 | 1.1 | 1.1 | 1.1 | 1.2 | |

## Install an extension

Unless otherwise noted, supported extensions can be installed using [CREATE EXTENSION](https://www.postgresql.org/docs/16/sql-createextension.html) syntax.

```sql
CREATE EXTENSION <extension_name>;
```

You can install extensions from the Neon SQL Editor or from a client such as `psql` that permits running SQL queries. For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Update an extension version

Neon updates supported extensions as new versions become available.
Version updates are communicated in the [Changelog](https://neon.com/docs/changelog). To check the current version of extensions you have installed, query the `pg_extension` table:

```sql
SELECT * FROM pg_extension;
```

You can update an extension to the latest version using `ALTER EXTENSION <extension_name> UPDATE TO '<new_version>'` syntax. For example:

```sql
ALTER EXTENSION vector UPDATE TO '0.7.0';
```

**Important**: When Neon releases a new extension or new extension version, a compute restart is required to make the new extension or extension version available for installation or update. A compute restart may occur on its own due to Neon's default [scale to zero](https://neon.com/docs/introduction/scale-to-zero) behavior. However, if your compute never restarts because you disabled scale to zero or because your compute is constantly active, you may need to force a restart. To force a restart, you can issue a [Restart endpoint](https://api-docs.neon.tech/reference/restartprojectendpoint) API call. Please be aware that restarting a compute temporarily interrupts any connections currently using the compute.

Extension installations and updates are automatically applied to any read replica computes on the same branch as your primary compute the next time the read replica compute restarts.

## Request an extension

_We appreciate all extension requests. While we can't guarantee support, we regularly review requests and prioritize them based on factors like user demand, popularity in the Postgres ecosystem, and Neon's product direction. Some extensions are simple to add, while others require significant integration work._

## Custom-built extensions

For [Scale](https://neon.com/docs/introduction/plans) plan customers, Neon supports custom-built Postgres extensions for exclusive use with your Neon account. If you developed your own Postgres extension and want to use it with Neon, please reach out to us as described above. Please include the following information in your request:

- A repository link or archive file containing the source code for your extension
- A description of what the extension does, instructions for compiling it, and any prerequisites
- Whether an NDA or licensing agreement is necessary for Neon to provide support for your extension

Please keep in mind that certain restrictions may apply with respect to Postgres privileges and local file system access. Additionally, Neon features such as _Autoscaling_ and _Scale to Zero_ may limit the types of extensions we can support. Depending on the nature of your extension, Neon may also request a liability waiver. Custom-built extensions are not yet supported for Neon projects provisioned on Azure.

## Extension support

Neon supports a large number of Postgres extensions. When we say an extension is "supported," we mean that it's available for you to enable and use in your project. We don't actively maintain third-party extension code. If you run into an issue or discover a bug with an extension, we recommend reporting it to the extension's upstream maintainers. If a fix is released, we're happy to update to the latest version of the extension. For the extension versions that Neon supports, refer to the [Supported extensions table](https://neon.com/docs/extensions/pg-extensions) above. You can request support for a new version of an extension by opening a [support ticket](https://console.neon.tech/app/projects?modal=support) or by reaching out to us on [Discord](https://discord.com/invite/92vNTzKDGp).
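As a convenience when checking versions, you can also query the standard `pg_available_extensions` catalog view to see which of your installed extensions have a newer version available on your current compute:

```sql
-- Installed extensions where the available default version is newer
SELECT name, installed_version, default_version
FROM pg_available_extensions
WHERE installed_version IS NOT NULL
  AND installed_version <> default_version;
```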
## Experimental extensions

Neon occasionally adds support for extensions that are in early stages of development or undergoing validation. These extensions require an explicit opt-in and are not recommended for production use. To run these extensions, you must configure the following session variable before installing the extension:

```sql
SET neon.allow_unstable_extensions = 'true';
```

**Note**: "Unstable" doesn't mean the extension is buggy. It means that we have not yet met service level expectations for the extension, often related to testing and Neon integration requirements.

**Things to know about experimental extensions:**

- **Use with caution:** We recommend trying experimental extensions in a separate project—not in the Neon project you use for production.
- **Limited support:** Experimental extensions aren't covered by Neon support. If an extension causes your database to fail or prevents it from starting, we'll help you disable it if possible—but we can't guarantee more than that.
- **No guarantees:** An experimental extension might never become fully supported. It could require significant work from Neon or the extension's maintainers before it's ready for general use.
- **Subject to change or removal:** Experimental extensions may be updated at any time, including breaking changes. They can also be removed—especially if they pose security or operational risks.

If you're experimenting with an extension and run into trouble, we recommend checking with the extension's maintainers or community for support.

## Extensions with preloaded libraries

A preloaded library in Postgres is a shared library that must be loaded into memory when the Postgres server starts. These libraries are specified in your Postgres server's startup configuration using the `shared_preload_libraries` parameter and cannot be added dynamically after the server has started. Some Postgres extensions require preloaded libraries but most do not. Neon Postgres servers preload libraries for certain extensions by default. You can view **currently enabled** libraries by running the following command.

```sql
SHOW shared_preload_libraries;

neon,pg_stat_statements,timescaledb,pg_cron,pg_partman_bgw,rag_bge_small_en_v15,rag_jina_reranker_v1_tiny_en
```

### Viewing available libraries

You can view **available** libraries by running the [List preloaded libraries](https://api-docs.neon.tech/reference/getavailablepreloadlibraries) API:

```bash
curl --request GET \
  --url https://console.neon.tech/api/v2/projects/your_project_id/available_preload_libraries \
  --header 'accept: application/json' \
  --header 'authorization: Bearer $NEON_API_KEY'
```

The response body lists available libraries and whether the libraries are enabled by default. Response body attributes include:

- `library_name` — library name, typically named for the associated extension
- `description` — a description of the extension
- `is_default` — whether the library is enabled by default
- `is_experimental` — whether the extension is [experimental](https://neon.com/docs/extensions/pg-extensions#experimental-extensions)
- `version` — the extension version

For attribute definitions, find the [List preloaded libraries](https://api-docs.neon.tech/reference/getavailablepreloadlibraries) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json
{
  "libraries": [
    {
      "library_name": "timescaledb",
      "description": "Enables scalable inserts and complex queries for time-series data.",
      "is_default": true,
      "is_experimental": false,
      "version": "2.17.1"
    },
    {
      "library_name": "pg_cron",
      "description": "pg_cron is a cron-like job scheduler for PostgreSQL.",
      "is_default": true,
      "is_experimental": false,
      "version": "1.6.4"
    },
    {
      "library_name": "pg_partman_bgw",
      "description": "pg_partman_bgw is a background worker for pg_partman.",
      "is_default": true,
      "is_experimental": false,
      "version": "5.1.0"
    },
    {
      "library_name": "rag_bge_small_en_v15,rag_jina_reranker_v1_tiny_en",
      "description": "Shared libraries for pgrag extensions",
      "is_default": true,
      "is_experimental": true,
      "version": "0.0.0"
    },
    {
      "library_name": "pgx_ulid",
      "description": "pgx_ulid is a PostgreSQL extension for ULID generation.",
      "is_default": false,
      "is_experimental": false,
      "version": "0.2.0"
    },
    {
      "library_name": "pg_mooncake",
      "description": "Columnstore Table in Postgres",
      "is_default": false,
      "is_experimental": false,
      "version": "0.1.1"
    },
    {
      "library_name": "pg_search",
      "description": "pg_search: Full text search for PostgreSQL using BM25",
      "is_default": false,
      "is_experimental": false,
      "version": "0.15.12"
    },
    {
      "library_name": "anon",
      "description": "Anonymization & Data Masking for PostgreSQL",
      "is_default": false,
      "is_experimental": false,
      "version": "2.1.0"
    }
  ]
}
```

Important notes about libraries for Postgres extensions:

- Available libraries and those enabled by default may differ by Postgres version
- Neon does not enable libraries for all extensions that have them

### Enabling preloaded libraries

You can enable available libraries using the `preload_libraries` object in a [Create project](https://api-docs.neon.tech/reference/createproject) or [Update project](https://api-docs.neon.tech/reference/updateproject) API call. For example, this `Update project` call enables the specified libraries for the Neon project. When running this call, you have to provide a [Neon project ID](https://neon.com/docs/reference/glossary#project-id) and a [Neon API key](https://neon.com/docs/manage/api-keys).

```bash
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/your_project_id \
  --header 'accept: application/json' \
  --header 'authorization: Bearer $NEON_API_KEY' \
  --header 'content-type: application/json' \
  --data '
{
  "project": {
    "settings": {
      "preload_libraries": {
        "enabled_libraries": [
          "neon","pg_stat_statements","timescaledb","pg_cron","pg_partman_bgw","rag_bge_small_en_v15,rag_jina_reranker_v1_tiny_en","pg_search"
        ]
      }
    }
  }
}
'
```

When running a `Create project` or `Update project` API call to enable libraries:

- Library names must be quoted, comma-separated, and specified in a single string.
- Specify all libraries that should be enabled. If a library is not included in the API call, it will not be enabled.
- The `"use_defaults": true` option overrides the `"enabled_libraries"` option, enabling only default libraries.
- The `neon` and `pg_stat_statements` libraries will remain enabled whether you include them in your API call or not — they're used by a Neon system-managed database.
- If you do not use one of the libraries enabled by default, you can exclude it from your API call. For example, if you do not use the `pgrag` extension, you can exclude its libraries (`"rag_bge_small_en_v15,rag_jina_reranker_v1_tiny_en"`).

## Extension notes

- Neon supports the `uuid-ossp` extension for generating UUIDs instead of the `uuid` extension.
- The `sslinfo` extension is not supported. Neon handles connections via a proxy that checks SSL.
- The `file_fdw` extension is not supported. Files would not remain accessible when Neon scales to zero.

---

# Source: https://neon.com/llms/extensions-pg_cron.txt

# The pg_cron extension

> The document details the pg_cron extension for Neon, enabling users to schedule PostgreSQL commands directly within the database using cron syntax.

## Source

- [The pg_cron extension HTML](https://neon.com/docs/extensions/pg_cron): The original HTML version of this documentation

The `pg_cron` extension provides a simple, cron-based job scheduler for Postgres. It operates directly within your database, allowing you to schedule standard SQL commands or calls to stored procedures using familiar cron syntax. This eliminates the need for external cron utilities for many database maintenance and automation tasks. This guide provides an introduction to the `pg_cron` extension. You'll learn how to enable the extension, schedule jobs, understand the cron syntax, manage and monitor your scheduled tasks, and about considerations specific to the Neon environment.

**Warning** Key details about using pg_cron with Neon: Please note that `pg_cron` jobs will only run when your compute is active. We therefore recommend only using `pg_cron` on computes that run 24/7 or where you have disabled [scale to zero](https://neon.com/docs/introduction/scale-to-zero).

## Enable the `pg_cron` extension

To install `pg_cron` on Neon, you must first enable it by setting the `cron.database_name` parameter to the name of the database where you want to install `pg_cron`. This requires making an [Update compute endpoint](https://api-docs.neon.tech/reference/updateprojectendpoint) API call. The `cron.database_name` parameter is passed to your Postgres instance through the `pg_settings` option in the endpoint settings object. The following `Update endpoint` API example shows where to specify your Neon `project_id`, `endpoint_id`, [Neon API key](https://neon.com/docs/manage/api-keys), and database name. The `project_id` and `endpoint_id` values can be obtained from the Neon Console or [using the Neon API](https://api-docs.neon.tech/reference/path-parameters). In the Neon Console, the `project_id` is found on your project's **Settings** page, and will look something like this: `young-sun-12345678`. The `endpoint_id` is found on the **Compute** tab on your **Branches** page, where it is referred to as the **Endpoint ID**. It will have an `ep` prefix, and look similar to this: `ep-still-rain-abcd1234`.

```bash
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints/<endpoint_id> \
  --header 'accept: application/json' \
  --header 'authorization: Bearer $NEON_API_KEY' \
  --header 'content-type: application/json' \
  --data '
{
  "endpoint": {
    "settings": {
      "pg_settings": {
        "cron.database_name": "your_dbname"
      }
    }
  }
}
'
```

After setting `cron.database_name`, you must restart your compute to apply the new setting. You can do this using the [Restart compute endpoint](https://api-docs.neon.tech/reference/restartprojectendpoint) API. Specify the same `project_id` and `endpoint_id` used to set the `cron.database_name` parameter above.
**Please note that restarting your compute endpoint will drop current connections to your database.**

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints/<endpoint_id>/restart \
  --header 'accept: application/json' \
  --header 'authorization: Bearer $NEON_API_KEY'
```

**Note**: The [Restart compute endpoint](https://api-docs.neon.tech/reference/restartprojectendpoint) API only works on an active compute. If your compute is idle, you can start it by running a query to wake it up or running the [Start compute endpoint](https://api-docs.neon.tech/reference/startprojectendpoint) API. For more information and other compute restart options, see [Restart a compute](https://neon.com/docs/manage/computes#restart-a-compute).

You can then install the `pg_cron` extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;
```

If you have trouble with this setup, please reach out to [Neon Support](https://console.neon.tech/app/projects?modal=support) or find us on [Discord](https://discord.gg/92vNTzKDGp).

## `pg_cron` version availability

Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Cron schedule syntax

`pg_cron` uses the standard cron syntax, with the following fields:

```
┌───────────── min (0 - 59)
│ ┌────────────── hour (0 - 23)
│ │ ┌─────────────── day of month (1 - 31) or last day of the month ($)
│ │ │ ┌──────────────── month (1 - 12)
│ │ │ │ ┌───────────────── day of week (0 - 6) (0 to 6 are Sunday to
│ │ │ │ │                  Saturday, or use names; 7 is also Sunday)
│ │ │ │ │
│ │ │ │ │
* * * * *
```

You can use the following special characters:

- `*`: Represents all values within the field.
- `,`: Specifies a list of values (e.g., `1,3,5` for specific days).
- `-`: Specifies a range of values (e.g., `10-12` for hours 10, 11, and 12).
- `/`: Specifies step values (e.g., `*/15` in the minutes field means "every 15 minutes").

Additionally, `pg_cron` supports:

- Interval scheduling using `'[1-59] seconds'` (e.g., `'5 seconds'`).
- `'$'` to indicate the last day of the month.

Remember that all schedules in `pg_cron` are interpreted in UTC. When scheduling jobs, ensure your cron expressions are set accordingly. You can use tools like [crontab.guru](http://crontab.guru/) and adjust for the UTC timezone.

## Schedule a job

You can schedule jobs using the `cron.schedule()` function. The basic syntax involves providing a cron schedule string and the command to execute. Let's look at some examples to understand how to schedule jobs with `pg_cron`.

### Automating data archival

Imagine you have an `orders` table and you want to archive orders older than 90 days to a separate `orders_archive` table every Sunday at 2:00 AM UTC to maintain performance on your main table.

```sql
SELECT cron.schedule(
  'archive-old-orders',
  '0 2 * * 0', -- Runs every Sunday at 2:00 AM UTC
  $$
  WITH moved AS (
      DELETE FROM orders
      WHERE order_date < NOW() - INTERVAL '90 days'
      RETURNING *
  )
  INSERT INTO orders_archive SELECT * FROM moved;
  $$
);
```

Here's a breakdown of the command:

- `'archive-old-orders'`: This is the name you're giving to this scheduled job.
Let's look at some examples to understand how to schedule jobs with `pg_cron`.

### Automating data archival

Imagine you have an `orders` table and you want to archive orders older than 90 days to a separate `orders_archive` table every Sunday at 2:00 AM UTC to maintain performance on your main table.

```sql
SELECT cron.schedule(
    'archive-old-orders',
    '0 2 * * 0', -- Runs every Sunday at 2:00 AM UTC
    $$
    WITH OldOrders AS (
        SELECT * FROM orders WHERE order_date < NOW() - INTERVAL '90 days'
    )
    INSERT INTO orders_archive SELECT * FROM OldOrders;
    DELETE FROM orders WHERE order_id IN (SELECT order_id FROM OldOrders);
    $$
);
```

Here's a breakdown of the command:

- `'archive-old-orders'`: This is the name you're giving to this scheduled job. It helps you identify and manage the job later.
- `'0 2 * * 0'`: This is the cron schedule string.
  - `0`: The job will run when the minute is `0`.
  - `2`: The job will run when the hour is `2` (2 AM UTC).
  - `*`: The job will run every day of the month.
  - `*`: The job will run every month.
  - `0`: The job will run on Sunday (where 0 represents Sunday).
  - Therefore, this job is scheduled to run at 2:00 AM UTC every Sunday.
- `$$ ... $$`: This is a way to define a string literal in PostgreSQL, especially useful for multi-line commands.
- `INSERT INTO orders_archive ...`: This is the SQL command that will be executed. It selects all rows from the `orders` table older than 90 days and inserts them into the `orders_archive` table. (A CTE is used to make sure the same rows are used for both the `INSERT` and `DELETE` commands.)
- `DELETE FROM orders ...`: This command then deletes the archived orders from the main `orders` table.

This example demonstrates how to automate a common database maintenance task, ensuring your main tables remain manageable and performant.

### Purging cron job logs

The `cron.job_run_details` table keeps a record of your scheduled job executions. Over time, this table can grow and consume storage. Regularly purging older entries is a good practice to keep its size manageable. You can schedule a job using `pg_cron` itself to automatically delete old records from `cron.job_run_details`. Here's how you can schedule a job to purge entries older than seven days, running every day at midnight UTC:

```sql
SELECT cron.schedule(
    'purge-cron-history',
    '0 0 * * *', -- Runs every day at midnight UTC
    $$
    DELETE FROM cron.job_run_details
    WHERE end_time < NOW() - INTERVAL '7 days';
    $$
);
```

Here's a breakdown of the command:

- `'purge-cron-history'`: The name of the scheduled job for purging history.
- `'0 0 * * *'`: The cron schedule, set to run at minute 0, hour 0 (midnight), every day of the month, every month, and every day of the week (all in UTC).
- `DELETE FROM cron.job_run_details WHERE end_time < NOW() - INTERVAL '7 days'`: This is the SQL command that will be executed. It deletes all records from the `cron.job_run_details` table where the `end_time` is older than seven days from the current time.

### Running jobs every `n` seconds

`pg_cron` also lets you schedule a job every `n` seconds, which is not possible with traditional cron jobs. Here `n` can be any value between 1 and 59 inclusive. For example, to run a job every 10 seconds, you can use the following command:

```sql
SELECT cron.schedule('every-10-seconds', '10 seconds', 'SELECT 1');
```

## View scheduled jobs

To see the jobs currently scheduled in your database, query the `cron.job` table:

```sql
SELECT * FROM cron.job;
```

This will show you details like the job ID, schedule, command, and the user who scheduled it.

## Unschedule jobs

You can remove scheduled jobs using the `cron.unschedule()` function, either by providing the job name or the job ID.

### Unschedule by name

Let's say you want to unschedule the job we created earlier to archive old orders:

```sql
SELECT cron.unschedule('archive-old-orders');
```

### Unschedule by ID

You can also unschedule a job by providing the job ID:

```sql
SELECT cron.unschedule(26);
```

## View job run details

The `cron.job_run_details` table provides information about the execution of scheduled jobs.
```sql
SELECT * FROM cron.job_run_details ORDER BY start_time DESC LIMIT 5;
```

This table includes details like the job ID, run ID, execution status, start and end times, and any return messages.

## Running pg_cron jobs in multiple databases

The `pg_cron` extension can only be installed in one database per Postgres cluster (each compute in a Neon project runs a Postgres instance, i.e., a Postgres cluster). If you need to schedule jobs in multiple databases, you can use the `cron.schedule_in_database()` function. This function allows you to create a cron job that runs in a specific database, even if `pg_cron` is installed in a different database.

**Warning** Function not supported in Neon: The `cron.schedule_in_database()` function is currently not supported in Neon.

### Example: Scheduling a job in a different database

To schedule a job in another database, use `cron.schedule_in_database()` and specify the target database name:

```sql
SELECT cron.schedule_in_database(
    'my_job',                  -- Job name
    '0 * * * *',               -- Cron schedule (every hour)
    'VACUUM ANALYZE my_table', -- SQL command to run
    'my_database'              -- Target database
);
```

In this example:

- The job named `my_job` runs every hour (`0 * * * *`).
- It executes `VACUUM ANALYZE my_table` in `my_database`, even if `pg_cron` is installed in another database.

## Extension settings

`pg_cron` has several configuration parameters that influence its behavior. These settings are managed by Neon and cannot be directly modified by users. Understanding these settings can be helpful for monitoring and troubleshooting. You can view the current configuration in your Neon database using the following query:

```sql
SELECT * FROM pg_settings WHERE name LIKE 'cron.%';
```

Here are a few key `pg_cron` settings and their descriptions:

| Setting | Default | Description |
| --- | --- | --- |
| `cron.launch_active_jobs` | `on` | When set to `off`, this setting disables all active `pg_cron` jobs without requiring a server restart. |
| `cron.log_min_messages` | `WARNING` | This setting determines the minimum severity level of log messages generated by the `pg_cron` launcher background worker. |
| `cron.log_run` | `on` | When enabled (`on`), details of each job run are logged in the `cron.job_run_details` table. |
| `cron.log_statement` | `on` | If enabled (`on`), the SQL command of each scheduled job is logged before execution. |
| `cron.max_running_jobs` | `32` | This parameter defines the maximum number of `pg_cron` jobs that can run concurrently. |
| `cron.timezone` | `GMT` | Specifies the timezone in which the `pg_cron` background worker operates. **Note:** Although this setting exists, `pg_cron` internally interprets all job schedules in UTC. Changing this parameter has no effect on how schedules are executed. |
| `cron.use_background_workers` | `off` | When enabled (`on`), `pg_cron` uses background workers instead of direct client connections to execute jobs. This may require adjustments to the `max_worker_processes` PostgreSQL setting. |

**Note** Important: Setting modifications in Neon: Modifying these settings requires superuser privileges. Because Postgres on Neon is a managed service, Neon users are not superusers, so you cannot directly alter these `pg_cron` configuration parameters yourself.
If you have a specific need to adjust any of these settings, please [open a support ticket](https://console.neon.tech/app/projects?modal=support). **After Neon support implements the requested configuration change, you will need to [restart your Neon compute](https://neon.com/docs/manage/computes#restart-a-compute) for the new settings to take effect.**

## Conclusion

You have successfully learned how to enable and use the `pg_cron` extension within your Neon Postgres environment. You can now schedule routine database tasks directly within your database, simplifying automation and maintenance. Remember that `pg_cron` schedules are interpreted in UTC and will only run when your compute is active.

## Resources

- [pg_cron GitHub Repository](https://github.com/citusdata/pg_cron)
- [crontab.guru](http://crontab.guru/)

---

# Source: https://neon.com/llms/extensions-pg_graphql.txt

# The pg_graphql extension

> The document details the pg_graphql extension for Neon, enabling users to query PostgreSQL databases using GraphQL, facilitating seamless integration of GraphQL APIs with PostgreSQL data.

## Source

- [The pg_graphql extension HTML](https://neon.com/docs/extensions/pg_graphql): The original HTML version of this documentation

The `pg_graphql` extension adds a GraphQL API layer directly to your Postgres database. It introspects your SQL schema (tables, columns, relationships, and functions) and automatically generates a corresponding GraphQL schema. This allows you to query your database using GraphQL through a single SQL function call, `graphql.resolve()`, eliminating the need for external GraphQL servers or middleware.

With `pg_graphql`, you can leverage the flexibility of GraphQL for data fetching while keeping your data and API logic tightly coupled within Postgres. It respects existing Postgres roles, ensuring data access remains secure and consistent.

## Enable the `pg_graphql` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS pg_graphql;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Core concepts

### The `graphql.resolve()` function

The `graphql.resolve()` function is the main entry point for executing GraphQL queries against your Postgres database. It acts as a bridge between your SQL schema and the GraphQL API. You pass your GraphQL query string (and optionally, variables and an operation name) to this function. It executes the query against the auto-generated GraphQL schema based on your database structure and returns the result as a JSONB object.

**Basic signature:**

```sql
graphql.resolve(query TEXT, variables JSONB DEFAULT '{}') RETURNS JSONB;
```

### Schema reflection

`pg_graphql` automatically creates a GraphQL schema from your SQL schema:

- **Tables and views**: Become GraphQL object types.
- **Columns**: Become fields on those types.
- **Foreign keys**: Define relationships between types.
- **Primary keys**: Essential for a table/view to be included. Each type gets a globally unique `nodeId: ID!` field.

### The `Node` interface

`pg_graphql` implements the GraphQL Global Object Identification Specification. Every table type with a primary key implements the `Node` interface and gets a `nodeId: ID!` field. This `nodeId` is a globally unique, opaque identifier for a record, useful for client-side caching and refetching specific objects.
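For example, once you have a record's `nodeId` from an earlier query, you can refetch just that record through the `node` field. This is a minimal sketch using the `Book` type created in the next section; the `nodeId` value shown is a placeholder for the opaque string a real query returns:

```sql
SELECT graphql.resolve($$
  query RefetchBook {
    node(nodeId: "opaqueNodeIdString") { # placeholder for a real nodeId
      nodeId
      ... on Book {
        title
      }
    }
  }
$$);
```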
## Querying data (`Query` type)

The `Query` type is the entry point for all read operations.

### Collections

For each accessible table (e.g., `Book`), `pg_graphql` creates a collection field (e.g., `bookCollection`) on the `Query` type. Collections allow you to fetch multiple records and support pagination, filtering, and sorting.

#### Basic collection fetch

Create a `Book` table:

```sql
CREATE TABLE "Book" (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    author TEXT,
    published_year INT
);

INSERT INTO "Book" (title, author, published_year) VALUES
('The Great Gatsby', 'F. Scott Fitzgerald', 1925),
('To Kill a Mockingbird', 'Harper Lee', 1960),
('1984', 'George Orwell', 1949);
```

**Info** Inflection: To convert `snake_case` SQL names to `camelCase` (fields) / `PascalCase` (types) GraphQL names, use the `@graphql` comment directive on the schema:

```sql
COMMENT ON SCHEMA public IS '@graphql({"inflect_names": true})';
```

This will convert all table and column names to their GraphQL equivalents. For example:

- `book` table becomes `Book` type
- `book_collection` becomes `bookCollection` field
- `book_authors` table becomes `BookAuthors` type
- `published_at` column becomes `publishedAt` field
- `published_year` column becomes `publishedYear` field

It is optional to use this directive, but it is recommended for consistency and readability. The guide uses the inflected names for clarity. Learn more about Inflection in the [pg_graphql documentation](https://supabase.github.io/pg_graphql/configuration/#inflection).

#### Fetch all books

To fetch all books, use the `bookCollection` field on the `Query` type. The result is a connection type with `edges` and `node` fields. Run the following SQL query to fetch all books:

```sql
SELECT graphql.resolve($$
  query GetAllBooks {
    bookCollection {
      edges {
        node {
          id
          title
          author
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "bookCollection": {
      "edges": [
        { "node": { "id": 1, "title": "The Great Gatsby", "author": "F. Scott Fitzgerald" } },
        { "node": { "id": 2, "title": "To Kill a Mockingbird", "author": "Harper Lee" } },
        { "node": { "id": 3, "title": "1984", "author": "George Orwell" } }
      ]
    }
  }
}
```

#### Pagination

Use `first` to limit results and `after` with a cursor for pagination.

```sql
SELECT graphql.resolve($$
  query PaginateBooks {
    bookCollection(first: 1) { # Get the first book
      edges {
        cursor # Use this cursor for the 'after' argument next time
        node {
          title
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
$$);
```

```json
{
  "data": {
    "bookCollection": {
      "edges": [{ "node": { "title": "The Great Gatsby" }, "cursor": "opaqueCursorString" }],
      "pageInfo": { "endCursor": "opaqueCursorString", "hasNextPage": true }
    }
  }
}
```

To get the next page, you'd take `endCursor` from the `pageInfo` and use it as the `after` argument in a subsequent query: `bookCollection(first: 1, after: "opaqueCursorString")`.
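You can also pass the cursor through the `variables` argument of `graphql.resolve()` instead of interpolating it into the query string. A minimal sketch, where `opaqueCursorString` stands in for a real `endCursor` value:

```sql
SELECT graphql.resolve(
  $$
  query PaginateBooks($afterCursor: Cursor) {
    bookCollection(first: 1, after: $afterCursor) {
      edges { node { title } }
      pageInfo { endCursor hasNextPage }
    }
  }
  $$,
  '{"afterCursor": "opaqueCursorString"}'::jsonb  -- variables as JSONB
);
```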
#### Filtering

Use the `filter` argument. Filterable fields and operators (`eq`, `gt`, `lt`, `contains`, `and`, `or`, `not`) are generated based on column types. Find books by George Orwell published after 1940:

```sql
SELECT graphql.resolve($$
  query FilteredBooks {
    bookCollection(filter: {
      and: [
        { author: { eq: "George Orwell" } },
        { publishedYear: { gt: 1940 } }
      ]
    }) {
      edges {
        node {
          title
          publishedYear
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "bookCollection": {
      "edges": [{ "node": { "title": "1984", "publishedYear": 1949 } }]
    }
  }
}
```

#### Sorting

Use the `orderBy` argument. The `orderBy` clause takes a list of fields to sort by, each with a direction. Common direction enums are `AscNullsFirst`, `AscNullsLast`, `DescNullsFirst`, and `DescNullsLast`.

```sql
SELECT graphql.resolve($$
  query SortedBooks {
    bookCollection(orderBy: [{ publishedYear: DescNullsLast }]) {
      edges {
        node {
          title
          publishedYear
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "bookCollection": {
      "edges": [
        { "node": { "title": "To Kill a Mockingbird", "publishedYear": 1960 } },
        { "node": { "title": "1984", "publishedYear": 1949 } },
        { "node": { "title": "The Great Gatsby", "publishedYear": 1925 } }
      ]
    }
  }
}
```

## Modifying data (`Mutation` type)

The `Mutation` type is the entry point for write operations.

### Inserting records

Use `insertIntoBookCollection`.

```sql
SELECT graphql.resolve($$
  mutation AddNewBook {
    insertIntoBookCollection(
      objects: [{ title: "Brave New World", author: "Aldous Huxley", publishedYear: 1932 }]
    ) {
      affectedCount
      records { # Returns the inserted records
        id
        title
      }
    }
  }
$$);
```

```json
{
  "data": {
    "insertIntoBookCollection": {
      "records": [{ "id": 4, "title": "Brave New World" }],
      "affectedCount": 1
    }
  }
}
```
### Updating records

Use `updateBookCollection`. Requires a `filter` to specify which records, a `set` clause for new values, and `atMost` as a safety limit.

```sql
SELECT graphql.resolve($$
  mutation UpdateBookTitle {
    updateBookCollection(
      filter: { id: { eq: 1 } },
      set: { title: "The Great Gatsby (Revised Edition)" },
      atMost: 1
    ) {
      affectedCount
      records {
        id
        title
      }
    }
  }
$$);
```

```json
{
  "data": {
    "updateBookCollection": {
      "records": [{ "id": 1, "title": "The Great Gatsby (Revised Edition)" }],
      "affectedCount": 1
    }
  }
}
```
### Deleting records

Use `deleteFromBookCollection`. Requires a `filter` and `atMost`.

```sql
SELECT graphql.resolve($$
  mutation DeleteBook {
    deleteFromBookCollection(
      filter: { id: { eq: 1 } },
      atMost: 1
    ) {
      affectedCount
      records { # Returns the deleted records
        id
        title
      }
    }
  }
$$);
```

```json
{
  "data": {
    "deleteFromBookCollection": {
      "records": [{ "id": 1, "title": "The Great Gatsby (Revised Edition)" }],
      "affectedCount": 1
    }
  }
}
```

## Relationships

`pg_graphql` automatically infers relationships from foreign key constraints.

**Example: Authors and Books** (drop the earlier `Book` table first if you're following along)

```sql
CREATE TABLE "Author" (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE "Book" (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    author_id INT REFERENCES "Author"(id) -- Foreign key
);

INSERT INTO "Author" (name) VALUES ('George Orwell');
INSERT INTO "Book" (title, author_id) VALUES ('1984', 1), ('Animal Farm', 1);
```

Query books and their author:

```sql
SELECT graphql.resolve($$
  query BooksWithAuthors {
    bookCollection {
      edges {
        node {
          title
          author { # Field for related Author
            name
          }
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "bookCollection": {
      "edges": [
        { "node": { "title": "1984", "author": { "name": "George Orwell" } } },
        { "node": { "title": "Animal Farm", "author": { "name": "George Orwell" } } }
      ]
    }
  }
}
```

Query authors and their books:

```sql
SELECT graphql.resolve($$
  query AuthorsWithBooks {
    authorCollection {
      edges {
        node {
          name
          bookCollection { # Collection of related Books
            edges {
              node {
                title
              }
            }
          }
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "authorCollection": {
      "edges": [
        {
          "node": {
            "name": "George Orwell",
            "bookCollection": {
              "edges": [{ "node": { "title": "1984" } }, { "node": { "title": "Animal Farm" } }]
            }
          }
        }
      ]
    }
  }
}
```

## Computed fields

You can add fields that are not directly stored columns.

### Postgres generated columns

```sql
CREATE TABLE "User" (
    id SERIAL PRIMARY KEY,
    first_name TEXT,
    last_name TEXT,
    full_name TEXT GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED
);

INSERT INTO "User" (first_name, last_name) VALUES ('John', 'Doe');
```

`full_name` will automatically appear in the `User` GraphQL type.

```sql
SELECT graphql.resolve($$
  query UserFullName {
    userCollection {
      edges {
        node {
          firstName
          lastName
          fullName # Computed field
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "userCollection": {
      "edges": [{ "node": { "lastName": "Doe", "firstName": "John", "fullName": "John Doe" } }]
    }
  }
}
```

### SQL functions

For more complex logic, create an SQL function that takes the table's row type as input.

```sql
CREATE FUNCTION get_user_initials(u "User")
RETURNS TEXT
STABLE
LANGUAGE SQL
AS $$
  SELECT substr(u.first_name, 1, 1) || substr(u.last_name, 1, 1);
$$;
```

This would (by default) add a `getUserInitials` field to the `User` type. Naming can be customized.

**Example:**

```sql
SELECT graphql.resolve($$
  query UserInitials {
    userCollection {
      edges {
        node {
          firstName
          lastName
          getUserInitials # Custom field
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "userCollection": {
      "edges": [{ "node": { "lastName": "Doe", "firstName": "John", "getUserInitials": "JD" } }]
    }
  }
}
```

## Configuration via comment directives

Customize `pg_graphql` behavior using comments on SQL objects. Format: `COMMENT ON ... IS '@graphql({"key": "value"})';`

### Renaming

You can rename tables, columns, and types in the GraphQL schema using the `@graphql` directive.
```sql
COMMENT ON TABLE "Book" IS '@graphql({"name": "Publication"})'; -- Book table -> Publication type
COMMENT ON COLUMN "Book".title IS '@graphql({"name": "headline"})'; -- Book.title -> Publication.headline
```

```sql
SELECT graphql.resolve($$
  query RenamedTypes {
    publicationCollection {
      edges {
        node {
          headline
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "publicationCollection": {
      "edges": [{ "node": { "headline": "1984" } }, { "node": { "headline": "Animal Farm" } }]
    }
  }
}
```

### Descriptions

You can add descriptions to tables, columns, and types using the `@graphql` directive. This is useful for documentation and introspection.

```sql
COMMENT ON TABLE "Book" IS '@graphql({"description": "Represents a literary work."})';
```

```sql
SELECT graphql.resolve($$
  query BookDescription {
    __type(name: "Book") {
      description
    }
  }
$$);
```

```json
{
  "data": {
    "__type": { "description": "Represents a literary work." }
  }
}
```

### `totalCount` on collections

Enable the `totalCount` field on a connection type.

```sql
COMMENT ON TABLE "Book" IS '@graphql({"totalCount": {"enabled": true}})';
```

Now `bookCollection` will have `totalCount`.

```sql
SELECT graphql.resolve($$
  query BookTotalCount {
    bookCollection {
      totalCount
    }
  }
$$);
```

```json
{
  "data": {
    "bookCollection": { "totalCount": 2 }
  }
}
```

## Views and foreign tables

Views (and materialized views, foreign tables) can be exposed if they have a "virtual" primary key defined via a comment directive:

```sql
CREATE VIEW "NewUsers" AS SELECT * FROM "User"; -- optional WHERE clause as per the view definition

COMMENT ON VIEW "NewUsers" IS '@graphql({"primary_key_columns": ["id"]})';
```

Now `NewUsers` will be queryable via GraphQL.

```sql
SELECT graphql.resolve($$
  query NewUsers {
    newUsersCollection {
      edges {
        node {
          id
          firstName
          lastName
        }
      }
    }
  }
$$);
```

```json
{
  "data": {
    "newUsersCollection": {
      "edges": [{ "node": { "id": 1, "lastName": "Doe", "firstName": "John" } }]
    }
  }
}
```

## Security considerations

`pg_graphql` fully respects Postgres's native security:

- **Role permissions**: A user querying via `pg_graphql` can only see/interact with tables, columns, and functions they have SQL permissions for. If a role lacks `SELECT` on a table, that table won't appear in their GraphQL schema.
- **Row-Level Security (RLS)**: All RLS policies are automatically applied.
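As a hedged illustration of the RLS point, suppose `Book` had an `owner_id` column (not part of the earlier examples). A policy like the following would transparently filter every `bookCollection` result for roles subject to RLS:

```sql
-- Hypothetical column and policy, for illustration only
ALTER TABLE "Book" ADD COLUMN owner_id INT;
ALTER TABLE "Book" ENABLE ROW LEVEL SECURITY;

-- Each session sees only its own rows; app.user_id is a custom setting
-- the application would establish, e.g. SET app.user_id = '42';
CREATE POLICY book_owner ON "Book"
  USING (owner_id = current_setting('app.user_id')::int);
```

Queries made through `graphql.resolve()` then return only the rows the current role is allowed to see; no GraphQL-level configuration is involved.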
While this guide provides a solid foundation, `pg_graphql` offers a rich set of advanced features not covered here. For a deeper dive into capabilities like exposing complex SQL functions as queries or mutations, advanced filtering techniques including nested logical operators and array operations, fine-tuning schema generation with more comment directives (e.g., for computed relationships on views or custom naming for all elements), handling transactions, performance optimization strategies, and detailed guides for integrating with client libraries like Apollo and Relay, please refer to the official [`pg_graphql` documentation](https://supabase.github.io/pg_graphql/).

## Conclusion

`pg_graphql` offers an efficient way to generate a GraphQL API directly from your Postgres database. By understanding its schema reflection, the `graphql.resolve()` function, and basic configuration, you can quickly expose your data for flexible querying without needing an external GraphQL server.

## Resources

- [`pg_graphql` official documentation](https://supabase.github.io/pg_graphql/)
- [GraphQL official documentation](https://graphql.org/learn/)

---

# Source: https://neon.com/llms/extensions-pg_mooncake.txt

# The pg_mooncake extension

> The document details the pg_mooncake extension for Neon, which adds native columnstore tables and vectorized execution to Postgres for fast analytic workloads.

## Source

- [The pg_mooncake extension HTML](https://neon.com/docs/extensions/pg_mooncake): The original HTML version of this documentation

The [pg_mooncake](https://github.com/Mooncake-Labs/pg_mooncake) extension enables fast analytic workloads in Postgres by adding native columnstore tables and vectorized execution (DuckDB). Columnstore tables improve analytical queries by storing data vertically, enabling compression and efficient column-specific retrieval with vectorized execution.

`pg_mooncake` columnstore tables are designed so that only metadata is stored in Postgres, while data is stored in an object store as Parquet files with [Iceberg](https://iceberg.apache.org/) or [Delta Lake](https://delta.io/) metadata. Queries on `pg_mooncake` columnstore tables are executed by DuckDB.

The extension is maintained by [Mooncake Labs](https://www.mooncake.dev/). You can create and use `pg_mooncake` columnstore tables like regular Postgres heap tables to run:

- Transactional `INSERT`, `SELECT`, `UPDATE`, `DELETE`, and `COPY` operations
- Joins with regular Postgres tables

In addition, you can:

- Load Parquet, CSV, and JSON files into columnstore tables
- Load Hugging Face datasets
- Run DuckDB specific aggregate functions like `approx_count_distinct`
- Read existing Iceberg and Delta Lake tables
- Write Delta Lake tables from Postgres tables

**Note**: `pg_mooncake` is an open-source extension for Postgres that can be installed on any Neon project using the instructions below.

## Use cases for pg_mooncake

`pg_mooncake` supports several use cases, including:

1. Analytics on Postgres data
2. Time series and log analytics
3. Exporting Postgres tables to your Lake or Lakehouse
4. Querying and updating existing Lakehouse tables and Parquet files directly in Postgres

This guide provides a quickstart to the `pg_mooncake` extension.

## Enable the extension

**Note**: The `pg_mooncake` extension is currently in Beta and classified as experimental in Neon. A separate, dedicated Neon project is recommended when using an extension that is still in Beta. For additional guidance, see [Experimental extensions](https://neon.com/docs/extensions/pg-extensions#experimental-extensions).

While the `pg_mooncake` extension is in Beta, you need to explicitly allow it to be used on Neon before you can install it. To do so, connect to your Neon database via an SQL client like [psql](https://neon.com/docs/connect/query-with-psql-editor) or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) and run the `SET` command shown below.

```sql
SET neon.allow_unstable_extensions='true';
```

Install the extension:

```sql
CREATE EXTENSION pg_mooncake;
```

## Set up your object store

Run the commands outlined in the following steps on your Neon database to set up your object store.

_If you don't have an object storage bucket, you can get a free S3 express bucket [here](https://s3.pgmooncake.com/). When using the free S3 bucket, the `SELECT` and `SET` statements defined below are generated for you, which you can copy and run._

Add your object storage credentials.
In this case, S3:

```sql
SELECT mooncake.create_secret('<name>', 'S3', '<key_id>', '<secret>', '{"REGION": "<s3-region>"}');
```

Set your default bucket:

```sql
SET mooncake.default_bucket = 's3://<bucket>';
```

**Note** R2 and GCP buckets also supported: The `pg_mooncake` extension also supports R2 and GCP buckets. For set up instructions, refer to **pg_mooncake's** [cloud storage docs](https://pgmooncake.com/docs/cloud-storage). In the future, you will not have to bring your own bucket to use `pg_mooncake` with Neon.

## Create a columnstore table with `USING columnstore`

Run the following SQL statement on your Neon database to create a columnstore table:

```sql
CREATE TABLE reddit_comments(
    author TEXT,
    body TEXT,
    controversiality BIGINT,
    created_utc BIGINT,
    link_id TEXT,
    score BIGINT,
    subreddit TEXT,
    subreddit_id TEXT,
    id TEXT
) USING columnstore;
```

## Load data

You can find a list of data sources [here](https://pgmooncake.com/docs/load-data). This dataset has 13 million rows and may take a few minutes to load.

```sql
INSERT INTO reddit_comments (
    SELECT author, body, controversiality, created_utc, link_id, score, subreddit, subreddit_id, id
    FROM mooncake.read_parquet('hf://datasets/fddemarco/pushshift-reddit-comments/data/RC_2012-01.parquet')
    AS (author TEXT, body TEXT, controversiality BIGINT, created_utc BIGINT, link_id TEXT, score BIGINT, subreddit TEXT, subreddit_id TEXT, id TEXT)
);
```

## Query the table

Queries on columnstore tables are executed by DuckDB. For example, this aggregate query runs in ~200 milliseconds on 13 million rows:

```sql
-- Top commenters (excluding [deleted] users)
SELECT
    author,
    COUNT(*) as comment_count,
    AVG(score) as avg_score,
    SUM(score) as total_score
FROM reddit_comments
WHERE author != '[deleted]'
GROUP BY author
ORDER BY comment_count DESC
LIMIT 10;
```

## References

- [Repository](https://github.com/Mooncake-Labs/pg_mooncake)
- [Documentation](https://pgmooncake.com/docs)
- [Architecture](https://www.mooncake.dev/blog/how-we-built-pgmooncake)
- [YouTube demo](https://youtu.be/QDNsxw_3ris?feature=shared&t=2048)

---

# Source: https://neon.com/llms/extensions-pg_partman.txt

# The pg_partman extension

> The document details the pg_partman extension for Neon, which facilitates automated partition management in PostgreSQL databases, enhancing data organization and performance.

## Source

- [The pg_partman extension HTML](https://neon.com/docs/extensions/pg_partman): The original HTML version of this documentation

`pg_partman` is a Postgres extension that simplifies the management of partitioned tables. Partitioning refers to splitting a single table into smaller pieces called `partitions`. This is done based on the values in a key column or set of columns. Even though partitions are stored as separate physical tables, the partitioned table can still be queried as a single logical table. This can significantly enhance query performance and also help you manage the data lifecycle of tables that grow very large.

While Postgres natively supports partitioning a table, `pg_partman` helps set up and manage partitioned tables:

- **Automated partition creation**: `pg_partman` automatically creates new partitions as new records are inserted, based on a specified interval for the partition key.
- **Automated maintenance**: `pg_partman` bundles a background worker process that manages maintenance tasks without needing an external scheduler or cron job.
For example, it can automatically detach old partitions from the main table based on a retention policy, run `analyze` on partitions to update statistics, and more.

In this guide, we'll learn how to set up and use the `pg_partman` extension with your Neon Postgres project. We'll cover why partitioning is helpful, how to enable `pg_partman`, creating partitioned tables, and automating partition maintenance.

**Note**: `pg_partman` is an open-source Postgres extension that can be installed in any Neon project using the instructions below. Detailed installation instructions and compatibility information can be found in the [pg_partman](https://github.com/pgpartman/pg_partman) documentation.

## Enable the `pg_partman` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon. Creating a `partman` schema is optional (but recommended) and you can name the schema whatever you like, but it cannot be changed after installation.

```sql
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman SCHEMA partman;
```

The `pg_partman` extension does not require a superuser to run, but it's recommended to create a dedicated role for running `pg_partman` functions and to act as the owner of all partition sets that `pg_partman` will maintain. Here is a sample SQL script to create a dedicated role with the minimum required privileges, assuming that `pg_partman` is installed to the `partman` schema and the dedicated role is named `partman_user`:

```sql
CREATE ROLE partman_user WITH LOGIN;
ALTER ROLE partman_user WITH PASSWORD '{PASSWORD_FOR_PARTMAN_USER}';
GRANT ALL ON SCHEMA partman TO partman_user;
GRANT ALL ON ALL TABLES IN SCHEMA partman TO partman_user;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA partman TO partman_user;
GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA partman TO partman_user;
GRANT ALL ON SCHEMA {WORKING_SCHEMA_NAME} TO partman_user;
GRANT TEMPORARY ON DATABASE {WORKING_DATABASE_NAME} TO partman_user; -- allow creation of temp tables to move data out of default
```

If the role needs to create schemas, you'll have to grant `CREATE` on the database as well. This is only required if you give the role above the `CREATE` privilege on pre-existing schemas that will contain partition sets.

```sql
GRANT CREATE ON DATABASE {WORKING_DATABASE_NAME} TO partman_user;
```

When you create a new Neon project, the default database name is `neondb` and the default schema name is `public`. Replace `{WORKING_DATABASE_NAME}` and `{WORKING_SCHEMA_NAME}` with the actual database and schema names you want to manage the partitioned tables in. To find out more about the privileges needed to run `pg_partman`, refer to the [pg_partman documentation](https://github.com/pgpartman/pg_partman).

For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

**Version Compatibility:** `pg_partman` works with Postgres 14 and above, complementing the native partitioning features introduced in these versions.

## Why partition your data?

For tables that grow very large, partitioning offers several benefits:

- **Faster queries:** Partitioning allows Postgres to quickly locate and retrieve data within a specific partition, rather than scanning the entire table.
- **Scalability:** Partitioning makes database administration simpler. For example, smaller partitions are easier to load and delete or back up and recover.
- **Managing the data lifecycle:** Easier management of the data lifecycle by archiving or purging old partitions, which can be moved to cheaper storage options without affecting the active dataset.

### Native partitioning vs pg_partman

Postgres supports partitioning tables natively, with the following strategies to divide the data:

- **List partitioning**: Data is distributed across partitions based on a list of values, such as a category or location.
- **Range partitioning**: Data is distributed across partitions based on ranges of values, such as dates or numerical ranges.

With native partitioning, you need to manually create and manage partitions for your table.

```sql
CREATE TABLE measurement (
    city_id int not null,
    logdate date not null,
    peaktemp int
) PARTITION BY RANGE (logdate);

-- Create a partition for each month of logged data.
-- Records with `logdate` in this range are automatically routed to this partition table
CREATE TABLE measurement_y2006m02 PARTITION OF measurement
    FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');

-- Moving older data to a different table.
-- Queries against the main table will not include the data in the detached partition
ALTER TABLE measurement DETACH PARTITION measurement_y2005m10;
```

`pg_partman` supports creating partitions that are numeric or time-based, with each partition covering a range of values. It is particularly useful when partitions need to be created automatically as new records come in. List partitioning isn't applicable here, since the partition key values are not known in advance.

## Example: Partitioning user-activity data

Consider a social media platform that tracks user interactions in their website application, such as likes, comments, and shares. The data is stored in a table called `user_activities`, where `activity_type` stores the type of activity and the other columns store additional information about the activity.

### Setting up a new partitioned table

Given the large volume of data generated by user interactions, partitioning the `user_activities` table can help keep queries manageable. Recent activity data is typically the most interesting for both the platform and its users, so `activity_time` is a good candidate to partition on. We can create the partitioned table using the following SQL statement, similar to defining a native partitioned table:

```sql
CREATE TABLE user_activities (
    activity_id serial,
    activity_time TIMESTAMPTZ NOT NULL,
    activity_type TEXT NOT NULL,
    content_id INT NOT NULL,
    user_id INT NOT NULL
) PARTITION BY RANGE (activity_time);
```

To create a partition for each week of activity data, you can run the following query (using the `partman` schema the extension was installed into):

```sql
SELECT partman.create_parent('public.user_activities', 'activity_time', '1 week');
```

This will create a new partition for each week of data in the `user_activities` table.
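To confirm that the partition set was registered, you can inspect `pg_partman`'s configuration table. A quick check, with column names as in pg_partman 5.x:

```sql
-- One row per partition set that pg_partman maintains
SELECT parent_table, control, partition_interval
FROM partman.part_config
WHERE parent_table = 'public.user_activities';
```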
We can insert some sample data into the table:

```sql
INSERT INTO user_activities (activity_time, activity_type, content_id, user_id) VALUES
('2024-03-15 10:00:00', 'like', 1001, 101),
('2024-03-16 15:30:00', 'comment', 1002, 102),
('2024-03-17 09:45:00', 'share', 1003, 103),
('2024-03-18 18:20:00', 'like', 1004, 104),
('2024-03-19 12:10:00', 'comment', 1005, 105),
('2024-03-20 08:00:00', 'like', 1006, 106),
('2024-03-21 14:15:00', 'share', 1007, 107),
('2024-03-22 11:30:00', 'like', 1008, 108),
('2024-03-23 16:45:00', 'comment', 1009, 109),
('2024-03-24 20:00:00', 'share', 1010, 110),
('2024-03-25 09:30:00', 'like', 1011, 111),
('2024-03-26 13:45:00', 'comment', 1012, 112),
('2024-03-27 17:00:00', 'share', 1013, 113),
('2024-03-28 11:15:00', 'like', 1014, 114),
('2024-03-29 15:30:00', 'comment', 1015, 115);
```

### Querying partitioned tables

We can query against the `user_activities` table as if it were a single table, and Postgres will automatically route the query to the correct partition(s) based on the `activity_time` column.

```sql
SELECT * FROM user_activities
WHERE activity_time BETWEEN '2024-03-20' AND '2024-03-25';
```

This query returns the following results:

```text
 activity_id |     activity_time      | activity_type | content_id | user_id
-------------+------------------------+---------------+------------+---------
          16 | 2024-03-20 08:00:00+00 | like          |       1006 |     106
          17 | 2024-03-21 14:15:00+00 | share         |       1007 |     107
          18 | 2024-03-22 11:30:00+00 | like          |       1008 |     108
          19 | 2024-03-23 16:45:00+00 | comment       |       1009 |     109
          20 | 2024-03-24 20:00:00+00 | share         |       1010 |     110
(5 rows)
```

To see the list of all partitions created for the `user_activities` table, you can run the following query:

```sql
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public' AND table_name LIKE 'user_activities_%';
```

This will return the following results:

```text
        table_name
---------------------------
 user_activities_p20240329
 user_activities_p20240405
 user_activities_p20240315
 user_activities_p20240322
 user_activities_p20240412
 user_activities_p20240419
 user_activities_p20240426
 user_activities_default
 user_activities_p20240301
 user_activities_p20240308
(10 rows)
```

`pg_partman` automatically created tables for weekly intervals close to the current data. As more data is inserted, it will create new partitions. Additionally, there is a `user_activities_default` table that stores data that doesn't fit into any of the existing partitions.

### Data retention policies

To make sure that old data is automatically removed from the main table, you can set up a retention policy:

```sql
UPDATE partman.part_config
SET retention = '4 weeks', retention_keep_table = true
WHERE parent_table = 'public.user_activities';
```

The background worker process that comes bundled with `pg_partman` automatically detaches partitions that are older than 4 weeks from the main table. Since we've set `retention_keep_table` to `true`, the old partitions are kept as separate tables, and not dropped from the database.
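Retention is applied during partition maintenance. If you don't want to wait for the background worker after changing the policy, you can trigger maintenance manually, assuming the `partman` install schema used throughout this guide:

```sql
-- Creates any missing future partitions and enforces retention policies
CALL partman.run_maintenance_proc();
```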
## Additional considerations

### Partitioning an existing table with `pg_partman`

If you have an existing table that you want to partition, you can use `pg_partman` for it. However, it isn't straightforward since it can't be directly altered into the parent table for a partition set. Instead, you need to create a new partitioned table and copy the data from the existing table into the new partitioned table.

We describe the `offline` method here, where queries to the existing table are stopped while the data is being copied over to the new partitioned table. It is also possible to achieve this while keeping the existing table operational, but it involves more complex steps. For more details, refer to the [pg_partman documentation](https://github.com/pgpartman/pg_partman/blob/master/doc/pg_partman_howto.md).

#### Example: Partitioning an existing table

To illustrate, we recreate the table from the previous example as `test_user_activities`, but without specifying partitioning:

```sql
CREATE TABLE public.test_user_activities (
    activity_id serial,
    activity_time TIMESTAMPTZ NOT NULL,
    activity_type TEXT NOT NULL,
    content_id INT NOT NULL,
    user_id INT NOT NULL
);

INSERT INTO test_user_activities (activity_time, activity_type, content_id, user_id) VALUES
('2024-03-15 10:00:00', 'like', 1001, 101),
('2024-03-16 15:30:00', 'comment', 1002, 102),
('2024-03-17 09:45:00', 'share', 1003, 103),
('2024-03-18 18:20:00', 'like', 1004, 104),
('2024-03-19 12:10:00', 'comment', 1005, 105),
('2024-03-20 08:00:00', 'like', 1006, 106),
('2024-03-21 14:15:00', 'share', 1007, 107),
('2024-03-22 11:30:00', 'like', 1008, 108),
('2024-03-23 16:45:00', 'comment', 1009, 109),
('2024-03-24 20:00:00', 'share', 1010, 110),
('2024-03-25 09:30:00', 'like', 1011, 111),
('2024-03-26 13:45:00', 'comment', 1012, 112),
('2024-03-27 17:00:00', 'share', 1013, 113),
('2024-03-28 11:15:00', 'like', 1014, 114),
('2024-03-29 15:30:00', 'comment', 1015, 115);
```

Now, we'll partition the existing `test_user_activities` table using `pg_partman`.

1. Rename the original table so that the partitioned table can be created with the original table's name:

   ```sql
   ALTER TABLE public.test_user_activities RENAME TO old_user_activities;
   ```

2. Create a new table with the same name as the original table, but with partitioning enabled:

   ```sql
   CREATE TABLE public.test_user_activities (
       activity_id serial,
       activity_time TIMESTAMPTZ NOT NULL,
       activity_type TEXT NOT NULL,
       content_id INT NOT NULL,
       user_id INT NOT NULL
   ) PARTITION BY RANGE (activity_time);
   ```

   We were using a `SERIAL` column for `activity_id` in the original table. If you want to keep the same sequence for the new table, you can set the sequence value to the last value of the original table:

   ```sql
   SELECT setval('public.test_user_activities_activity_id_seq', (SELECT MAX(activity_id) FROM public.old_user_activities));
   ```

   In general, we also need to ensure that other properties from the old table, such as privileges, constraints, defaults, indexes, etc., are also applied to the new table.

3. Use the `create_parent()` function provided by `pg_partman` to set up partitioning on the new table:

   ```sql
   SELECT partman.create_parent(
       p_parent_table := 'public.test_user_activities',
       p_control := 'activity_time',
       p_interval := '1 week'
   );
   ```

4. Now we can migrate data from the old table to the new partitioned table in smaller batches:

   ```sql
   CALL partman.partition_data_proc(
       p_parent_table := 'public.test_user_activities',
       p_loop_count := 200,
       p_interval := '1 day',
       p_source_table := 'public.old_user_activities'
   );
   ```

   This will move the data from `old_user_activities` to the new `test_user_activities` table in daily intervals, committing after each batch. The `p_interval` parameter specifies the interval of values to select in each batch, and `p_loop_count` specifies the total number of batches to move.
5. After the data migration is complete, the old table should be empty, and the new partitioned table should contain all the data and child tables. You can verify this by counting the number of rows in both tables:

   ```sql
   SELECT COUNT(*) FROM public.test_user_activities
   UNION ALL
   SELECT COUNT(*) FROM public.old_user_activities;
   ```

   This should return counts of 15 and 0, respectively.

6. Finally, run `VACUUM ANALYZE` on the new partitioned table to update statistics:

   ```sql
   VACUUM ANALYZE public.test_user_activities;
   ```

The `test_user_activities` table is now successfully partitioned using `pg_partman`, with the data migrated from the old table to the new partitioned structure.

### Uniqueness constraints for partitioned tables

This section applies to partitioned tables created natively in Postgres, as well as those created using `pg_partman`.

Postgres doesn't support indexes or unique constraints that span multiple tables. Since a partitioned table is made up of multiple physical tables, you can't create a unique constraint that spans all the partitions. For example, the following query will fail:

```sql
ALTER TABLE user_activities ADD CONSTRAINT unique_activity UNIQUE (activity_id);
```

It returns the following error:

```text
ERROR: unique constraint on partitioned table must include all partitioning columns
DETAIL: UNIQUE constraint on table "user_activities" lacks column "activity_time" which is part of the partition key.
```

However, when the unique constraint includes the partition key columns, Postgres can guarantee uniqueness across all partitions: rows that share the same partition key values always land in the same partition, so per-partition unique indexes are enough to enforce the constraint. For example, including the `activity_time` column in the unique constraint will work because `activity_time` is a partition key column:

```sql
ALTER TABLE user_activities ADD CONSTRAINT unique_activity UNIQUE (activity_id, activity_time);
```

## Conclusion

By leveraging `pg_partman`, you can significantly enhance the native partitioning functionality of Postgres, particularly for large-scale and time-series datasets. The extension simplifies partition management, automates retention and archival tasks, and improves query performance.

## Reference

- [pg_partman Documentation](https://github.com/pgpartman/pg_partman)
- [PostgreSQL Partitioning Documentation](https://www.postgresql.org/docs/current/ddl-partitioning.html)

---

# Source: https://neon.com/llms/extensions-pg_prewarm.txt

# The pg_prewarm extension

> The document details the pg_prewarm extension for Neon, which facilitates the loading of relation data into the PostgreSQL buffer cache to improve query performance by preloading data into memory.

## Source

- [The pg_prewarm extension HTML](https://neon.com/docs/extensions/pg_prewarm): The original HTML version of this documentation

You can use the `pg_prewarm` extension to preload data into the Postgres buffer cache after a restart. Doing so improves query response times by ensuring that your data is readily available in memory. Otherwise, data must be loaded into the buffer cache from disk on-demand, which can result in slower query response times.

In this guide, we'll explore the `pg_prewarm` extension, how to enable it, and how to use it to prewarm your Postgres buffer cache.

**Note**: The `pg_prewarm` extension is open-source and can be installed on any Postgres setup.
Detailed information about the extension is available in the [PostgreSQL Documentation](https://www.postgresql.org/docs/current/pgprewarm.html).

**Version availability**

Please refer to the [list of extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for information about the version of `pg_prewarm` that Neon supports.

## Enable the `pg_prewarm` extension

Enable the `pg_prewarm` extension by running the `CREATE EXTENSION` statement in your Postgres client:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
```

For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Basic usage

To prewarm a specific table, simply use the `pg_prewarm` function with the name of the table you want to cache.

```sql
SELECT pg_prewarm('table_name');
```

Replace `table_name` with the actual name of your table. The output of `SELECT pg_prewarm()` is the number of blocks from the specified table that were loaded into the Postgres buffer cache. The default block size in Postgres is 8192 bytes (8KB).

The `pg_prewarm` function does not support specifying multiple table names in a single command. It's designed to work with a single table at a time. If you want to prewarm multiple tables, you would need to call `pg_prewarm` separately for each.
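If you have many tables, you can generate those per-table calls with a catalog query instead of typing them out. A minimal sketch, assuming you want to prewarm every ordinary table in the `public` schema:

```sql
-- pg_prewarm accepts a regclass, so each table's OID can be passed directly
SELECT c.relname, pg_prewarm(c.oid::regclass) AS blocks_loaded
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r';  -- ordinary tables only
```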
## Running pg_prewarm on indexes

Running `pg_prewarm` on frequently-used indexes can help improve query performance after a Postgres restart. You might also run `pg_prewarm` on indexes that are not frequently used but will be involved in upcoming heavy read operations.

Running `pg_prewarm` on an index is similar to running it on a table, but you specify the index's fully qualified name (schema name plus index name) or OID (Object Identifier) instead. Here's an example that demonstrates how to use `pg_prewarm` to preload an index into memory:

```sql
SELECT pg_prewarm('schema_name.index_name');
```

Replace `schema_name.index_name` with the actual schema and index name you want to prewarm.

If you're not sure about the index name or want to list all indexes for a specific table, you can use the `pg_indexes` view to find out. Here's how you might query for index names:

```sql
SELECT indexname FROM pg_indexes WHERE tablename = 'your_table_name';
```

Replace `your_table_name` with the name of the table whose indexes you're interested in. Once you have the index name, you can then use `pg_prewarm` as shown above.

Additionally, if you prefer to use the index's OID, you can find it using the `pg_class` system catalog. Here's how to find an index's OID:

```sql
SELECT oid FROM pg_class WHERE relname = 'index_name';
```

Then, you can use the OID with `pg_prewarm` like so:

```sql
SELECT pg_prewarm(your_index_oid);
```

## Check the proportion of a table loaded into memory

In this example, you create a table, check its data size, run `pg_prewarm`, and then check to see how much of the table's data was loaded into memory.

1. First, create a table and populate it with some data:

   ```sql
   CREATE TABLE t_test AS SELECT * FROM generate_series(1, 1000000) AS id;
   ```

2. Check the size of the table:

   ```sql
   SELECT pg_size_pretty(pg_relation_size('t_test')) AS table_size_pretty,
          pg_relation_size('t_test') AS table_size_bytes;
   ```

   This command returns the size of the table in both MB and bytes.

   ```text
    table_size_pretty | table_size_bytes
   -------------------+------------------
    35 MB             |         36700160
   ```

3. Load the table data into the Postgres buffer cache using `pg_prewarm`:

   ```sql
   SELECT pg_prewarm('public.t_test') AS blocks_loaded;
   ```

   This will output the number of blocks that were loaded:

   ```text
    blocks_loaded
   ---------------
             4480
   ```

4. To understand the calculation that follows, check the block size of your Postgres instance:

   ```sql
   SHOW block_size;
   ```

   The default block size in Postgres is 8192 bytes (8KB). We'll use this value in the next step.

   ```text
    block_size
   ------------
    8192
   ```

5. Calculate the total size of the data loaded into the cache using the block size and the number of blocks loaded:

   ```sql
   -- Assuming 4480 blocks were loaded (replace with your actual number from pg_prewarm output)
   SELECT 4480 * 8192 AS loaded_data_bytes;
   ```

   You can now compare this value with the size of your table.

   ```text
    loaded_data_bytes
   -------------------
            36700160
   ```

**Note**: The values for the size of the table and the size of the data loaded into the buffer cache as shown in the example above match exactly, which is an ideal scenario. However, there are cases where these values might not match, indicating that not all the data was loaded into the buffer cache; for example, this can happen if `pg_prewarm` only partially loads the table into the buffer cache due to lack of memory availability. Concurrent data modifications could also cause sizes to differ. To understand how much memory is available to your Postgres instance on Neon, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).

## Demonstrating the effect of pg_prewarm

This example shows how preloading data can improve query performance. We'll create two tables with the same data, preload one table, and then run `EXPLAIN ANALYZE` to compare execution time results.

1. Create two sample tables with the same data for comparison:

   ```sql
   CREATE TABLE tbl_transactions_1 (
       tran_id_ SERIAL,
       transaction_date TIMESTAMPTZ,
       transaction_name TEXT
   );

   INSERT INTO tbl_transactions_1 (transaction_date, transaction_name)
   SELECT x, 'dbrnd'
   FROM generate_series('2010-01-01 00:00:00'::timestamptz, '2018-02-01 00:00:00'::timestamptz, '1 minutes'::interval) a(x);
   ```

   ```sql
   CREATE TABLE tbl_transactions_2 (
       tran_id_ SERIAL,
       transaction_date TIMESTAMPTZ,
       transaction_name TEXT
   );

   INSERT INTO tbl_transactions_2 (transaction_date, transaction_name)
   SELECT x, 'dbrnd'
   FROM generate_series('2010-01-01 00:00:00'::timestamptz, '2018-02-01 00:00:00'::timestamptz, '1 minutes'::interval) a(x);
   ```

2. Restart your Postgres instance to clear the cache. On Neon, you can do this by [restarting your compute](https://neon.com/docs/manage/computes#restart-a-compute).

3. Prewarm the first sample table:

   ```sql
   SELECT pg_prewarm('tbl_transactions_1') AS blocks_loaded;
   ```

   This will output the number of blocks that were loaded into the cache:

   ```text
    blocks_loaded
   ---------------
            27805
   ```

4. Now, compare the execution plan of the prewarmed table vs. a non-prewarmed table to see the performance improvement.
```sql EXPLAIN ANALYZE SELECT * FROM tbl_transactions_1; ``` ```sql EXPLAIN ANALYZE SELECT * FROM tbl_transactions_2; ``` The execution time for the prewarmed table should be significantly lower than for the table that has not been prewarmed, as shown here: ```sql EXPLAIN ANALYZE SELECT * FROM tbl_transactions_1; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------- Seq Scan on tbl_transactions_1 (cost=0.00..69608.21 rows=4252321 width=18) (actual time=0.017..228.995 rows=4252321 loops=1) Planning Time: 1.134 ms Execution Time: 344.028 ms (3 rows) EXPLAIN ANALYZE SELECT * FROM tbl_transactions_2; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------- Seq Scan on tbl_transactions_2 (cost=0.00..69608.21 rows=4252321 width=18) (actual time=2.251..11859.232 rows=4252321 loops=1) Planning Time: 0.216 ms Execution Time: 11994.066 ms (3 rows) ``` ## Conclusion Prewarming your table data and indexes can help improve read performance, especially after a database restart or for indexes that are not frequently used but will be involved in upcoming heavy read operations. However, it's important to use this feature cautiously, especially on systems with limited memory, to avoid potential negative impacts on overall performance. ## Resources - [PostgreSQL pg_prewarm documentation](https://www.postgresql.org/docs/current/pgprewarm.html) - [How to size your compute in Neon](https://neon.com/docs/manage/computes#how-to-size-your-compute) --- # Source: https://neon.com/llms/extensions-pg_repack.txt # The pg_repack extension > The document details the pg_repack extension for Neon, which enables users to reorganize tables and indexes in PostgreSQL databases without requiring downtime. ## Source - [The pg_repack extension HTML](https://neon.com/docs/extensions/pg_repack): The original HTML version of this documentation Postgres, like any database system, can accumulate bloat over time due to frequent updates and deletes. Bloat refers to wasted space within your tables and indexes, which can lead to decreased query performance and increased storage usage. `pg_repack` is a powerful Postgres extension that allows you to efficiently remove this bloat by rewriting tables and indexes online, with minimal locking. Unlike `VACUUM FULL` or `CLUSTER`, `pg_repack` avoids exclusive locks, ensuring your applications remain available during the reorganization process. This guide provides an introduction to the `pg_repack` extension and how to leverage it within your Neon database. You'll learn how to install and use `pg_repack` to reclaim disk space and improve database performance by removing bloat from your tables and indexes. ## Enable the `pg_repack` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS pg_repack; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## Understanding `pg_repack` and bloat Before using `pg_repack`, it's helpful to understand what causes bloat and how `pg_repack` addresses it. ### What is Bloat? 
In Postgres, when rows in a table are updated or deleted, the space they occupied isn't immediately reclaimed. Instead, Postgres uses a mechanism called Multi-Version Concurrency Control (MVCC). While MVCC is essential for concurrency and transactional integrity, it can lead to **dead tuples** – outdated row versions that are no longer needed but still occupy space. Indexes also become bloated as they point to these dead tuples or become fragmented over time. This unused space is known as bloat.

### Why remove bloat?

Bloat can negatively impact your database in several ways:

- **Reduced query performance:** Postgres has to scan through bloated tables and indexes, increasing I/O operations and slowing down queries.
- **Increased storage usage:** Bloat consumes disk space, leading to higher storage costs in the long run, especially in a cloud environment like Neon.
- **Inefficient vacuuming:** While regular `VACUUM` helps, it doesn't fully reclaim space from bloat. `VACUUM FULL` does, but it requires an exclusive lock, causing downtime.

### How `pg_repack` works

`pg_repack` provides an online solution to defragment tables and indexes. It works by creating a new copy of the table and indexes, efficiently copying data from the original table to the new one, and then atomically replacing the old table with the new one.

Key features of `pg_repack` include:

- **Online operation:** It operates without requiring exclusive locks for most of the process, minimizing downtime.
- **Minimal locking:** Only short `ACCESS EXCLUSIVE` locks are needed at the beginning and end of the repack process.
- **Bloat removal:** Effectively removes bloat from both tables and indexes, reclaiming disk space and improving performance.
- **Reordering options:** Allows you to optionally reorder table rows based on a clustered index or specified columns, further optimizing data access.
- **Index repack:** You can repack indexes independently of the table, which can be useful for index-specific bloat issues.

**Important**: `pg_repack` requires the target table to have a `PRIMARY KEY` or at least a `UNIQUE` index on a `NOT NULL` column. Ensure your table meets this requirement before running `pg_repack`.

## Understanding `pg_repack` syntax

While `psql` allows you to run commands directly within the SQL environment, `pg_repack` is a command-line tool that you execute from your terminal. If you haven't installed it yet, you'll find installation instructions on the [pg_repack GitHub repository](https://reorg.github.io/pg_repack/#download). The general syntax is as follows:

**Note**: Make sure to install the version of `pg_repack` that matches the one used in your Neon environment. Currently, Neon uses `pg_repack` version 1.5.2.

```bash
pg_repack [OPTIONS]... [DBNAME]
```

Let's break down the key components:

- **`pg_repack`**: This is the command itself, invoking the `pg_repack` executable. Ensure that `pg_repack` is installed and accessible in your system's `PATH`.
- **`[OPTIONS]...`**: These are command-line options that modify the behavior of `pg_repack`. Options are typically provided in the format `--option-name=value` or `-short-option value`. You can specify multiple options to customize the repack operation.
- **`[DBNAME]`**: This is the name of the Postgres database you want to connect to. You can also specify the database connection details using connection options (see below), in which case you might omit `DBNAME` here.

### Common `pg_repack` options

`pg_repack` offers a variety of options to control its behavior.
Here are some of the most commonly used options:

### Reorganization options

- **`-t TABLE`, `--table=TABLE`**: Specifies the table to be reorganized. You can reorganize multiple tables by using this option multiple times (e.g., `-t table1 -t table2`). By default, all eligible tables in the target database are reorganized.
- **`-I TABLE`, `--parent-table=TABLE`**: Reorganizes both the specified table(s) and their inheritors.
- **`-c SCHEMA`, `--schema=SCHEMA`**: Repacks all eligible tables within the specified schema(s).
- **`-o COLUMNS [,...]`, `--order-by=COLUMNS [,...]`**: Reorders the table rows based on the specified column(s). This performs an online `CLUSTER`.
- **`-n`, `--no-order`**: Performs an online `VACUUM FULL` instead of a `CLUSTER` operation, even for clustered tables. This is the default for non-clustered tables since `pg_repack` 1.2.
- **`-x`, `--only-indexes`**: Repacks only the indexes of the specified table(s). Requires using `-t` or `-I` to specify the target table.
- **`-i INDEX`, `--index=INDEX`**: Repacks only the specified index. You can specify multiple indexes with multiple `-i` options.
- **`-j NUM`, `--jobs=NUM`**: Uses multiple parallel jobs (connections) to speed up index rebuilding. Useful for servers with multiple CPU cores and sufficient I/O capacity.
- **`-N`, `--dry-run`**: Performs a "dry run," listing the actions `pg_repack` _would_ take without actually executing them. Useful for previewing the operation.
- **`-Z`, `--no-analyze`**: Skips running `ANALYZE` on the repacked table(s) at the end of the process. By default, `pg_repack` runs `ANALYZE`.
- **`-k`, `--no-superuser-check`**: **Crucially important for Neon!** Skips the superuser check. You must use this option when running `pg_repack` against Neon, as Neon users are not superusers.

### Connection options

These options specify how `pg_repack` connects to your database. You can often omit the `DBNAME` from the main command if you provide these connection options.

- **`-d DBNAME`, `--dbname=DBNAME`**: Specifies the database name to connect to.
- **`-h HOSTNAME`, `--host=HOSTNAME`**: Specifies the hostname of your Neon endpoint. You can find this by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.
- **`-p PORT`, `--port=PORT`**: Specifies the port. For Neon, this is always `5432`.
- **`-U USERNAME`, `--username=USERNAME`**: Specifies your Neon username. You can find this by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.
- **`-W`, `--password`**: Forces `pg_repack` to prompt for your password.

### Generic options

- **`-e`, `--echo`**: Prints the SQL commands executed by `pg_repack` to the terminal. Useful for debugging or understanding the process.
- **`-E LEVEL`, `--elevel=LEVEL`**: Sets the output message level (e.g., `DEBUG`, `INFO`, `WARNING`, `ERROR`). Defaults to `INFO`.
- `--help`: Displays help information about `pg_repack` and its options.
- `--version`: Displays the version of `pg_repack`.

## Key use cases for `pg_repack`

`pg_repack` is a versatile tool that can address various performance and maintenance challenges. Here are some common use cases where `pg_repack` can be beneficial:

### Reclaim space from bloated tables

Over time, tables can accumulate bloat from updates and deletes, wasting storage and impacting performance.
`pg_repack` rewrites tables to remove dead rows and reclaim unused space, similar to `VACUUM FULL`, but crucially, **without blocking write operations**. This is essential for maintaining application availability.

```bash
# Repack a single table (performs an online VACUUM FULL)
pg_repack --no-order --table orders
```

### Reorder data by an index for optimized queries

If you frequently query your data based on a specific index, physically reordering the table rows according to that index can significantly improve query performance. This is similar to the `CLUSTER` command, but `pg_repack` performs this reordering **online**, minimizing disruption.

```bash
# Reorder the 'orders' table by 'order_date' in descending order
pg_repack --table orders --order-by "order_date DESC"
```

### Rebuild indexes online to improve scan performance

Indexes can become fragmented over time, leading to less efficient index scans. `pg_repack` can rebuild indexes **online**, creating fresh, optimized indexes to improve query performance without locking the table for writes.

```bash
# Rebuild all indexes of the 'orders' table
pg_repack --table orders --only-indexes
```

### Example syntax

Here are a few examples of how to use `pg_repack` with different options:

### Basic repack of a table

```bash
pg_repack -k -h <hostname> -p 5432 -d <dbname> -U <username> --table your_table_name
```

### Reordering a table by a column

```bash
pg_repack -k -h <hostname> -p 5432 -d <dbname> -U <username> --table your_table_name --order-by "indexed_column DESC"
```

### Repacking only indexes of a table

```bash
pg_repack -k -h <hostname> -p 5432 -d <dbname> -U <username> --table your_table_name --only-indexes
```

### Dry run to preview repack operations

```bash
pg_repack -k -N -h <hostname> -p 5432 -d <dbname> -U <username> --table your_table_name
```

## Using `pg_repack` to reorganize tables

Let's walk through a practical example of using `pg_repack` to reorganize a table in your Neon database.

### Connect to your Neon database

Ensure you are connected to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor). You can find your connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.

### Create a sample table with bloat (Optional)

For demonstration purposes, let's create a sample table and introduce some bloat. If you already have a table you want to repack, you can skip this step.

```sql
CREATE TABLE public.bloated_table (
    id SERIAL PRIMARY KEY,
    data TEXT
);

-- Insert some initial data
INSERT INTO public.bloated_table (data)
SELECT md5(random()::text)
FROM generate_series(1, 100000);

-- Delete a significant portion of the data to simulate bloat
DELETE FROM public.bloated_table WHERE id % 2 = 0;
```

### Verify table size before `pg_repack`

Let's check the size of the table before running `pg_repack`. You can use the `\dt+` command in `psql` or query `pg_relation_size` in SQL.

In `psql`:

```psql
\dt+ bloated_table
```

Or in SQL:

```sql
SELECT pg_size_pretty(pg_relation_size('bloated_table'));
```

Note the size of the table before the repack.

### Run `pg_repack`

Now, execute the `pg_repack` command from your terminal:

```bash
pg_repack -k -h <hostname> -p 5432 -d <dbname> -U <username> --table bloated_table
```

Replace the placeholders with your Neon connection details.

- `-h <hostname>`: Your Neon hostname.
- `-p 5432`: The port (always 5432 for Neon Postgres).
- `-d <dbname>`: Your Neon database name.
- `-U <username>`: Your Neon username.
- `--table bloated_table`: Specifies the table to repack.
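For example, with hypothetical connection details (yours will differ; copy the real values from the **Connect to your database** modal), the full command might look like this:

```bash
# Hypothetical values shown for illustration only; substitute your own
# hostname, database name, and username from the Neon Console.
pg_repack -k \
  -h ep-cool-darkness-123456.us-east-2.aws.neon.tech \
  -p 5432 \
  -d neondb \
  -U alex \
  --table bloated_table
```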
You will be prompted to enter your Neon password after running the command.

### Verify table size after `pg_repack`

After `pg_repack` completes successfully, check the table size again using the same command as before. You should observe a reduction in the table size, indicating that `pg_repack` has successfully removed bloat.

**Example output (size reduction)**

Before `pg_repack`:

```text
 Schema |     Name      | Type  |    Owner     | Persistence | Access method |  Size   | Description
--------+---------------+-------+--------------+-------------+---------------+---------+-------------
 public | bloated_table | table | neondb_owner | permanent   | heap          | 8192 kB |
(1 row)
```

After `pg_repack`:

```text
 Schema |     Name      | Type  |    Owner     | Persistence | Access method |  Size   | Description
--------+---------------+-------+--------------+-------------+---------------+---------+-------------
 public | bloated_table | table | neondb_owner | permanent   | heap          | 4096 kB |
(1 row)
```

In this example, the table size was reduced from 8 MB to 4 MB after running `pg_repack`. The actual size reduction will depend on the amount of bloat present in your table.

## Best practices and considerations

While `pg_repack` generally works seamlessly with Neon, here are a few things to keep in mind:

- **`-k` / `--no-superuser-check` flag:** Always use the `-k` / `--no-superuser-check` flag when running `pg_repack` against your Neon database.
- **Disk space:** `pg_repack` requires temporary disk space roughly double the size of the table being repacked. Ensure you have sufficient storage for your Neon project.
- **Resource usage:** While `pg_repack` is designed to be online, it does consume resources (CPU, I/O) during operation. Consider running it during off-peak hours for very resource-intensive operations, especially on production databases.

## Conclusion

`pg_repack` is an invaluable tool for maintaining the health and performance of your Neon Postgres database. By enabling you to remove bloat online and with minimal locking, it helps ensure your database remains efficient, responsive, and cost-effective. Regularly using `pg_repack`, especially on tables with frequent updates and deletes, can help you reclaim disk space, improve query performance, and optimize your database.

## References

- [pg_repack GitHub Repository](https://github.com/reorg/pg_repack)
- [pg_repack Documentation on PGXN](https://pgxn.org/dist/pg_repack/)
- [Investigating Postgres Query Performance](https://neon.com/blog/postgres-support-recap-investigating-postgres-query-performance)

---

# Source: https://neon.com/llms/extensions-pg_search.txt

# The pg_search extension

> The document details the pg_search extension for Neon, explaining how it enhances PostgreSQL's full-text search capabilities by integrating with various search-related functions and configurations.

## Source

- [The pg_search extension HTML](https://neon.com/docs/extensions/pg_search): The original HTML version of this documentation

The `pg_search` extension by [ParadeDB](https://www.paradedb.com/) adds functions and operators to Postgres that use [BM25 (Best Matching 25)](https://en.wikipedia.org/wiki/Okapi_BM25) indexes for efficient, high-relevance text searches. It supports standard SQL syntax and JSON query objects, offering features similar to those in Elasticsearch. `pg_search` eliminates the need to integrate external search engines, simplifying your architecture and providing real-time search functionality that's tightly coupled with your transactional data.
In this guide, you'll learn how to enable `pg_search` on Neon, understand the fundamentals of BM25 scoring and inverted indexes, and explore hands-on examples to create indexes and perform full-text searches on your Postgres database.

**Note** pg_search on Neon: `pg_search` is currently only available on Neon projects created in an [AWS region](https://neon.com/docs/introduction/regions#aws-regions). It is not yet supported on Neon projects created in Azure regions.

## Enable the `pg_search` extension

Tab: Postgres 17

Install the `pg_search` extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS pg_search;
```

Tab: Postgres 14 - 16

The `pg_search` extension is supported on Postgres 14–16 for Neon projects in AWS regions. Contact Neon support to enable it for your project.

## Understanding text search with `pg_search`

`pg_search` enables text searching within your Postgres database, helping you find rows containing specific keywords or phrases in text columns. Unlike basic `LIKE` queries, `pg_search` offers advanced scoring, relevance ranking, and language handling to deliver more accurate and context-aware search results.

It also addresses major performance limitations of native Postgres full-text search (FTS) by using a **BM25 covering index**, which indexes text along with metadata (numeric, datetime, JSON, etc.), enabling complex boolean, aggregate, and ordered queries to be processed significantly faster—often reducing query times from minutes to seconds.

Key features include:

- **Advanced relevance ranking:** Orders search results by relevance, incorporating phrase, regex, fuzzy matching, and other specialized FTS queries.
- **Powerful indexing with flexible tokenization:** Supports multiple tokenizers (e.g., ICU, Lindera) and token filters (e.g., language-aware stemmers), improving search accuracy across different languages.
- **Hybrid search:** Combines BM25 scores with `pgvector` embeddings to enhance search experiences.
- **Faceted search:** Allows categorization and filtering of search results based on query parameters.
- **Expressive query builder:** Provides an Elastic DSL-like query syntax for constructing complex search queries.

By leveraging these features, `pg_search` enhances both performance and flexibility, making full-text search in Postgres more efficient and developer-friendly.

### BM25: The relevance scoring algorithm

`pg_search` utilizes the [**BM25 (Best Matching 25)**](https://en.wikipedia.org/wiki/Okapi_BM25) algorithm, a ranking function widely adopted by modern search engines, to calculate relevance scores for full-text search results. BM25 considers several factors to determine relevance:

- **Term Frequency (TF):** How often a search term appears in a row's text. More occurrences suggest higher relevance.
- **Inverse Document Frequency (IDF):** How common or rare your search term is across all rows. Less common words often indicate more specific results.
- **Document Length Normalization:** BM25 adjusts for text length, preventing longer rows from automatically seeming more relevant.

BM25 assigns a relevance score to each row, with higher scores indicating better matches.

### Inverted index for efficient searching

For fast searching, `pg_search` uses an **inverted index**.
Think of it as an index in the back of a book, but instead of mapping topics to page numbers, it maps words (terms) to the database rows (documents) where they appear. This index structure lets `pg_search` quickly find rows containing your search terms without scanning every table row, greatly speeding up queries.

With these basics in mind, let's learn how to create a BM25 index and start performing full-text searches with `pg_search` on Neon.

## Getting started with `pg_search`

`pg_search` has a special operator, `@@@`, that you can use in SQL queries to perform full-text searches. This operator allows you to search for specific words or phrases within text columns, returning rows that match your search criteria. You can also sort results by relevance and highlight matched terms.

Let's create a sample table, set up a BM25 index, and run some search queries to explore `pg_search` in action.

### Creating a sample table for text search

To demonstrate how `pg_search` functions, we'll begin by creating a sample table named `mock_items` and populating it with example data. ParadeDB provides a convenient tool to generate a test table with sample data for experimentation.

First, connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor). Once connected, execute the following SQL command:

```sql
CALL paradedb.create_bm25_test_table(
  schema_name => 'public',
  table_name => 'mock_items'
);
```

This generates a table named `mock_items` with columns that include `id`, `description`, `rating`, and `category`, which we'll use in our search examples.

Let's examine the initial items within our newly created `mock_items` table. Run the following SQL query:

```sql
SELECT description, rating, category
FROM mock_items
LIMIT 3;
```

The output will display the first three rows from the `mock_items` table:

```text
       description        | rating |  category
--------------------------+--------+-------------
 Ergonomic metal keyboard |      4 | Electronics
 Plastic Keyboard         |      4 | Electronics
 Sleek running shoes      |      5 | Footwear
(3 rows)
```

Next, let's create our first search index, named `item_search_idx`, on the `mock_items` table. This index will enable searching across the `id`, `description`, and `category` columns. It's necessary to designate one column as the `key_field`; we will use `id` for this purpose. The `key_field` serves as a unique identifier for each item within the index.

**Note** Key Field Selection: It is crucial to select a column that consistently contains a unique value for every row. This ensures the search index operates as intended.

Run the following SQL command to create the `item_search_idx` index:

```sql
CREATE INDEX item_search_idx ON mock_items
USING bm25 (id, description, category)
WITH (key_field='id');
```

This will create a BM25 index on the `mock_items` table, enabling us to search within the `id`, `description`, and `category` columns. The `key_field` parameter specifies that the `id` column serves as the unique identifier for each row in the index.

Now that we have our `item_search_idx` index, let's explore some searches using the `@@@` operator in our SQL queries.

### Simple keyword search

Let's begin by finding all items where the `description` contains the word **'shoes'**.
Run the following SQL query:

```sql
SELECT description, category
FROM mock_items
WHERE description @@@ 'shoes';
```

This query will locate all rows in `mock_items` where the `description` column includes the word **'shoes'**.

```text
     description     | category
---------------------+----------
 Sleek running shoes | Footwear
 White jogging shoes | Footwear
 Generic shoes       | Footwear
(3 rows)
```

### Searching for exact phrases

To search for a specific phrase, enclose it in double quotes. Let's find items where the `description` contains the exact phrase **"metal keyboard"**:

```sql
SELECT description, category
FROM mock_items
WHERE description @@@ '"metal keyboard"';
```

This search will exclusively find rows that contain the exact phrase **"metal keyboard"**.

```text
       description        |  category
--------------------------+-------------
 Ergonomic metal keyboard | Electronics
(1 row)
```

If we remove the double quotes, the search will find rows containing both **'metal'** and **'keyboard'**, but the words are not required to be adjacent.

```sql
SELECT description, category
FROM mock_items
WHERE description @@@ 'metal keyboard';
```

The output is:

```text
       description        |  category
--------------------------+-------------
 Ergonomic metal keyboard | Electronics
 Plastic Keyboard         | Electronics
(2 rows)
```

### Advanced search options

#### paradedb.match: Similar word search and keyword matching

The `paradedb.match` function is used for keyword searches and for finding words similar to your search term, even with typos. For example, to find items similar to **'running shoes'**, use:

```sql
SELECT description, category
FROM mock_items
WHERE id @@@ paradedb.match('description', 'running shoes');
```

```text
     description     | category
---------------------+----------
 Sleek running shoes | Footwear
 White jogging shoes | Footwear
 Generic shoes       | Footwear
(3 rows)
```

You can also use `paradedb.match` with JSON syntax. For instance, to find items with a description similar to **'running shoes'**:

```sql
SELECT description, category
FROM mock_items
WHERE id @@@ '{"match": {"field": "description", "value": "running shoes"}}'::jsonb;
```

#### Searching with typos: Fuzzy matching

To retrieve results even with minor errors in the search term, you can use `paradedb.match` with the `distance` option. Suppose you mistyped **'running'** as **'runing'**. You can still find relevant results using fuzzy matching:

```sql
SELECT description, category
FROM mock_items
WHERE id @@@ paradedb.match('description', 'runing', distance => 1);
```

This will find items where the `description` is similar to **'runing'** within a [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) of 1.

```text
     description     | category
---------------------+----------
 Sleek running shoes | Footwear
(1 row)
```

#### paradedb.phrase: Searching for phrases with words nearby

The `paradedb.phrase` function, combined with the `slop` option, helps you find phrases even if the words are not immediately adjacent. The `slop` value specifies the number of intervening words allowed. A `slop` of 1 permits one extra word in between.

```sql
SELECT description, category
FROM mock_items
WHERE id @@@ paradedb.phrase('description', ARRAY['white', 'shoes'], slop => 1);
```

This query will find rows where **'white'** and **'shoes'** are within one word or less of each other.
```text
     description     | category
---------------------+----------
 White jogging shoes | Footwear
(1 row)
```

### Sorting search results by relevance

To ensure the most relevant results are displayed first, you can sort your search results by relevance. Utilize `paradedb.score()` with `ORDER BY` to achieve this:

```sql
SELECT description, category, rating, paradedb.score(id)
FROM mock_items
WHERE description @@@ 'shoes'
ORDER BY paradedb.score(id) DESC;
```

This query will find items matching **'shoes'** and then present them in order from most to least relevant based on their search score (BM25 relevance score).

```text
     description     | category | rating |   score
---------------------+----------+--------+-----------
 Generic shoes       | Footwear |      4 | 2.8772602
 Sleek running shoes | Footwear |      5 | 2.4849067
 White jogging shoes | Footwear |      3 | 2.4849067
(3 rows)
```

### Highlighting search results

To highlight matched terms in the search results, you can use the `paradedb.snippet()` function. This function generates snippets of text containing the matched words, making it easier to identify relevant content.

```sql
SELECT id, paradedb.snippet(description)
FROM mock_items
WHERE description @@@ 'shoes'
LIMIT 3;
```

This will provide snippets of the `description` where the words matching your search are wrapped in `<b></b>` tags by default. This visual cue makes the matched terms stand out when results are displayed in your application.

```text
 id |           snippet
----+-----------------------------
  3 | Sleek running <b>shoes</b>
  4 | White jogging <b>shoes</b>
  5 | Generic <b>shoes</b>
(3 rows)
```

If you prefer different tags, you can customize them using the `start_tag` and `end_tag` options with `paradedb.snippet()`. For example:

```sql
SELECT id, paradedb.snippet(description, start_tag => '<i>', end_tag => '</i>')
FROM mock_items
WHERE description @@@ 'shoes'
LIMIT 3;
```

This will wrap the matched words in `<i>` and `</i>` tags instead of the default `<b>` and `</b>`.

```text
 id |           snippet
----+-----------------------------
  3 | Sleek running <i>shoes</i>
  4 | White jogging <i>shoes</i>
  5 | Generic <i>shoes</i>
(3 rows)
```

### Combining search words with `AND/OR`

To create more complex searches, you can use `OR` and `AND` operators to combine keywords. For instance, to retrieve items with **'shoes'** in the `description` OR **'Electronics'** in the `category`, you can use:

```sql
SELECT description, category
FROM mock_items
WHERE description @@@ 'shoes' OR category @@@ 'Electronics'
LIMIT 3;
```

This will find items that satisfy either of these conditions.

```text
       description        |  category
--------------------------+-------------
 Ergonomic metal keyboard | Electronics
 Plastic Keyboard         | Electronics
 Sleek running shoes      | Footwear
(3 rows)
```

### Query builder functions

In addition to query strings, query builder functions can be used to compose various types of complex queries. For a list of supported query builder functions, refer to ParadeDB's [Query Builder](https://docs.paradedb.com/documentation/advanced/overview) documentation.

### Joined search with multiple tables

`pg_search` supports full-text search over JOINs, which is crucial for database schemas that store data in a normalized fashion.
Let's create a table called `orders` that references our `mock_items` table:

```sql
CALL paradedb.create_bm25_test_table(
  schema_name => 'public',
  table_name => 'orders',
  table_type => 'Orders'
);

ALTER TABLE orders
ADD CONSTRAINT foreign_key_product_id
FOREIGN KEY (product_id)
REFERENCES mock_items(id);

SELECT * FROM orders LIMIT 3;
```

Next, let's create a BM25 index over the `orders` table:

```sql
CREATE INDEX orders_idx ON orders
USING bm25 (order_id, customer_name)
WITH (key_field='order_id');
```

Now we can perform a search across both tables using a JOIN. The following query searches for rows where `customer_name` matches 'Johnson' and `description` matches 'shoes':

```sql
SELECT o.order_id, o.customer_name, m.description
FROM orders o
JOIN mock_items m ON o.product_id = m.id
WHERE o.customer_name @@@ 'Johnson' AND m.description @@@ 'shoes'
ORDER BY order_id
LIMIT 5;
```

This demonstrates how `pg_search` can be used to search across related tables, allowing for powerful queries that combine data from multiple sources.

## Performance optimizations for `pg_search`

To optimize `pg_search` performance, adjust both Postgres and `pg_search` settings for indexing and query speed. `pg_search` parameter names start with `paradedb`. You can configure both Postgres and `pg_search` settings for the current session using `SET`.

### Index build time

Optimize index build time with these settings. The `maintenance_work_mem` setting is typically the only one requiring tuning. The other two settings have proven default values that typically do not require modification.

- **`maintenance_work_mem`**: Sets the maximum amount of memory used for maintenance operations such as `CREATE INDEX`. Increasing this setting can speed up index builds by improving Write-Ahead Log (WAL) performance. For example, on a 100-million-row table, allocating multiple GBs can reduce index build time from hours to minutes. In Neon, `maintenance_work_mem` is set based on your compute size. You can increase it for the current session. Do not exceed 50–60% of your compute's available RAM. See [Neon parameter settings by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size).

  ```sql
  SET maintenance_work_mem = '10 GB';
  ```

- **`paradedb.create_index_memory_budget`**: Defines the memory per indexing thread before writing index segments to disk. The default is 1024 MB (1 GB). Large tables may need a higher value. If set to `0`, the budget is derived from `maintenance_work_mem` and `paradedb.create_index_parallelism`.

  ```sql
  SET paradedb.create_index_memory_budget = 2048;
  ```

- **`paradedb.create_index_parallelism`**: Controls the number of threads used during `CREATE INDEX`. The default is `0`, which automatically detects the available parallelism of your Neon compute. You can explicitly set:

  ```sql
  SET paradedb.create_index_parallelism = 8;
  ```

For more information about optimizing BM25 index size, see [ParadeDB — Index Size](https://docs.paradedb.com/documentation/configuration/index_size).

### Throughput

**Note**: Most users will not need to adjust these advanced throughput settings.

Tune `INSERT/UPDATE/COPY` throughput for the BM25 index with these settings:

- **`paradedb.statement_parallelism`**: Controls indexing threads during `INSERT/UPDATE/COPY`. Default is `0` (auto-detects parallelism).
  - Use `1` for single-row atomic inserts/updates to avoid unnecessary threading.
  - Use a higher value for bulk inserts and updates.
  ```sql
  SET paradedb.statement_parallelism = 1;
  ```

- **`paradedb.statement_memory_budget`**: Memory per indexing thread before writing to disk. Default is 1024 MB (1 GB). Higher values may improve indexing performance. See [ParadeDB — Statement Memory Budget](https://docs.paradedb.com/documentation/configuration/write#statement-memory-budget).
  - If set to `0`, `maintenance_work_mem / paradedb.statement_parallelism` is used.
  - For single-row updates, 15 MB prevents excess memory allocation.
  - For bulk inserts/updates, increase as needed.

  ```sql
  SET paradedb.statement_memory_budget = 15;
  ```

### Search performance

Search performance can benefit from parallel workers, more memory provided by larger Neon compute sizes, and preloading indexes into memory.

#### Parallel workers

Increase parallel workers to speed up parallel scans:

- **`max_worker_processes`**: Controls total worker processes across all connections.

  ```sql
  SET max_worker_processes = 8;
  ```

- **`max_parallel_workers`**: Defines the number of workers available for parallel scans.

  ```sql
  SET max_parallel_workers = 8;
  ```

- **`max_parallel_workers_per_gather`**: Limits parallel workers per query. The default in Neon is `2`, but you can adjust it. The total number of parallel workers should not exceed your Neon compute's vCPU count. See [Neon parameter settings by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size).

  ```sql
  SET max_parallel_workers_per_gather = 8;
  ```

#### Keeping indexes in memory

Keeping indexes in memory improves query performance by reducing disk access. In Postgres, `shared_buffers` defines the buffer cache size, which determines how much memory is allocated for caching data. In Neon, this value is set automatically based on your compute size.

In addition to `shared_buffers`, **Neon's Local File Cache (LFC)** extends memory up to 75% of your compute's RAM. This allows frequently accessed indexes and data to remain in memory, improving performance. Both `shared_buffers` and the LFC size depend on your compute size. For details, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).

To further optimize performance, you can use the Postgres `pg_prewarm` extension to preload indexes into memory. This ensures fast query response times by warming up the cache after index creation or a restart of your Neon compute. To run `pg_prewarm` on an index:

```sql
SELECT pg_prewarm('index_name');
```

For additional details, see [Running pg_prewarm on indexes](https://neon.com/docs/extensions/pg_prewarm#running-pgprewarm-on-indexes).

## Best practices for using `pg_search`

To optimize your search functionality and ensure efficient performance, consider the following best practices when using `pg_search`:

- **Analyze query plans:** Use `EXPLAIN` to analyze query plans and identify potential bottlenecks.
- **Index all relevant columns:** Include all columns used in search queries, sorting, or filtering for optimal performance.
- **Utilize query builder functions:** Leverage query builder functions or JSON syntax for complex queries like fuzzy matching and phrase matching.

## Conclusion

You have successfully learned how to enable and utilize the `pg_search` extension on Neon for full-text search. By leveraging BM25 scoring and inverted indexes, `pg_search` provides powerful search capabilities directly within your Postgres database, eliminating the need for external search engines and ensuring real-time, ACID-compliant search functionality.
While this guide provides a comprehensive introduction to `pg_search` on Neon, it is not exhaustive. We haven't covered topics like:

- **Advanced tokenization and language handling:** Exploring specialized [tokenizers](https://docs.paradedb.com/documentation/indexing/tokenizers#tokenizers) and language-specific features.
- **The full range of query types:** Query functions like `more_like_this`, `regex_phrase`, and compound queries for complex search needs.
- **Leveraging fast fields:** Optimizing performance with [fast fields](https://docs.paradedb.com/documentation/indexing/fast_fields#fast-fields) for aggregations, filtering, and sorting, and understanding their configuration.
- **Query-time boosting:** Fine-tuning search relevance by applying [boosts](https://docs.paradedb.com/documentation/advanced/compound/boost#boost) to specific fields or terms within your queries.

For a deeper dive into these and other advanced features, please refer to the official [ParadeDB documentation](https://docs.paradedb.com/welcome/introduction).

## Resources

- [ParadeDB Documentation](https://docs.paradedb.com/welcome/introduction)
- [Stemming in ParadeDB](https://docs.paradedb.com/documentation/indexing/token_filters#stemmer)
- [BM25 Algorithm](https://en.wikipedia.org/wiki/Okapi_BM25)
- [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance)

---

# Source: https://neon.com/llms/extensions-pg_stat_statements.txt

# The pg_stat_statements extension

> The document details the pg_stat_statements extension for Neon, which enables users to track and analyze SQL query performance by collecting execution statistics.

## Source

- [The pg_stat_statements extension HTML](https://neon.com/docs/extensions/pg_stat_statements): The original HTML version of this documentation

The `pg_stat_statements` extension provides a detailed statistical view of SQL statement execution within a Postgres database. It tracks information such as execution counts, total and average execution times, and more, helping database administrators and developers analyze and optimize SQL query performance.

This guide covers:

- [Enabling pg_stat_statements](https://neon.com/docs/extensions/pg_stat_statements#enable-the-pgstatstatements-extension)
- [Usage examples](https://neon.com/docs/extensions/pg_stat_statements#usage-examples)
- [Resetting statistics](https://neon.com/docs/extensions/pg_stat_statements#reset-statistics)

**Note**: `pg_stat_statements` is an open-source extension for Postgres that can be installed on any Neon project using the instructions below.

### Version availability

The version of `pg_stat_statements` available on Neon depends on the version of Postgres you select for your Neon project. For supported extension versions, see [Supported Postgres extensions](https://neon.com/docs/extensions/pg-extensions).

### Data persistence

In Neon, statistics collected by the `pg_stat_statements` extension are not retained when your Neon compute (where Postgres runs) is suspended or restarted. For example, if your compute scales down to zero due to inactivity, any existing statistics are lost. New statistics will be gathered once your compute restarts.

For more details about the lifecycle of a Neon compute, see [Compute lifecycle](https://neon.com/docs/conceptual-guides/compute-lifecycle/). For information about configuring Neon's scale to zero behavior, see [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero).
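Because statistics reset with the compute, it can be useful to confirm how much history has accumulated before drawing conclusions from the queries below. As a minimal sanity check (assuming the extension is already enabled, as described next), you can count the tracked statements:

```sql
-- A low count shortly after a restart means statistics were reset
-- along with the compute, and averages may not yet be meaningful.
SELECT count(*) FROM pg_stat_statements;
```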
## Enable the `pg_stat_statements` extension

The extension is installed by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon.

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Usage examples

This section provides `pg_stat_statements` usage examples.

### Query the pg_stat_statements view

The main interface is the `pg_stat_statements` view, which contains one row per distinct database query, showing various statistics.

```sql
SELECT * FROM pg_stat_statements LIMIT 10;
```

The view contains details like those shown below:

```
| userid | dbid  | queryid              | query                | calls |
|--------|-------|----------------------|----------------------|-------|
| 16391  | 16384 | -9047282044438606287 | SELECT * FROM users; | 10    |
```

For a complete list of `pg_stat_statements` columns and descriptions, see [The pg_stat_statements View](https://www.postgresql.org/docs/current/pgstatstatements.html#PGSTATSTATEMENTS-PG-STAT-STATEMENTS).

Let's explore some example usage patterns.

### Find the most frequently executed queries

The most frequently run queries are often critical paths and optimization candidates. This query retrieves details about the most frequently executed queries, ordered by the number of calls. Only the top 10 rows are returned (`LIMIT 10`):

```sql
SELECT
  userid,
  query,
  calls,
  (total_exec_time / 1000 / 60) as total_min,
  mean_exec_time as avg_ms
FROM pg_stat_statements
ORDER BY 3 DESC
LIMIT 10;
```

### Monitor slow queries

A high average runtime can indicate an inefficient query. The query below uses the `query`, `mean_exec_time` (average execution time per call), and `calls` columns. The condition `WHERE mean_exec_time > 1` keeps only queries with an average execution time greater than 1 millisecond (adjust this threshold as needed).

```sql
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
WHERE mean_exec_time > 1
ORDER BY mean_exec_time DESC;
```

This query returns the following results:

```
| Query                                         | Mean Time | Calls |
|-----------------------------------------------|-----------|-------|
| SELECT p.*, c.name AS category FROM products  | 250.60ms  | 723   |
```

This query retrieves the top 10 queries with the highest average execution time, focusing on queries run more than 500 times, for the current user.

```sql
WITH statements AS (
  SELECT * FROM pg_stat_statements pss
  JOIN pg_roles pr ON (pss.userid = pr.oid)
  WHERE pr.rolname = current_user
)
SELECT calls, mean_exec_time, query
FROM statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY mean_exec_time DESC
LIMIT 10;
```

The next query returns the 10 most frequently executed queries for the current user, focusing on those executed over 500 times and with some cache usage. It orders queries by call count and cache hit ratio to highlight potential areas for optimization.
```sql
WITH statements AS (
  SELECT * FROM pg_stat_statements pss
  JOIN pg_roles pr ON (pss.userid = pr.oid)
  WHERE pr.rolname = current_user
)
SELECT
  calls,
  shared_blks_hit,
  shared_blks_read,
  shared_blks_hit / (shared_blks_hit + shared_blks_read)::NUMERIC * 100 AS hit_cache_ratio,
  query
FROM statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY calls DESC, hit_cache_ratio ASC
LIMIT 10;
```

This query retrieves the top 10 longest-running queries (in terms of mean execution time), focusing on queries executed more than 500 times, for the current user.

```sql
WITH statements AS (
  SELECT * FROM pg_stat_statements pss
  JOIN pg_roles pr ON (pss.userid = pr.oid)
  WHERE pr.rolname = current_user
)
SELECT
  calls,
  min_exec_time,
  max_exec_time,
  mean_exec_time,
  stddev_exec_time,
  (stddev_exec_time / mean_exec_time) AS coeff_of_variance,
  query
FROM statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY mean_exec_time DESC
LIMIT 10;
```

### Find queries that return many rows

To identify queries that return a lot of rows, you can select the `query` and `rows` columns, representing the SQL statement and the number of rows returned by each statement, respectively.

```sql
SELECT query, rows
FROM pg_stat_statements
ORDER BY rows DESC
LIMIT 10;
```

This query returns results similar to the following:

```
| Query                                         | Rows    |
|-----------------------------------------------|---------|
| SELECT * FROM products;                       | 112,394 |
| SELECT * FROM users;                          | 98,723  |
| SELECT p.*, c.name AS category FROM products  | 23,984  |
```

### Find the most time-consuming queries

The following query returns details about the most time-consuming queries, ordered by execution time.

```sql
SELECT
  userid,
  query,
  calls,
  total_exec_time,
  rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

## Reset statistics

When executed, the `pg_stat_statements_reset()` function resets the accumulated statistical data, such as execution times and counts for SQL statements, to zero. It's particularly useful in scenarios where you want to start fresh with collecting performance statistics.

**Note**: In Neon, only [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) roles have the privilege required to execute this function. The default role created with a Neon project and roles created in the Neon Console, CLI, and API are granted membership in the `neon_superuser` role.

```sql
SELECT pg_stat_statements_reset();
```

## Resources

- [PostgreSQL documentation for pg_stat_statements](https://www.postgresql.org/docs/current/pgstatstatements.html)

---

# Source: https://neon.com/llms/extensions-pg_tiktoken.txt

# The pg_tiktoken extension

> The document details the pg_tiktoken extension for Neon, which integrates OpenAI's tiktoken library into PostgreSQL to tokenize text data efficiently within the database.

## Source

- [The pg_tiktoken extension HTML](https://neon.com/docs/extensions/pg_tiktoken): The original HTML version of this documentation

The `pg_tiktoken` extension enables fast and efficient tokenization of data in your Postgres database using OpenAI's [tiktoken](https://github.com/openai/tiktoken) library.

This topic provides guidance on installing the extension, utilizing its features for tokenization and token management, and integrating the extension with ChatGPT models.

## What is a token?

Language models process text in units called tokens. A token can be as short as a single character or as long as a complete word, such as "a" or "apple."
In some languages, a token may even be shorter than one character or longer than one word. For example, consider the sentence "Neon is serverless Postgres." It can be divided into seven tokens: ["Ne", "on", "is", "server", "less", "Post", "gres"].

## `pg_tiktoken` functions

The `pg_tiktoken` extension offers two functions:

- `tiktoken_encode`: Accepts text inputs and returns tokenized output, allowing you to seamlessly tokenize your text data.
- `tiktoken_count`: Counts the number of tokens in a given text. This feature helps you adhere to text length limits, such as those set by OpenAI's language models.

## Install the `pg_tiktoken` extension

You can install the `pg_tiktoken` extension by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon.

```sql
CREATE EXTENSION pg_tiktoken;
```

For information about using the Neon **SQL Editor**, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Use the `tiktoken_encode` function

The `tiktoken_encode` function tokenizes text input and returns a tokenized output. The function accepts encoding names and OpenAI model names as the first argument and the text you want to tokenize as the second argument, as shown:

```sql
SELECT tiktoken_encode('text-davinci-003', 'The universe is a vast and captivating mystery, waiting to be explored and understood.');

                                 tiktoken_encode
--------------------------------------------------------------------------------
 {464,6881,318,257,5909,290,3144,39438,10715,11,4953,284,307,18782,290,7247,13}
(1 row)
```

The function tokenizes text using the [Byte Pair Encoding (BPE)](https://en.wikipedia.org/wiki/Byte_pair_encoding) algorithm.

## Use the `tiktoken_count` function

The `tiktoken_count` function counts the number of tokens in a text. The function accepts encoding names and OpenAI model names as the first argument and text as the second argument, as shown:

```sql
neondb=> SELECT tiktoken_count('text-davinci-003', 'The universe is a vast and captivating mystery, waiting to be explored and understood.');

 tiktoken_count
----------------
             17
(1 row)
```

## Supported models

The `tiktoken_count` and `tiktoken_encode` functions accept both encoding and OpenAI model names as the first argument:

```text
tiktoken_count(<encoding_or_model_name>, <text>)
```

The following models are supported:

| Encoding name       | OpenAI model                                                           |
| :------------------ | :--------------------------------------------------------------------- |
| cl100k_base         | ChatGPT models, text-embedding-ada-002                                 |
| p50k_base           | Code models, text-davinci-002, text-davinci-003                        |
| p50k_edit           | Use for edit models like text-davinci-edit-001, code-davinci-edit-001  |
| r50k_base (or gpt2) | GPT-3 models like davinci                                              |

## Integrate `pg_tiktoken` with ChatGPT models

The `pg_tiktoken` extension allows you to store chat message history in a Postgres database and retrieve messages that comply with OpenAI's model limitations.
For example, consider the `message` table below:

```sql
CREATE TABLE message (
    role VARCHAR(50) NOT NULL, -- 'system', 'user', or 'assistant'
    content TEXT NOT NULL,
    created TIMESTAMP NOT NULL DEFAULT NOW(),
    n_tokens INTEGER -- number of content tokens
);
```

The [gpt-3.5-turbo chat model](https://platform.openai.com/docs/guides/chat/introduction) requires specific parameters:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Who won the world series in 2020?"
    },
    {
      "role": "assistant",
      "content": "The Los Angeles Dodgers won the World Series in 2020."
    }
  ]
}
```

The `messages` parameter is an array of message objects, with each object containing two pieces of information: the `role` of the message sender (either `system`, `user`, or `assistant`) and the actual message `content`. Conversations can be brief, with just one message, or span multiple pages as long as the combined message tokens do not exceed the 4096-token limit.

To insert `role`, `content`, and the number of tokens into the database, use the following query:

```sql
INSERT INTO message (role, content, n_tokens)
VALUES ('user', 'Hello, how are you?', tiktoken_count('text-davinci-003', 'Hello, how are you?'));
```

## Manage text tokens

When a conversation contains more tokens than a model can process (e.g., over 4096 tokens for `gpt-3.5-turbo`), you will need to truncate the text to fit within the model's limit. Additionally, lengthy conversations may result in incomplete replies. For example, if a `gpt-3.5-turbo` conversation spans 4090 tokens, the response will be limited to just six tokens.

The following query retrieves messages up to your desired token limits:

```sql
WITH cte AS (
  SELECT role, content, created, n_tokens,
         SUM(n_tokens) OVER (ORDER BY created DESC) AS cumulative_sum
  FROM message
)
SELECT role, content, created, n_tokens, cumulative_sum
FROM cte
WHERE cumulative_sum <= <MAX_HISTORY_TOKENS>;
```

`<MAX_HISTORY_TOKENS>` represents the conversation history you want to keep for chat completion, following this formula:

```text
MAX_HISTORY_TOKENS = MODEL_MAX_TOKENS – NUM_SYSTEM_TOKENS – NUM_COMPLETION_TOKENS
```

For example, assume the desired completion length is 90 tokens (`NUM_COMPLETION_TOKENS = 90`).

```text
MAX_HISTORY_TOKENS = 4096 – 6 – 90 = 4000
```

```json
{
  "model": "gpt-3.5-turbo", // MODEL_MAX_TOKENS = 4096
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."}, // NUM_SYSTEM_TOKENS = 6
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": ...}
    .
    .
    .
    {"role": "user", "content": "Great! Have a great day."} // MAX_HISTORY_TOKENS = 4000
  ]
}
```

## Conclusion

In conclusion, the `pg_tiktoken` extension is a valuable tool for tokenizing text data and managing tokens within Postgres databases. By leveraging OpenAI's tiktoken library, it simplifies the process of tokenization and working with token limits, enabling you to integrate more easily with OpenAI's language models.

As you explore the capabilities of the `pg_tiktoken` extension, we encourage you to provide feedback and suggest features you'd like to see added in future updates. We look forward to seeing the innovative natural language processing applications you create using `pg_tiktoken`.
## Resources

- [OpenAI tiktoken source code on GitHub](https://github.com/openai/tiktoken)
- [pg_tiktoken source code on GitHub](https://github.com/kelvich/pg_tiktoken)

---

# Source: https://neon.com/llms/extensions-pg_trgm.txt

# The pg_trgm extension

> The document details the installation and usage of the pg_trgm extension in Neon, enabling users to perform text similarity searches and index text data efficiently within their databases.

## Source

- [The pg_trgm extension HTML](https://neon.com/docs/extensions/pg_trgm): The original HTML version of this documentation

The `pg_trgm` extension enhances Postgres' ability to perform text searches by using trigram matching. Trigrams are groups of three consecutive characters taken from a string. By breaking down text into trigrams, Postgres can perform more efficient and flexible searches, such as similarity and proximity searches. This extension is particularly useful for applications requiring fuzzy string matching or searching within large bodies of text.

In this guide, we'll explore the `pg_trgm` extension, covering how to enable it, use it for text searches, and optimize queries. This extension has applications in data retrieval, text analysis, and anywhere robust text search capabilities are needed.

**Note**: The `pg_trgm` extension is open-source and can be installed on any Postgres setup. Detailed information about the extension is available in the [PostgreSQL Documentation](https://www.postgresql.org/docs/current/pgtrgm.html).

**Version availability** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date information.

## Enable the `pg_trgm` extension

Activate `pg_trgm` by running the `CREATE EXTENSION` statement in your Postgres client:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```

For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Example usage

Let's say you're developing a database of books and you want to find books with similar titles. We first create a test table and insert some sample data, using the query below.

```sql
CREATE TABLE books (
    id SERIAL PRIMARY KEY,
    title TEXT
);

INSERT INTO books (title)
VALUES
    ('The Great Gatsby'),
    ('The Grapes of Wrath'),
    ('Great Expectations'),
    ('War and Peace'),
    ('Pride and Prejudice'),
    ('To Kill a Mockingbird'),
    ('1984');
```

**Basic string matching**

The `pg_trgm` extension can help you do fuzzy matches on strings. For example, the query below looks for titles that are similar to the misspelled phrase "Grate Expectation". The `%` operator, provided by `pg_trgm`, measures similarity between two strings based on trigrams, and returns results if the similarity is above a certain threshold.

```sql
SELECT * FROM books
WHERE title % 'Grate Expectation';
```

This query returns the following:

```text
| id | title              |
|----|--------------------|
| 3  | Great Expectations |
```

The similarity threshold can be adjusted by setting the `pg_trgm.similarity_threshold` parameter (default value is `0.3`).

## Trigrams

### Counting trigrams

The `pg_trgm` module makes these assumptions about how to count trigrams in a text string:

- Only alphanumeric characters are considered.
- The string is lowercased before counting trigrams.
- Each word is assumed to be prefixed with two spaces and suffixed with one space.
- The set of trigrams output is deduplicated.

We can use the `show_trgm` function to see how `pg_trgm` counts trigrams in a string. Here is an example:

```sql
SELECT show_trgm('War and Peace');
-- {"  a","  p","  w"," an"," pe"," wa",ace,and,"ar ","ce ",eac,"nd ",pea,war}
```

### Computing similarity

Given the set of trigrams for two strings `A` and `B`, `pg_trgm` computes the similarity score as the size of the intersection of the two sets divided by the size of the union of the two sets. Here is an example.

```sql
SELECT show_trgm('War'), show_trgm('Bar'), similarity('War', 'Bar');
```

This query returns the following:

```text
| show_trgm               | show_trgm               | similarity |
|-------------------------|-------------------------|------------|
| {"  w"," wa","ar ",war} | {"  b"," ba","ar ",bar} | 0.14285715 |
```

There are 7 distinct trigrams across the two input strings and 1 trigram in common. So the similarity score comes out to be 1/7 (0.14285715).

## Advanced text searching

`pg_trgm` offers powerful tools for more complex text search requirements.

**Proximity search**

The `similarity` function provided by `pg_trgm` returns a number between 0 and 1, representing how similar the two strings are. By filtering on the similarity score, you can search for strings that are within the specified threshold.

```sql
SELECT title FROM books
WHERE SIMILARITY(title, 'War and') > 0.3;
```

This query returns the following:

```text
| title         |
|---------------|
| War and Peace |
```

**Substring matching**

`pg_trgm` also provides functionality to match the input text value against substrings within the target string. The query below illustrates this:

```sql
SELECT word_similarity('apple', 'green apples'), strict_word_similarity('apple', 'green apples');
```

This query returns the following:

```text
| word_similarity | strict_word_similarity |
|-----------------|------------------------|
| 0.8333333       | 0.625                  |
```

The `word_similarity` function returns the maximum similarity score between the input string and any substring of the target string. The similarity score is still computed using trigrams. In this example, the first string `apple` matches with the substring `apple` in the target.

In contrast, the `strict_word_similarity` function only considers a subset of substrings from the target, namely only sequences of full words in the target string. That is, the first string `apple` matches the substring `apples` in the target, hence the lower score.

**Distance scores**

There are operators to calculate the `distance` between two strings, i.e., one minus the similarity score.

```sql
SELECT similarity('Hello', 'Halo') AS similarity, 'Hello' <-> 'Halo' AS distance;
```

This query returns the following:

```text
| similarity | distance  |
|------------|-----------|
| 0.22222222 | 0.7777778 |
```

Similarly, there are operators to compute the distance based on the `word_similarity` and `strict_word_similarity` functions.

## Performance considerations

While `pg_trgm` enhances text search capabilities, computing similarity can get expensive when matching against a large set of strings. Here are a couple of tips to improve performance:

- **Indexing**: Using `pg_trgm`, you can create a `GiST` or `GIN` index to speed up similarity search queries. These indexes also accelerate pattern-matching searches with the `LIKE` and `ILIKE` operators, as well as regular-expression matches.
```sql CREATE INDEX trgm_idx_gist ON books USING GIST (title gist_trgm_ops); -- or CREATE INDEX trgm_idx_gin ON books USING GIN (title gin_trgm_ops); ``` - **Limiting results**: Use `LIMIT` to restrict the number of rows returned for more efficient querying. ## Conclusion `pg_trgm` offers a versatile set of tools for text processing and searching in Postgres. We went over the basics of the extension, including how to enable it and how to use it for fuzzy string matching and proximity searches. ## Resources - [PostgreSQL pg_trgm documentation](https://www.postgresql.org/docs/current/pgtrgm.html) - [PostgreSQL Text Search](https://www.postgresql.org/docs/current/textsearch.html) --- # Source: https://neon.com/llms/extensions-pg_uuidv7.txt # The pg_uuidv7 extension > The document details the pg_uuidv7 extension for Neon, which facilitates the generation and management of UUID version 7 identifiers within the database environment. ## Source - [The pg_uuidv7 extension HTML](https://neon.com/docs/extensions/pg_uuidv7): The original HTML version of this documentation The `pg_uuidv7` extension allows you to generate and work with version 7 Universally Unique Identifiers (UUIDs) in Postgres. UUIDv7 is a newer UUID format designed to be time-ordered and sortable, which offers significant benefits for database performance, especially when used as primary keys or in time-series data. Unlike traditional random UUIDs (like Version 4), UUIDv7 embeds a Unix timestamp in its leading bits, followed by random bits. This structure ensures that newly generated UUIDs are roughly sequential, which is highly beneficial for database indexing (e.g., B-trees) and can improve data locality, leading to faster queries and insertions. ## Enable the `pg_uuidv7` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS pg_uuidv7; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## Core functions The `pg_uuidv7` extension provides a concise set of functions for generating and manipulating version 7 UUIDs. ### `uuid_generate_v7()` This is the primary function for generating new version 7 UUIDs. It creates a UUID incorporating the current Unix timestamp (with millisecond precision) in its most significant bits, followed by randomly generated bits for the remainder. ```sql SELECT uuid_generate_v7(); -- 0196ce37-0758-736d-a33b-ad3f017359e3 (example output) ``` ### `uuid_v7_to_timestamptz(uuid_v7 UUID)` This function extracts the embedded timestamp from a version 7 UUID and returns it as a `TIMESTAMPTZ` (timestamp with time zone) value. ```sql SELECT uuid_v7_to_timestamptz('0196ce37-0758-736d-a33b-ad3f017359e3'); -- 2025-05-14 09:53:55.032+00 ``` ### `uuid_timestamptz_to_v7(ts TIMESTAMPTZ, zero_random_bits BOOLEAN DEFAULT false)` This function converts a given `TIMESTAMPTZ` value into a version 7 UUID. It takes two arguments: 1. `ts TIMESTAMPTZ`: The timestamp to embed in the UUID. 2. `zero_random_bits BOOLEAN` (optional, defaults to `false`): - If `false` (default), the random bits portion of the UUID will be filled with new random data. 
## Conclusion

`pg_trgm` offers a versatile set of tools for text processing and searching in Postgres. We went over the basics of the extension, including how to enable it and how to use it for fuzzy string matching and proximity searches.

## Resources

- [PostgreSQL pg_trgm documentation](https://www.postgresql.org/docs/current/pgtrgm.html)
- [PostgreSQL Text Search](https://www.postgresql.org/docs/current/textsearch.html)

---

# Source: https://neon.com/llms/extensions-pg_uuidv7.txt

# The pg_uuidv7 extension

> The document details the pg_uuidv7 extension for Neon, which facilitates the generation and management of UUID version 7 identifiers within the database environment.

## Source

- [The pg_uuidv7 extension HTML](https://neon.com/docs/extensions/pg_uuidv7): The original HTML version of this documentation

The `pg_uuidv7` extension allows you to generate and work with version 7 Universally Unique Identifiers (UUIDs) in Postgres. UUIDv7 is a newer UUID format designed to be time-ordered and sortable, which offers significant benefits for database performance, especially when used as primary keys or in time-series data.

Unlike traditional random UUIDs (like Version 4), UUIDv7 embeds a Unix timestamp in its leading bits, followed by random bits. This structure ensures that newly generated UUIDs are roughly sequential, which is highly beneficial for database indexing (e.g., B-trees) and can improve data locality, leading to faster queries and insertions.

## Enable the `pg_uuidv7` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS pg_uuidv7;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Core functions

The `pg_uuidv7` extension provides a concise set of functions for generating and manipulating version 7 UUIDs.

### `uuid_generate_v7()`

This is the primary function for generating new version 7 UUIDs. It creates a UUID incorporating the current Unix timestamp (with millisecond precision) in its most significant bits, followed by randomly generated bits for the remainder.

```sql
SELECT uuid_generate_v7();
-- 0196ce37-0758-736d-a33b-ad3f017359e3 (example output)
```

### `uuid_v7_to_timestamptz(uuid_v7 UUID)`

This function extracts the embedded timestamp from a version 7 UUID and returns it as a `TIMESTAMPTZ` (timestamp with time zone) value.

```sql
SELECT uuid_v7_to_timestamptz('0196ce37-0758-736d-a33b-ad3f017359e3');
-- 2025-05-14 09:53:55.032+00
```

### `uuid_timestamptz_to_v7(ts TIMESTAMPTZ, zero_random_bits BOOLEAN DEFAULT false)`

This function converts a given `TIMESTAMPTZ` value into a version 7 UUID. It takes two arguments:

1. `ts TIMESTAMPTZ`: The timestamp to embed in the UUID.
2. `zero_random_bits BOOLEAN` (optional, defaults to `false`):
   - If `false` (default), the random bits portion of the UUID will be filled with new random data. This is useful for creating a unique UUID tied to a specific past or future time.
   - If `true`, the random bits portion of the UUID will be set to all zeros. This is particularly useful for creating boundary UUIDs for time-range queries (e.g., the earliest possible UUID for a given timestamp).

#### Generating a UUID for a specific timestamp with random bits

```sql
SELECT uuid_timestamptz_to_v7('2025-05-14 10:53:55.032+00');
```

Example output (random part will vary):

```text
        uuid_timestamptz_to_v7
--------------------------------------
 0196ce6d-f5d8-7a89-8e7a-06b371fc5d70
(1 row)
```

#### Generating a boundary UUID for a specific timestamp (random bits zeroed)

```sql
SELECT uuid_timestamptz_to_v7('2025-05-14 10:53:55.032+00', true);
```

Example output (random bits zeroed, so the value is deterministic):

```text
        uuid_timestamptz_to_v7
--------------------------------------
 0196ce6d-f5d8-7000-8000-000000000000
(1 row)
```

## Key advantages of UUIDv7

Using version 7 UUIDs in your database schema can provide several advantages over traditional UUIDs, especially in scenarios where time-based ordering is important. Here are some key benefits:

1. **Improved indexing performance:** Because UUIDv7s are time-ordered, new entries are typically inserted towards the end of an index (e.g., a B-tree index on a UUIDv7 primary key). This leads to better data locality, reduced page splits, and less index fragmentation compared to random UUIDs (like v4). This can significantly boost insert performance and make range scans more efficient.
2. **Natural sortability:** UUIDv7s can be sorted chronologically by their value, which is useful for ordering records by creation time without needing a separate timestamp column for this purpose.
3. **Distributed systems friendliness:** Like all UUIDs, v7 can be generated independently across multiple nodes without coordination, ensuring global uniqueness. The time-ordered property adds benefits for distributed databases that might later need to merge or sort data by generation time.

## Example usage

Let's explore some common use cases for `pg_uuidv7`.

### Using UUIDv7 as a primary key

UUIDv7 is an excellent candidate for primary keys, especially for tables where data is often queried or inserted based on time.

```sql
CREATE TABLE events (
    event_id UUID PRIMARY KEY DEFAULT uuid_generate_v7(),
    event_type TEXT NOT NULL,
    event_data JSONB
);

INSERT INTO events (event_type, event_data)
VALUES
    ('user_login', '{"user_id": 101, "ip": "192.168.1.10"}'),
    ('page_view', '{"user_id": 101, "url": "/products/awesome-widget"}'),
    ('purchase', '{"user_id": 205, "item_id": "XYZ123", "amount": 99.99}')
RETURNING event_id, event_type;
```

Example output:

```text
               event_id               | event_type
--------------------------------------+------------
 0196e801-a0d6-7af7-a308-13057189ef3f | user_login
 0196e801-a0ec-7cf7-a4f2-2a4a73c58688 | page_view
 0196e801-a0ed-7fdb-9858-dcd601cadc26 | purchase
(3 rows)
```

Notice how the `event_id` values are largely sequential, reflecting their insertion order.
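Because the ordering comes from the embedded timestamp, you can also read each event's creation time straight out of its primary key. Here is a small illustrative follow-up query against the same `events` table, using the `uuid_v7_to_timestamptz()` function described above:

```sql
-- List events newest-first; the sort key doubles as a creation timestamp,
-- so no separate created_at column is needed for this ordering
SELECT event_id,
       uuid_v7_to_timestamptz(event_id) AS created_at,
       event_type
FROM events
ORDER BY event_id DESC;
```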
### Time-range queries

The ability to convert timestamps to UUIDv7s (especially with zeroed random bits) is very useful for performing efficient time-range queries directly on the UUIDv7 primary key.

Suppose we want to find all events that occurred between May 14, 2025, and May 24, 2025. We can use the `uuid_timestamptz_to_v7()` function to create boundary UUIDs for our query:

```sql
SELECT *
FROM events
WHERE event_id >= uuid_timestamptz_to_v7('2025-05-14', true) -- Start of May 14th
  AND event_id < uuid_timestamptz_to_v7('2025-05-24', true); -- Start of May 24th
```

This query can efficiently use an index on `event_id` to find matching records.

## Comparison with UUIDv4

UUIDv4 is purely random. While excellent for uniqueness, its randomness leads to poor index locality and fragmentation when used as a primary key for time-sensitive data. UUIDv7 directly addresses this by being time-ordered.

## Conclusion

The `pg_uuidv7` extension provides a robust and efficient way to work with version 7 UUIDs in Postgres. By embedding a timestamp, UUIDv7s offer the global uniqueness of traditional UUIDs while also being chronologically sortable. This makes them an excellent choice for primary keys and indexed columns in applications where time-ordering and query performance on time-based data are critical.

## Resources

- [fboulnois/pg_uuidv7 GitHub repository](https://github.com/fboulnois/pg_uuidv7)
- [UUID version 7](https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#name-uuid-version-7)
- [uuid-ossp](https://neon.com/docs/extensions/uuid-ossp)

---

# Source: https://neon.com/llms/extensions-pgcrypto.txt

# The pgcrypto extension

> The document details the pgcrypto extension for Neon, enabling users to perform cryptographic functions such as encryption, decryption, and hashing within PostgreSQL databases.

## Source

- [The pgcrypto extension HTML](https://neon.com/docs/extensions/pgcrypto): The original HTML version of this documentation

The `pgcrypto` extension offers a range of cryptographic functions within Postgres. These functions enable encryption, decryption, and hashing operations through standard SQL queries. This can reduce reliance on external cryptographic tools for data security tasks in a Postgres environment.

In this guide, you'll learn how to enable the `pgcrypto` extension on Neon, use its core cryptographic functions, explore practical applications for data security, and follow best practices for managing security considerations.

## Enable the `pgcrypto` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Cryptographic functions

The `pgcrypto` extension provides a wide range of cryptographic functions that can be used directly within SQL queries. These functions can be broadly categorized into the following groups:

### General hashing functions

`pgcrypto` provides functions for generating one-way hashes, crucial for verifying data integrity and securely comparing data without revealing the original content.

- **`digest(data, type)`**: The `digest` function computes a binary hash of the input `data` using the algorithm specified by `type`.
This function supports a wide range of algorithms, including [`md5`](https://en.wikipedia.org/wiki/MD5), [`sha1`](https://en.wikipedia.org/wiki/SHA1), and the [SHA-2](https://en.wikipedia.org/wiki/SHA2) family (`sha224`, `sha256`, `sha384`, `sha512`), as well as any other digest algorithm supported by the underlying OpenSSL library.

```sql
SELECT digest('Sensitive Information', 'sha256');
-- \x7daa83aa2e4618c8de40eb6642dbde3bceead971c322c66ed47676897a1b31c1 (binary output)
```

- **`hmac(data, key, type)`**: The `hmac` function calculates a keyed hash, also known as a Hash-based Message Authentication Code. It incorporates a secret `key` into the hashing process, ensuring that only parties with knowledge of the key can verify the hash. This provides both data integrity and authenticity.

```sql
SELECT hmac('Data to Authenticate', 'shared_secret_key', 'sha256');
-- \x261415730795bccaedb60061af12bf8fdb0833b4bad7735214dc78789e233257 (binary output)
```

### Password hashing functions

`pgcrypto` includes specialized functions designed for securely hashing passwords, essential for protecting user credentials.

- **`crypt(password text, salt text)`**: The `crypt` function implements a crypt(3)-style hashing algorithm, specifically tailored for password security. It takes the `password` to be hashed and a `salt` value as input.

```sql
SELECT crypt('user_password', gen_salt('md5'));
-- $1$bPYjhtip$NT.UC/6xLeoj8leDs7Neh0 (example hashed password output)
```

- **`gen_salt(type text [, iter_count integer ])`**: The `gen_salt` function generates new, random salt values for use with the `crypt()` function. The `type` parameter specifies the hashing algorithm (e.g., `bf` for Blowfish, `md5`, `xdes`, `des`). For algorithms like [Blowfish](https://en.wikipedia.org/wiki/Blowfish_%28cipher%29) and [Extended DES](https://en.wikipedia.org/wiki/Data_Encryption_Standard) (`xdes`), you can specify `iter_count` to control the number of iterations, increasing the computational cost and security.

```sql
SELECT gen_salt('bf'); -- Generate a Blowfish salt
-- $2a$06$KlIoNEoix2oKbLwMimhQpu (example output)

SELECT gen_salt('bf', 10); -- Generate a Blowfish salt with 2^10 iterations
-- $2a$10$nnHUvyZckh1VBh5zWNEFKO (example output)
```

### PGP encryption functions

For general-purpose encryption needs, `pgcrypto` implements the encryption part of the OpenPGP standard, providing functions for both symmetric-key and public-key encryption.

- **`pgp_sym_encrypt(data, psw [, options ])`**: The `pgp_sym_encrypt` function encrypts `data` using symmetric-key encryption with a provided password `psw`. Symmetric encryption uses the same key for both encryption and decryption.

```sql
SELECT pgp_sym_encrypt('Confidential Data', 'encryption_password');
-- \xc30d040703029c3eba2b3565a3937bd2420120ae6792f663bd35977d21a5a8e13de9a8a8e5a9212ef06f8b056dcc31e0b48096915ddac66f14ab403ea671a8b4c740a198d32bcc5b804a30ef7e9aeacd7c1246 (binary output)
```

- **`pgp_sym_decrypt(msg, psw [, options ])`**: The `pgp_sym_decrypt` function decrypts a message `msg` that was encrypted using symmetric-key encryption with the password `psw`.
```sql
SELECT pgp_sym_decrypt(encrypted_message, 'encryption_password');
-- e.g., SELECT pgp_sym_decrypt('\xc30d040703029c3eba2b3565a3937bd2420120ae6792f663bd35977d21a5a8e13de9a8a8e5a9212ef06f8b056dcc31e0b48096915ddac66f14ab403ea671a8b4c740a198d32bcc5b804a30ef7e9aeacd7c1246', 'encryption_password');
-- Confidential Data (plaintext output)
```

- **`pgp_pub_encrypt(data, key [, options ])`**: The `pgp_pub_encrypt` function encrypts `data` using public-key encryption with a provided public `key`. Public-key encryption uses separate keys for encryption (public key) and decryption (private key).

```sql
SELECT pgp_pub_encrypt('Secret Message', 'public_key_here');
-- encrypted_message (binary output)
```

- **`pgp_pub_decrypt(msg, key [, psw [, options ]])`**: The `pgp_pub_decrypt` function decrypts a message `msg` that was encrypted using public-key encryption. It requires the private `key` corresponding to the public key used for encryption. If the private key is password-protected, the `psw` is also required.

```sql
SELECT pgp_pub_decrypt(encrypted_message, 'private_key_here', 'private_key_password');
-- Secret Message (plaintext output)
```

### Random data functions

`pgcrypto` provides functions for generating cryptographically secure random data, essential for various security operations.

- **`gen_random_bytes(count integer)`**: The `gen_random_bytes` function generates a specified number of cryptographically strong random bytes. These bytes can be used as salts, initialization vectors, or for other security-sensitive purposes.

```sql
SELECT gen_random_bytes(16); -- Generate 16 random bytes
-- \xc9259a991537e3d730db78133f208e94 (example binary output)
```

- **`gen_random_uuid()`**: The `gen_random_uuid()` function generates a version 4 universally unique identifier (UUID) based on random numbers. This is functionally equivalent to PostgreSQL's built-in [`gen_random_uuid()`](https://neon.com/postgresql/postgresql-tutorial/postgresql-uuid#generating-uuid-values).

```sql
SELECT gen_random_uuid();
-- 90d18ac7-4af7-458d-8f7a-a7211b5d3eee (example output)
```

## Practical applications

`pgcrypto` offers a wide range of practical applications for enhancing data security within your Postgres environment:

- **Secure password storage**: Use `crypt()` and `gen_salt()` to securely store user passwords as hashes, protecting them from exposure in case of a data breach.
- **Data encryption at rest (column-level)**: Employ `pgp_sym_encrypt()` or `pgp_pub_encrypt()` to encrypt sensitive data columns within your tables, ensuring data confidentiality even if the database is compromised (a minimal sketch follows this list).
- **Data anonymization**: Leverage encryption functions to pseudonymize or anonymize sensitive data for non-production environments or for compliance purposes.
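For the column-level encryption use case, here is a minimal sketch of the round trip. The `patient_records` table and the hard-coded key are hypothetical; in practice, the key should come from your application or a secrets manager rather than being embedded in SQL:

```sql
-- Hypothetical table with one encrypted column (ciphertext is bytea)
CREATE TABLE patient_records (
    id SERIAL PRIMARY KEY,
    patient_name TEXT,
    diagnosis BYTEA -- ciphertext produced by pgp_sym_encrypt()
);

-- Encrypt on write
INSERT INTO patient_records (patient_name, diagnosis)
VALUES ('Alice', pgp_sym_encrypt('Seasonal allergies', 'app_secret_key'));

-- Decrypt on read; only sessions that supply the key see plaintext
SELECT patient_name,
       pgp_sym_decrypt(diagnosis, 'app_secret_key') AS diagnosis
FROM patient_records;
```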
## Example: Secure password storage

Let's walk through a practical example of using `pgcrypto` to securely store and verify user passwords in a Postgres database.

1. Hash and salt a password using `crypt()` and `gen_salt()`: Suppose you want to hash the password `"mypassword"`. You'll use `gen_salt()` to generate a salt and `crypt()` to hash the password with the salt. For this example, we'll use the Blowfish algorithm with a cost factor of 4 (2^4 iterations):

```sql
SELECT crypt('mypassword', gen_salt('bf', 4));
-- $2a$04$vVVrQ777SjxyQKuFp7z6ue (example hashed password output)
```

The output is the hashed password, which includes the salt and algorithm identifier. **You should store this entire hashed password string in your database, not the original password.**

2. Store the hashed password: Create a table to store usernames and their hashed passwords.

```sql
CREATE TABLE users (
    username VARCHAR(50) PRIMARY KEY,
    password_hash TEXT NOT NULL
);

INSERT INTO users (username, password_hash)
VALUES ('testuser', '$2a$04$vVVrQ777SjxyQKuFp7z6ue'); -- Replace with the hash from the previous step
```

3. Verify a password during login: When a user attempts to log in, you'll receive the password they entered (e.g., `"mypassword"` again). To verify it, you'll use `crypt()` again, passing the entered password and the stored `password_hash` from the database.

```sql
SELECT password_hash = crypt('mypassword', password_hash) AS password_match
FROM users
WHERE username = 'testuser';

-- password_match
-- --------------
-- t
-- (1 row)
```

If the passwords match, `crypt()` will return the same stored hash (or a hash that compares as equal), and the query will return `t` (true).

4. Incorrect password attempt: If the user enters an incorrect password (e.g., `"wrongpassword"`), the verification will fail:

```sql
SELECT password_hash = crypt('wrongpassword', password_hash) AS password_match
FROM users
WHERE username = 'testuser';

-- password_match
-- --------------
-- f
-- (1 row)
```

In this case, the query returns `f` (false), indicating an incorrect password. By following these steps, you can securely store and verify user passwords using `pgcrypto` in your Postgres database.

## Performance implications

While `pgcrypto` provides robust security features, it's important to consider the performance implications of cryptographic operations:

- **Computational overhead**: Encryption, decryption, and hashing operations inherently require computational resources. The extent of the overhead depends on the chosen algorithms, data size, and frequency of operations.
- **Password hashing**: Password hashing algorithms, like those used in `crypt()`, are intentionally designed to be slow to resist brute-force attacks. This can introduce a slight delay during user authentication processes.

## Security considerations

When using `pgcrypto`, it's crucial to adhere to security best practices:

- **Key management**: Securely manage encryption keys. Store them outside the database if possible, and implement key rotation policies. Never store keys in plaintext within the database, as that would defeat the purpose of encryption.
- **Algorithm selection**: Choose appropriate cryptographic algorithms based on your security requirements. For password hashing, use strong algorithms like Blowfish with sufficient iteration counts. For data encryption, select robust and widely vetted algorithms like AES.

## Conclusion

The `pgcrypto` extension is a powerful and versatile tool for enhancing data security in Postgres. By providing a rich set of cryptographic functions, it enables you to implement robust security measures directly within your database environment. From secure password storage to data encryption and hashing, `pgcrypto` offers a wide range of applications to protect your data.

## Resources

- [`pgcrypto` extension in the PostgreSQL Documentation](https://www.postgresql.org/docs/current/pgcrypto.html)

---

# Source: https://neon.com/llms/extensions-pgrag.txt

# The pgrag extension

> The document details the pgrag extension for Neon, which facilitates building end-to-end Retrieval-Augmented Generation (RAG) pipelines within PostgreSQL databases.

## Source

- [The pgrag extension HTML](https://neon.com/docs/extensions/pgrag): The original HTML version of this documentation

What you will learn:

- What is RAG?
- What's included in a RAG pipeline?
- `pgrag` functions
- How to use `pgrag`

Related resources:

- [The pgvector extension](https://neon.com/docs/extensions/pgvector)
- [YouTube: pgrag video demonstration](https://www.youtube.com/watch?v=QDNsxw_3ris&t=1356s)

Source code:

- [pgrag GitHub repository](https://github.com/neondatabase-labs/pgrag)

The `pgrag` extension and its accompanying model extensions are designed for creating end-to-end Retrieval-Augmented Generation (RAG) pipelines without leaving your SQL client. No additional programming languages or libraries are required. With functions provided by `pgrag` and a Postgres database with `pgvector`, you can build a complete RAG pipeline via SQL.

**Info** Experimental Feature: The `pgrag` extension is experimental and actively being developed. Use it with caution as functionality may change.

## What is RAG?

**RAG stands for Retrieval-Augmented Generation.** It's a technique that retrieves information relevant to a question and includes that information alongside the question in a prompt to an AI chat model. For example: "_ChatGPT, please answer question X using information Y_".

---

## What's included in a RAG pipeline?

A RAG pipeline includes a number of steps, which can be organized into two main stages:

1. **Preparing and indexing the information**:
   1. Load documents and extract text
   2. Split documents into chunks
   3. Generate embeddings for chunks
   4. Store the embeddings alongside chunks
2. **Handling incoming questions**:
   5. Vectorize question
   6. Use question embedding to find relevant document chunks
   7. Retrieve document chunks from database
   8. Rerank and take only best-match chunks to answer question
   9. Prompt with question + relevant document chunks to answer question
   10. Return the generated answer

---

## What does pgrag support?

With the exception of (4) storing embeddings in the database and (7) retrieving document chunks from the database, which are handled by Postgres with `pgvector`, `pgrag` supports all of the steps listed above. Specifically, `pgrag` supports:

- **Text extraction and conversion**
  - Simple text extraction from PDF documents (using [pdf-extract](https://github.com/jrmuizel/pdf-extract)). Currently, there is no Optical Character Recognition (OCR) or support for complex layout and formatting.
  - Simple text extraction from `.docx` documents (using [docx-rs](https://github.com/cstkingkey/docx-rs)).
  - HTML conversion to Markdown (using [htmd](https://github.com/letmutex/htmd)).
- **Text chunking**
  - Text chunking by character count (using [text-splitter](https://github.com/benbrandt/text-splitter)).
  - Text chunking by token count (also using [text-splitter](https://github.com/benbrandt/text-splitter)).
- **Local embedding and reranking models**
  - Local tokenising + embedding generation with the 33M parameter model [bge-small-en-v1.5](https://huggingface.co/Xenova/bge-small-en-v1.5) (using [ort](https://github.com/pykeio/ort) via [fastembed](https://github.com/Anush008/fastembed-rs)).
  - Local tokenising + reranking with the 33M parameter model [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) (also using [ort](https://github.com/pykeio/ort) via [fastembed](https://github.com/Anush008/fastembed-rs)).

**Note**: These models run locally on your Postgres server.
They are packaged as separate extensions that accompany `pgrag`, because they are large (>100MB), and because we may want to add support for more models in the future in the form of additional `pgrag` model extensions.

- **Remote embedding and chat models**
  - Embedding generation and chat completions via API calls to third-party services such as OpenAI (see the `rag.openai_*` functions listed below).

---

## Installation

**Warning**: As an experimental extension, `pgrag` may be unstable or introduce backward-incompatible changes. We recommend using it only in a separate, dedicated Neon project. To proceed with the installation, you will need to run the following command first:

```sql
SET neon.allow_unstable_extensions='true';
```

To install `pgrag` to a Neon Postgres database, run the following commands:

```sql
create extension if not exists rag cascade;
create extension if not exists rag_bge_small_en_v15 cascade;
create extension if not exists rag_jina_reranker_v1_tiny_en cascade;
```

The first extension is the `pgrag` extension. The other two extensions are the model extensions for local tokenising, embedding generation, and reranking. The three extensions have no dependencies on each other, but all depend on `pgvector`. Specifying `cascade` ensures that `pgvector` is installed.

---

## pgrag functions

This section lists the functions provided by `pgrag`. For function usage examples, refer to the [end-to-end RAG example](https://neon.com/docs/extensions/pgrag#end-to-end-rag-example) below or the [pgrag GitHub repository](https://github.com/neondatabase-labs/pgrag).

- **Text extraction**

  These functions extract text from PDFs, Word files, and HTML.
  - `rag.text_from_pdf(bytea) -> text`
  - `rag.text_from_docx(bytea) -> text`
  - `rag.markdown_from_html(text) -> text`

- **Splitting text into chunks**

  These functions split the extracted text into chunks by character count or token count.
  - `rag.chunks_by_character_count(text, max_chars, overlap) -> text[]`
  - `rag_bge_small_en_v15.chunks_by_token_count(text, max_tokens, overlap) -> text[]`

- **Generating embeddings for chunks**

  These functions generate embeddings for chunks either directly in the extension using a small but best-in-class model on the database server or by calling out to a 3rd-party API such as OpenAI.
  - `rag_bge_small_en_v15.embedding_for_passage(text) -> vector(384)`
  - `rag.openai_text_embedding_3_small(text) -> vector(1536)`

- **Generating embeddings for questions**

  These functions generate embeddings for the questions.
  - `rag_bge_small_en_v15.embedding_for_query(text) -> vector(384)`
  - `rag.openai_text_embedding_3_small(text) -> vector(1536)`

- **Reranking**

  This function reranks chunks against the question using a small but best-in-class model that runs locally on your Postgres server.
  - `rag_jina_reranker_v1_tiny_en.rerank_distance(text, text) -> real`

- **Calling out to chat models**

  This function makes API calls to AI chat models such as ChatGPT to generate an answer using the question and the chunks together.
  - `rag.openai_chat_completion(json) -> json`

---

## End-to-end RAG example
**1. Create a `docs` table and ingest some PDF documents as text**

```sql
drop table docs cascade;

create table docs
( id int primary key generated always as identity
, name text not null
, fulltext text not null
);

\set contents `base64 < /path/to/first.pdf`
insert into docs (name, fulltext)
values ('first.pdf', rag.text_from_pdf(decode(:'contents','base64')));

\set contents `base64 < /path/to/second.pdf`
insert into docs (name, fulltext)
values ('second.pdf', rag.text_from_pdf(decode(:'contents','base64')));

\set contents `base64 < /path/to/third.pdf`
insert into docs (name, fulltext)
values ('third.pdf', rag.text_from_pdf(decode(:'contents','base64')));
```

**2. Create an `embeddings` table, chunk the text, and generate embeddings for the chunks (performed locally)**

```sql
drop table embeddings;

create table embeddings
( id int primary key generated always as identity
, doc_id int not null references docs(id)
, chunk text not null
, embedding vector(384) not null
);

create index on embeddings using hnsw (embedding vector_cosine_ops);

with chunks as (
  select id, unnest(rag_bge_small_en_v15.chunks_by_token_count(fulltext, 192, 8)) as chunk
  from docs
)
insert into embeddings (doc_id, chunk, embedding) (
  select id, chunk, rag_bge_small_en_v15.embedding_for_passage(chunk)
  from chunks
);
```

**3. Query the embeddings and rerank the results (performed locally)**

```sql
\set query 'what is [...]? how does it work?'

with ranked as (
  select id, doc_id, chunk, embedding <=> rag_bge_small_en_v15.embedding_for_query(:'query') as cosine_distance
  from embeddings
  order by cosine_distance
  limit 10
)
select *, rag_jina_reranker_v1_tiny_en.rerank_distance(:'query', chunk)
from ranked
order by rerank_distance;
```

**4. Feed the query and top chunks to a remote AI chat model such as ChatGPT to complete the RAG pipeline**

````sql
\set query 'what is [...]? how does it work?'

with ranked as (
  select id, doc_id, chunk, embedding <=> rag_bge_small_en_v15.embedding_for_query(:'query') as cosine_distance
  from embeddings
  order by cosine_distance
  limit 10
), reranked as (
  select *, rag_jina_reranker_v1_tiny_en.rerank_distance(:'query', chunk)
  from ranked
  order by rerank_distance
  limit 5
)
select rag.openai_chat_completion(json_object(
  'model': 'gpt-4o-mini',
  'messages': json_array(
    json_object(
      'role': 'system',
      'content': E'The user is [...].\n\nTry to answer the user''s QUESTION using only the provided CONTEXT.\n\nThe CONTEXT represents extracts from [...] which have been selected as most relevant to this question.\n\nIf the context is not relevant or complete enough to confidently answer the question, your best response is: "I''m afraid I don''t have the information to answer that question".'
    ),
    json_object(
      'role': 'user',
      'content': E'# CONTEXT\n\n```\n' || string_agg(chunk, E'\n\n') || E'\n```\n\n# QUESTION\n\n```\n' || :'query' || E'```'
    )
  )
)) -> 'choices' -> 0 -> 'message' -> 'content' as answer
from reranked;
````

---

# Source: https://neon.com/llms/extensions-pgrowlocks.txt

# The pgrowlocks extension

> The document details the pgrowlocks extension for Neon, which allows users to monitor row-level locks in PostgreSQL databases, aiding in the analysis and management of database concurrency issues.

## Source

- [The pgrowlocks extension HTML](https://neon.com/docs/extensions/pgrowlocks): The original HTML version of this documentation

The `pgrowlocks` extension provides a function to inspect active row-level locks for a specified table within your Postgres database.
This is invaluable for diagnosing lock contention issues, understanding which specific rows are currently locked, and identifying the transactions or processes holding these locks. By offering a detailed, real-time view of row locks, `pgrowlocks` helps developers and database administrators troubleshoot performance bottlenecks related to concurrent data access. ## Enable the `pgrowlocks` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS pgrowlocks; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## The `pgrowlocks()` function `pgrowlocks` offers a single primary function to inspect row locks. ### Analyzing row locks with `pgrowlocks()` The `pgrowlocks(relation text)` function provides detailed information about currently held row-level locks on a specified table. ```sql SELECT * FROM pgrowlocks('your_table_name'); ``` Key columns in the output include: - `locked_row` (`tid`): The Tuple ID (physical location) of the locked row. - `locker` (`xid`): The Transaction ID (or Multixact ID if `multi` is true) of the transaction holding the lock - `multi` (`boolean`): True if `locker` is a Multixact ID (indicating multiple transactions might be involved, e.g., for shared locks). - `xids` (`xid[]`): An array of Transaction IDs that are holding locks on this specific row. This is particularly informative when `multi` is true. - `modes` (`text[]`): An array listing the lock modes held by the corresponding `xids` on the row. Common modes include `For Key Share`, `For Share`, `For No Key Update`, `For Update`, and `Update`. - `pids` (`integer[]`): An array of Process IDs (PIDs) of the backend database sessions holding the locks. This helps identify the specific connections. **Example: Observing active row locks** Let's set up a scenario to demonstrate `pgrowlocks`. First, create a simple `accounts` table: ```sql CREATE TABLE accounts ( account_id SERIAL PRIMARY KEY, owner_name TEXT, balance NUMERIC(10, 2) ); INSERT INTO accounts (owner_name, balance) VALUES ('Alice', 1000.00), ('Bob', 500.00), ('Charlie', 750.00); ``` Now, to create some row locks, you would typically use multiple database sessions. **Scenario setup (to be performed in separate `psql` sessions or database connections):** 1. **In Session 1:** Start a transaction and update Alice's account (e.g., her balance), but do not commit. This will place an exclusive lock on Alice's row. ```sql -- In Session 1 BEGIN; UPDATE accounts SET balance = balance - 100 WHERE account_id = 1; -- Do not COMMIT or ROLLBACK yet ``` 2. **In Session 2:** Start a transaction and select Bob's account with `FOR UPDATE`. This will place an exclusive lock on Bob's row. 
```sql -- In Session 2 BEGIN; SELECT * FROM accounts WHERE account_id = 2 FOR UPDATE; -- Do not COMMIT or ROLLBACK yet ``` Now, **in a third session**, query `pgrowlocks` for the `accounts` table: ```sql -- In Session 3 SELECT * FROM pgrowlocks('accounts'); ``` Example output (the `locker` XIDs and `pids` will vary in your environment, and lock modes can differ based on the exact operations): ```text locked_row | locker | multi | xids | modes | pids ------------+--------+-------+-------+-------------------+-------- (0,1) | 767 | f | {767} | {"No Key Update"} | {1076} (0,2) | 768 | f | {768} | {"For Update"} | {429} (2 rows) ``` **Interpretation of the output:** - Row `(0,1)` is locked by transaction `767` (from Session 1), associated with process ID `1076`. The `modes` column shows `{"No Key Update"}`. This lock mode is often used by `UPDATE` statements when the update does **not** modify any columns that are part of a primary key or unique constraint. In our example, updating only the `balance` column would result in this lock mode. It's an exclusive lock preventing other modifications but is slightly less restrictive than `For Update` in some internal aspects. - Row `(0,2)` (Bob's account) is locked by transaction `768` (from Session 2), associated with process ID `429`. The `modes` column shows `{"For Update"}`. This lock mode is typically acquired by `SELECT ... FOR UPDATE` statements or by `UPDATE`/`DELETE` statements when key columns are involved or stronger locking is required. - `multi` is `f` (false) in both cases, indicating these are straightforward locks by single transactions. This output clearly shows which specific rows are locked, by which transactions, the precise mode of the lock, and the process IDs of the sessions holding the locks. **Note** Data Transience: The information from `pgrowlocks` is a real-time snapshot. It reflects the locks present at the exact moment of execution and does not store historical data. ## Practical usage examples ### Viewing the content of locked rows `pgrowlocks` shows which rows are locked (`locked_row` TID) but not their data. To see the actual data of the locked rows, you can join the `pgrowlocks` output with the table itself using the system column `ctid`. ```sql SELECT a.*, -- Select all columns from your table p.locker AS locking_transaction_id, p.modes AS lock_modes, p.pids AS locking_process_ids FROM accounts AS a, pgrowlocks('accounts') AS p WHERE p.locked_row = a.ctid; ``` **Example output:** ```text account_id | owner_name | balance | locking_transaction_id | lock_modes | locking_process_ids ------------+------------+---------+------------------------+-------------------+--------------------- 1 | Alice | 1000.00 | 1027 | {"No Key Update"} | {405} 2 | Bob | 500.00 | 1028 | {"For Update"} | {419} (2 rows) ``` **Warning** Performance Impact: This query can be very inefficient, especially on large tables, as `pgrowlocks` itself scans the table, and the join might add further overhead. Use it cautiously in production environments. ### Identifying blocking sessions and queries One of the most powerful uses of `pgrowlocks` is to help diagnose lock contention. By combining its output with `pg_stat_activity`, you can find out exactly which queries and users are involved in holding or waiting for row locks. 
```sql
SELECT
    p.locked_row,
    p.locker AS locking_transaction_id,
    p.modes AS lock_modes,
    act.pid AS locker_pid,
    act.usename AS locker_user,
    act.query AS locker_query,
    act.state AS locker_state,
    act.wait_event_type AS locker_wait_type,
    act.wait_event AS locker_wait_event
FROM pgrowlocks('accounts') AS p
JOIN pg_stat_activity AS act ON act.pid = ANY(p.pids);
```

The query above shows the details of the session(s) directly holding the row locks identified by `pgrowlocks`. To find sessions _blocked_ by these row locks, you would typically query `pg_locks` where `granted = false` and correlate the `transactionid`, `relation`, and potentially tuple information; a sketch follows the example output below.

**Example output:**

```text
 locked_row | locking_transaction_id |    lock_modes     | locker_pid | locker_user  |                            locker_query                            |    locker_state     | locker_wait_type | locker_wait_event
------------+------------------------+-------------------+------------+--------------+--------------------------------------------------------------------+---------------------+------------------+-------------------
 (0,1)      | 1029                   | {"No Key Update"} | 1601       | neondb_owner | UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;  | idle in transaction | Client           | ClientRead
 (0,2)      | 1030                   | {"For Update"}    | 1629       | neondb_owner | SELECT * FROM accounts WHERE account_id = 2 FOR UPDATE;            | idle in transaction | Client           | ClientRead
(2 rows)
```

This output provides a comprehensive view of the locking situation, including the user and query that are holding the locks. You can use this information to effectively communicate with your team or take action to resolve the contention.
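Here is a minimal sketch of that `pg_locks` approach, under the same demo scenario. It lists lock requests that have not been granted and joins `pg_stat_activity` to show each waiting session's query; treat it as an illustrative starting point rather than part of `pgrowlocks` itself:

```sql
-- Sessions waiting on locks that have not yet been granted
SELECT
    l.pid AS waiting_pid,
    l.locktype,
    l.relation::regclass AS relation,
    l.transactionid,
    act.query AS waiting_query
FROM pg_locks AS l
JOIN pg_stat_activity AS act ON act.pid = l.pid
WHERE NOT l.granted;
```

For any `waiting_pid` returned, the built-in function `pg_blocking_pids(waiting_pid)` reports which backend PIDs are blocking it, which you can match against the `pids` arrays reported by `pgrowlocks`.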
> You can now `COMMIT` or `ROLLBACK` the transactions in Session 1 and Session 2 to release their locks.

## Important considerations and limitations

- **Lock acquisition**: `pgrowlocks` takes an `AccessShareLock` on the target table to read its rows.
- **Blocking**: If an `ACCESS EXCLUSIVE` lock is held on the table (e.g., by an `ALTER TABLE` operation), `pgrowlocks` will be blocked until that exclusive lock is released.
- **Performance**: `pgrowlocks` reads each row of the table to check for locks. This can be slow and resource-intensive on very large tables.

## Conclusion

The `pgrowlocks` extension is a vital tool to diagnose row-level locking contention. By providing a clear view of which rows are locked, by whom, and in what mode, it helps developers and DBAs to quickly identify and resolve performance issues caused by concurrent data access patterns. While it should be used judiciously on large tables due to its scanning nature, its insights are invaluable for troubleshooting complex locking scenarios.

## Resources

- [PostgreSQL `pgrowlocks` documentation](https://www.postgresql.org/docs/current/pgrowlocks.html)
- [Monitor active queries on Neon: powered by `pg_stat_activity`](https://neon.com/docs/introduction/monitor-active-queries)
- [MultiXacts in PostgreSQL: usage, side effects, and monitoring](https://aws.amazon.com/blogs/database/multixacts-in-postgresql-usage-side-effects-and-monitoring/)

---

# Source: https://neon.com/llms/extensions-pgstattuple.txt

# The pgstattuple extension

> The document details the pgstattuple extension for Neon, which allows users to analyze PostgreSQL table and index bloat by providing statistics on tuple-level storage efficiency.

## Source

- [The pgstattuple extension HTML](https://neon.com/docs/extensions/pgstattuple): The original HTML version of this documentation

The `pgstattuple` extension provides a suite of functions to inspect the physical storage of Postgres tables and indexes at a detailed, tuple (row) level. It offers insights into issues like table and index bloat, fragmentation, and overall space utilization, which are crucial for performance tuning and storage management.

## Enable the `pgstattuple` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## `pgstattuple` functions

`pgstattuple` offers several functions to inspect different aspects of your database storage.

### Analyzing table statistics with `pgstattuple()`

The `pgstattuple(relation regclass)` function provides detailed statistics about a table's physical storage. It performs a full scan of the relation.

```sql
SELECT * FROM pgstattuple('your_table_name');
```

Key columns in the output include:

- `table_len`: Total size of the table on disk in bytes.
- `tuple_count`: Number of live (visible) tuples.
- `tuple_len`: Total length of live tuples in bytes.
- `tuple_percent`: Percentage of space occupied by live tuples.
- `dead_tuple_count`: Number of dead tuples (not yet vacuumed).
- `dead_tuple_len`: Total length of dead tuples in bytes.
- `dead_tuple_percent`: Percentage of space occupied by dead tuples. This is a direct indicator of bloat due to dead rows.
- `free_space`: Total free space available within allocated pages in bytes (usable for future `INSERT`s/`UPDATE`s without extending the table).
- `free_percent`: Percentage of total table space that is free.

**Example: Observing table statistics and bloat**

Let's create a `customers` table, populate it, delete some rows to create bloat, and then observe the statistics using `pgstattuple`.
```sql -- Create the customers table CREATE TABLE customers ( customer_id SERIAL PRIMARY KEY, first_name VARCHAR(100), last_name VARCHAR(100), email VARCHAR(255), phone VARCHAR(20), address VARCHAR(255), city VARCHAR(100), state VARCHAR(100), zip_code VARCHAR(20), created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW() ); -- Insert 10,000 rows into the customers table INSERT INTO customers (first_name, last_name, email, phone, address, city, state, zip_code, created_at) SELECT CASE (i % 10) WHEN 0 THEN 'John' WHEN 1 THEN 'Jane' WHEN 2 THEN 'Peter' WHEN 3 THEN 'Mary' WHEN 4 THEN 'Robert' WHEN 5 THEN 'Patricia' WHEN 6 THEN 'Michael' WHEN 7 THEN 'Linda' WHEN 8 THEN 'William' ELSE 'Elizabeth' END || '_' || i::TEXT, CASE (i % 10) WHEN 0 THEN 'Smith' WHEN 1 THEN 'Johnson' WHEN 2 THEN 'Williams' WHEN 3 THEN 'Jones' WHEN 4 THEN 'Brown' WHEN 5 THEN 'Davis' WHEN 6 THEN 'Miller' WHEN 7 THEN 'Wilson' WHEN 8 THEN 'Moore' ELSE 'Taylor' END || '_' || i::TEXT, 'customer' || i::TEXT || '@example.com', '555-' || LPAD((i % 10000)::TEXT, 4, '0'), (i * 10)::TEXT || ' Main St', CASE (i % 5) WHEN 0 THEN 'New York' WHEN 1 THEN 'Los Angeles' WHEN 2 THEN 'Chicago' WHEN 3 THEN 'Houston' ELSE 'Phoenix' END, CASE (i % 5) WHEN 0 THEN 'NY' WHEN 1 THEN 'CA' WHEN 2 THEN 'IL' WHEN 3 THEN 'TX' ELSE 'AZ' END, LPAD((i % 99999)::TEXT, 5, '0'), NOW() - (random() * INTERVAL '365 days') FROM generate_series(1, 10000) AS s(i); -- Delete half of the rows to create dead tuples DELETE FROM customers WHERE customer_id % 2 = 0; -- Check the table statistics before vacuuming SELECT * FROM pgstattuple('customers'); ``` Example output: ```text table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent -----------+-------------+-----------+----------------+------------------+----------------+--------------------+------------+-------------- 1343488 | 5000 | 638144 | 47.5 | 5000 | 645432 | 48.04 | 15320 | 1.14 ``` The output above (your values may vary) shows `dead_tuple_count` of 5000 and `dead_tuple_percent` around 48%. This high percentage of dead tuples indicates significant table bloat caused by the `DELETE` operation. Neon performs automatic `VACUUM` operations, but for immediate analysis or specific needs, you can run `VACUUM` manually. To reclaim the space occupied by these dead tuples for reuse within the table, run: ```sql VACUUM customers; ``` Now, let's check the statistics again: ```sql SELECT * FROM pgstattuple('customers'); ``` Example output (after `VACUUM`): ```text table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent -----------+-------------+-----------+----------------+------------------+----------------+--------------------+------------+-------------- 1343488 | 5000 | 638144 | 47.5 | 0 | 0 | 0 | 661080 | 49.21 ``` After `VACUUM`, `dead_tuple_count` is 0, and `dead_tuple_percent` is 0. The `free_space` (and `free_percent`) has increased significantly, indicating that the space previously occupied by dead tuples is now available for reuse by future `INSERT` or `UPDATE` operations on the `customers` table. **Note** Page Overhead: The `table_len` will always be greater than the sum of `tuple_len`, `dead_tuple_len`, and `free_space`. The difference accounts for page headers, per-page tuple pointers, and padding required for data alignment. 
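To put a number on that overhead, you can derive it from the same `pgstattuple()` output. Here is a small illustrative query (simple arithmetic over the columns shown above, not a function of the extension itself):

```sql
-- Bytes not accounted for by live tuples, dead tuples, or free space:
-- page headers, per-page item pointers, and alignment padding
SELECT table_len - (tuple_len + dead_tuple_len + free_space) AS page_overhead_bytes,
       round(100.0 * (table_len - (tuple_len + dead_tuple_len + free_space)) / table_len, 2) AS page_overhead_percent
FROM pgstattuple('customers');
```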
### Estimating table statistics with `pgstattuple_approx()` The `pgstattuple_approx(relation regclass)` function offers a faster way to get approximate table statistics. It tries to avoid full table scans by using the visibility map (VM) to skip pages known to contain only live tuples. For such pages, it estimates live tuple data from free space map information. Dead tuple statistics reported by this function are exact. ```sql SELECT * FROM pgstattuple_approx('your_table_name'); ``` This function is particularly useful for large tables where a full `pgstattuple()` scan would be too slow or resource-intensive for frequent checks. - Output columns are similar to `pgstattuple()`, but with `approx_` prefixes for estimated values (e.g., `approx_tuple_count`, `approx_free_space`). - `dead_tuple_count` and `dead_tuple_len` are exact. ### Analyzing B-tree index statistics with `pgstatindex()` The `pgstatindex(index regclass)` function provides statistics for B-tree indexes. ```sql SELECT * FROM pgstatindex('your_index_name'); ``` Key columns in the output include: - `version`: B-tree version number. - `tree_level`: Level of the B-tree (0 for an empty index, 1 for a root page with leaves, etc.). - `index_size`: Total size of the index on disk in bytes. - `leaf_pages`: Number of leaf pages (where actual index entries pointing to table rows are stored). - `internal_pages`: Number of internal (non-leaf) pages. - `empty_pages`: Number of completely empty pages within the index. - `deleted_pages`: Number of pages marked as deleted but not yet reclaimed. - `avg_leaf_density`: Average fullness of leaf pages as a percentage. Lower values can indicate bloat or inefficient space usage. - `leaf_fragmentation`: A measure of logical fragmentation of leaf pages. Higher values indicate that logically sequential leaf pages might be physically distant on disk, which can impact scan performance. **Example: Observing Index statistics** ```sql -- Create an index on the customers table CREATE INDEX idx_customers_first_name ON customers (first_name); -- Check index statistics SELECT * FROM pgstatindex('idx_customers_first_name'); ``` Example output (your values may vary): ```text version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation ---------+------------+------------+----------------+----------------+------------+-------------+---------------+------------------+------------------- 4 | 1 | 180224 | 3 | 1 | 20 | 0 | 0 | 86.14 | 0 ``` **Note** Other Index Types: The `pgstattuple` extension also provides `pgstatginindex()` for GIN indexes and `pgstathashindex()` for HASH indexes. While these functions also report on storage characteristics, the specific metrics returned are tailored to the internal structure of GIN and HASH indexes, respectively. ## Practical usage examples ### Detecting and managing table bloat Table bloat occurs primarily when `UPDATE` or `DELETE` operations are performed. `UPDATE` operations in Postgres internally perform a `DELETE` of the old row version and an `INSERT` of the new row version. The space occupied by these "dead" (old or deleted) tuples is not immediately reclaimed by the operating system. Instead, it remains within the table's allocated pages, potentially leading to larger table sizes than necessary and reduced query performance due to scanning more pages. `pgstattuple` helps quantify this bloat. In our `customers` table example: 1. We inserted 10,000 rows. 2. 
We then deleted half of these rows (`DELETE FROM customers WHERE customer_id % 2 = 0;`). These 5,000 deleted rows become "dead tuples". The `pgstattuple('customers')` output _before_ running `VACUUM` was: ```text table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent -----------+-------------+-----------+----------------+------------------+----------------+--------------------+------------+-------------- 1343488 | 5000 | 638144 | 47.5 | 5000 | 645432 | 48.04 | 15320 | 1.14 ``` Here's how to interpret this: - `dead_tuple_count` is 5000, matching the number of rows we deleted. - `dead_tuple_percent` is 48.04%, indicating that nearly half the space within the data pages (excluding page overhead and existing free space) is occupied by these dead tuples. This is a clear sign of significant bloat. - `free_percent` is low (1.14%), meaning there isn't much readily available space within the existing pages for new data _before_ a `VACUUM`. Upon identifying bloat, `VACUUM` is the standard command to reclaim this space _for reuse by Postgres_. A standard `VACUUM` marks the space occupied by dead tuples as free, making it available for future `INSERT`s and `UPDATE`s on the same table. It typically does not shrink the table file on disk (i.e., `table_len` often remains the same). After running `VACUUM customers;`, the `pgstattuple('customers')` output showed: ```text table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent -----------+-------------+-----------+----------------+------------------+----------------+--------------------+------------+-------------- 1343488 | 5000 | 638144 | 47.5 | 0 | 0 | 0 | 661080 | 49.21 ``` Observations: - `dead_tuple_count` and `dead_tuple_percent` are now 0, confirming the dead tuples have been processed. - `free_percent` has increased dramatically to 49.21%. This space, previously held by dead tuples, is now marked as free and can be reused by new rows or updates to existing rows in the `customers` table without requiring Postgres to request more disk space from the OS for this table immediately. - `table_len` (1343488) remained the same, which is typical for a standard `VACUUM`. To return space to the operating system and reduce `table_len`, you would need `VACUUM FULL` or tools like `pg_repack`, but these come with different locking implications. To run `VACUUM FULL`, which compacts the table and returns space to the OS, you would use: ```sql VACUUM FULL customers; ``` **Warning**: `VACUUM FULL` requires an exclusive lock on the table, which can block other operations. Use it judiciously, especially in production environments. Running the `pgstattuple('customers')` again after `VACUUM FULL` would show a reduced `table_len`, indicating that the space has been returned to the operating system. ```sql SELECT * FROM pgstattuple('customers'); ``` **Example output (after `VACUUM FULL`)**: ```text table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 671744 | 5000 | 638144 | 95 | 0 | 0 | 0 | 11304 | 1.68 (1 row) ``` ### Identifying top bloated tables You can query `pg_class` and use `pgstattuple` functions to find the most bloated tables in your database. 
This query focuses on the actual space occupied by dead tuples. ```sql SELECT c.relname AS table_name, pg_size_pretty(s.table_len) AS total_table_size, round(s.dead_tuple_percent::numeric, 2) AS dead_tuple_percentage, pg_size_pretty(s.dead_tuple_len) AS space_occupied_by_dead_tuples, round(s.free_percent::numeric, 2) AS free_space_percentage FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace CROSS JOIN LATERAL pgstattuple(c.oid::regclass) s -- For better performance on large DBs, consider pgstattuple_approx(c.oid::regclass) s WHERE c.relkind IN ('r', 'm') -- r = ordinary table, m = materialized view AND n.nspname NOT IN ('pg_catalog', 'information_schema') -- Exclude system schemas AND n.nspname NOT LIKE 'pg_toast%' -- Exclude TOAST tables AND n.nspname NOT LIKE 'pg_temp_%' -- Exclude temporary schemas AND s.dead_tuple_len > 0 -- Only consider tables with some dead tuple space ORDER BY s.dead_tuple_len DESC -- Order by the tables with the most space taken by dead tuples LIMIT 10; ``` **Warning** Resource Intensive Query: Running `pgstattuple()` for every table can be very resource-intensive. For larger databases, consider using `pgstattuple_approx()` in the `CROSS JOIN LATERAL` subquery or filtering tables by size first (e.g., adding `AND pg_total_relation_size(c.oid) > '1GB'` to the `WHERE` clause). ### Diagnosing and resolving index bloat and fragmentation For B-tree indexes, low `avg_leaf_density` or high `leaf_fragmentation` can indicate performance issues. ```sql SELECT index_size, leaf_pages, avg_leaf_density, leaf_fragmentation FROM pgstatindex('idx_customers_first_name'); ``` If `avg_leaf_density` is low (e.g., < 60-70%) or `leaf_fragmentation` is high (e.g., > 20-30% for frequently scanned indexes), the index might benefit from rebuilding. To rebuild an index: ```sql REINDEX INDEX idx_customers_first_name; ``` After reindexing, check `pgstatindex` again; you should see improved `avg_leaf_density` (closer to 90%) and reduced `leaf_fragmentation`. ## Best practices - **Resource Intensive:** `pgstattuple()` performs a full table/index scan, which can be I/O and CPU intensive, especially on large objects. `pgstattuple_approx()` is faster but still reads a portion of the table. - **Run off-peak:** Schedule `pgstattuple` analysis during low-traffic periods to minimize impact on production workloads. - **Target specific objects:** Instead of scanning all tables/indexes, focus on known large or frequently modified objects, or those identified as problematic by other monitoring tools. - **Combine with `pg_repack`:** For online table and index reorganization to remove bloat without extensive locking (unlike `VACUUM FULL` or `REINDEX`), consider the [`pg_repack` extension](https://neon.com/docs/extensions/pg_repack). ## Conclusion The `pgstattuple` extension is a powerful diagnostic tool for understanding the physical storage characteristics of your Postgres database within Neon. It allows you to identify and quantify table and index bloat and fragmentation, leading to more effective maintenance strategies, better autovacuum tuning, and ultimately, improved database performance and storage efficiency. ## Resources - [PostgreSQL documentation for pgstattuple](https://www.postgresql.org/docs/current/pgstattuple.html) --- # Source: https://neon.com/llms/extensions-pgvector.txt # The pgvector extension > The document details the pgvector extension for Neon, enabling users to store and perform similarity searches on vector data within PostgreSQL databases. 
## Source

- [The pgvector extension HTML](https://neon.com/docs/extensions/pgvector): The original HTML version of this documentation

The `pgvector` extension enables you to store vector embeddings and perform vector similarity search in Postgres. It is particularly useful for applications involving natural language processing, such as those built on top of OpenAI's GPT models.

`pgvector` supports:

- Exact and approximate nearest neighbor search
- Single-precision, half-precision, binary, and sparse vectors
- L2 distance, inner product, cosine distance, L1 distance, Hamming distance, and Jaccard distance
- Any language with a Postgres client
- ACID compliance, point-in-time recovery, JOINs, and all other Postgres features

This topic describes how to enable the `pgvector` extension in Neon and how to create, store, and query vectors.

## Enable the pgvector extension

You can enable the `pgvector` extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to Neon.

```sql
CREATE EXTENSION vector;
```

## Use a previous version of pgvector

Neon allows you to install the previous version of `pgvector`, which is one version behind the latest supported version. For example, if Neon's latest supported `pgvector` version is 0.8.0, you can install the prior version, 0.7.4, by specifying the version number:

```sql
CREATE EXTENSION vector VERSION '0.7.4';
```

To check the latest supported `pgvector` version on Neon, visit our [Postgres extensions page](https://neon.com/docs/extensions/pg-extensions). You can install one version back from that version. For a full version history, see the [pgvector changelog](https://github.com/pgvector/pgvector/blob/master/CHANGELOG.md). Note that `pgvector` versions are not always sequential; for example, version 0.7.4 was followed by 0.8.0.

## Create a table to store vectors

To create a table for storing vectors, you would use an SQL command similar to the following. Embeddings are stored in the `VECTOR` type column. You can adjust the number of dimensions as needed.

```sql
CREATE TABLE items (
  id BIGSERIAL PRIMARY KEY,
  embedding VECTOR(3)
);
```

**Note**: The `pgvector` extension supports some specialized types other than `VECTOR` for storing embeddings. See [HNSW vector types](https://neon.com/docs/extensions/pgvector#hnsw-vector-types), and [IVFFlat vector types](https://neon.com/docs/extensions/pgvector#ivfflat-vector-types).

This command generates a table named `items` with an `embedding` column capable of storing vectors with 3 dimensions. By comparison, OpenAI's `text-embedding-3-small` model produces 1536-dimensional embeddings by default for each piece of text; higher-dimensional embeddings can capture more semantic detail for natural language processing tasks. However, using larger embeddings generally costs more and consumes more compute, memory, and storage than using smaller embeddings. To learn more about embeddings and the cost-performance tradeoff, see [Embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings), in the _OpenAI documentation_.

## Storing embeddings

After generating embeddings using a service like [OpenAI's Embeddings API](https://platform.openai.com/docs/api-reference/embeddings), you can store them in your database. Using a Postgres client library in your preferred programming language, you can execute an `INSERT` statement similar to the following to store embeddings.
- Insert two new rows into the `items` table with the provided embeddings.

```sql
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
```

- Load vectors in bulk using the `COPY` command:

```sql
COPY items (embedding) FROM STDIN WITH (FORMAT BINARY);
```

**Tip**: For a Python script that loads embeddings in bulk, refer to this [Bulk loading with COPY](https://github.com/pgvector/pgvector-python/blob/master/examples/loading/example.py) example provided in the `pgvector` GitHub repository.

- Upsert vectors:

```sql
INSERT INTO items (id, embedding) VALUES (1, '[1,2,3]'), (2, '[4,5,6]')
ON CONFLICT (id) DO UPDATE SET embedding = EXCLUDED.embedding;
```

- Update vectors:

```sql
UPDATE items SET embedding = '[1,2,3]' WHERE id = 1;
```

- Delete vectors:

```sql
DELETE FROM items WHERE id = 1;
```

## Querying vectors

To retrieve vectors and calculate similarity, use `SELECT` statements and the distance function operators supported by `pgvector`.

- Get the nearest neighbors to a vector by L2 distance:

```sql
SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```

- Get the nearest neighbors to a row by L2 distance:

```sql
SELECT * FROM items WHERE id != 1 ORDER BY embedding <-> (SELECT embedding FROM items WHERE id = 1) LIMIT 5;
```

- Get rows within a certain distance by L2 distance:

```sql
SELECT * FROM items WHERE embedding <-> '[3,1,2]' < 5;
```

**Note**: To use an index with a query, include `ORDER BY` and `LIMIT` clauses, as shown in the first two query examples above.

### Distance function operators

- `<->` - L2 distance
- `<#>` - (negative) inner product
- `<=>` - cosine distance
- `<+>` - L1 distance

**Note**: The inner product operator (`<#>`) returns the negative inner product since Postgres only supports `ASC` order index scans on operators.

### Distance queries

- Get the distances:

```sql
SELECT embedding <-> '[3,1,2]' AS distance FROM items;
```

- For inner product, multiply by `-1` (since `<#>` returns the negative inner product):

```sql
SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items;
```

- For cosine similarity, use `1 -` cosine distance:

```sql
SELECT 1 - (embedding <=> '[3,1,2]') AS cosine_similarity FROM items;
```

### Aggregate queries

- To average vectors:

```sql
SELECT AVG(embedding) FROM items;
```

- To average groups of vectors:

```sql
SELECT category_id, AVG(embedding) FROM items GROUP BY category_id;
```

## Indexing vectors

By default, `pgvector` performs exact nearest neighbor search, providing perfect recall. Adding an index on the vector column can improve query performance with a minor cost in recall. Unlike typical indexes, you will see different results for queries after adding an approximate index.

Supported index types include:

- [HNSW](https://neon.com/docs/extensions/pgvector#hnsw)
- [IVFFlat](https://neon.com/docs/extensions/pgvector#ivfflat)

### HNSW

An HNSW index creates a multilayer graph. It has better query performance than an IVFFlat index (in terms of speed-recall tradeoff), but has slower build times and uses more memory. Also, an HNSW index can be created without any data in the table since there isn't a training step like there is for an IVFFlat index.

#### HNSW vector types

HNSW indexes are supported with the following vector types:

- `vector` - up to 2,000 dimensions
- `halfvec` - up to 4,000 dimensions
- `bit` - up to 64,000 dimensions
- `sparsevec` - up to 1,000 non-zero elements

**Note**: Notice how indexes are defined differently depending on the distance function being used.
For example, `vector_l2_ops` is specified for L2 distance, `vector_ip_ops` for inner product, and so on. Make sure you define your index according to the distance function you intend to use. - L2 distance: ```sql CREATE INDEX ON items USING hnsw (embedding vector_l2_ops); ``` - Inner product: ```sql CREATE INDEX ON items USING hnsw (embedding vector_ip_ops); ``` - Cosine distance: ```sql CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops); ``` - L1 distance: ```sql CREATE INDEX ON items USING hnsw (embedding vector_l1_ops); ``` - Hamming distance: ```sql CREATE INDEX ON items USING hnsw (embedding bit_hamming_ops); ``` - Jaccard distance: ```sql CREATE INDEX ON items USING hnsw (embedding bit_jaccard_ops); ``` #### HNSW index build options - `m` - the max number of connections per layer (`16` by default) - `ef_construction` - the size of the dynamic candidate list for constructing the graph (`64` by default) This example demonstrates how to set the parameters: ```sql CREATE INDEX ON items USING hnsw (embedding vector_l2_ops) WITH (m = 16, ef_construction = 64); ``` A higher value of `ef_construction` provides better recall at the cost of index build time and insert speed. #### HNSW index query options You can specify the size of the candidate list for search. The size is `40` by default. ```sql SET hnsw.ef_search = 100; ``` A higher value provides better recall at the cost of speed. This example shows how to use `SET LOCAL` inside a transaction to set `ef_search` for a single query: ```sql BEGIN; SET LOCAL hnsw.ef_search = 100; SELECT ... COMMIT; ``` #### HNSW index build time To optimize index build time, consider configuring the `maintenance_work_mem` and `max_parallel_maintenance_workers` session variables before building an index: **Note**: Like other index types, it's faster to create an index after loading your initial data. - `maintenance_work_mem` Indexes build significantly faster when the graph fits into Postgres `maintenance_work_mem`. A notice is shown when the graph no longer fits: ```text NOTICE: hnsw graph no longer fits into maintenance_work_mem after 100000 tuples DETAIL: Building will take significantly more time. HINT: Increase maintenance_work_mem to speed up builds. ``` In Postgres, the `maintenance_work_mem` setting determines the maximum memory allocation for tasks such as `CREATE INDEX`. The default `maintenance_work_mem` value in Neon is set according to your Neon [compute size](https://neon.com/docs/manage/computes#how-to-size-your-compute). To optimize `pgvector` index build time, you can increase the `maintenance_work_mem` setting for the current session with a command similar to the following: ```sql SET maintenance_work_mem='10 GB'; ``` The recommended setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM. For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GB is less than 50% of the RAM available for that compute size. - `max_parallel_maintenance_workers` You can also speed up index creation by increasing the number of parallel workers. The `max_parallel_maintenance_workers` setting determines the maximum number of parallel workers that can be started by a single utility command such as `CREATE INDEX`. By default, this setting is `2`.
For efficient parallel index creation, you can increase this setting. Parallel workers are taken from the pool of processes established by `max_worker_processes` (`10`), limited by `max_parallel_workers` (`8`). You can increase the `max_parallel_maintenance_workers` setting for the current session with a command similar to the following: ```sql SET max_parallel_maintenance_workers = 7; ``` For example, if you have a 7 CU compute size, you could set `max_parallel_maintenance_workers` to 7, before index creation, to make use of all of the vCPUs available. For a large number of workers, you may also need to increase the Postgres `max_parallel_workers`, which is `8` by default. #### Check indexing progress You can check indexing progress with the following query: ```sql SELECT phase, round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS "%" FROM pg_stat_progress_create_index; ``` The phases for HNSW are: 1. initializing 2. loading tuples For related information, see [CREATE INDEX Progress Reporting](https://www.postgresql.org/docs/current/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING), in the _PostgreSQL documentation_. ### IVFFlat An IVFFlat index divides vectors into lists and searches a subset of those lists that are closest to the query vector. It has faster build times and uses less memory than HNSW, but has lower query performance with respect to the speed-recall tradeoff. Keys to achieving good recall include: - Creating the index after the table has some data - Choosing an appropriate number of lists. A good starting point is `rows / 1000` for up to 1M rows and `sqrt(rows)` for over 1M rows. - Specifying an appropriate number of [probes](https://neon.com/docs/extensions/pgvector#ivfflat-query-options) when querying. A higher number is better for recall, and a lower number is better for speed. A good starting point is `sqrt(lists)`. #### IVFFlat vector types IVFFlat indexes are supported with the following vector types: - `vector` - up to 2,000 dimensions - `halfvec` - up to 4,000 dimensions (added in 0.7.0) - `bit` - up to 64,000 dimensions (added in 0.7.0) **Note**: Notice how indexes are defined differently depending on the distance function being used. For example, `vector_l2_ops` is specified for L2 distance, `vector_cosine_ops` for cosine distance, and so on. The following examples show how to add an index for each distance function: - L2 distance: ```sql CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 100); ``` **Note**: Use `halfvec_l2_ops` for halfvec (and similar with the other distance functions). - Inner product: ```sql CREATE INDEX ON items USING ivfflat (embedding vector_ip_ops) WITH (lists = 100); ``` - Cosine distance: ```sql CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100); ``` - Hamming distance: ```sql CREATE INDEX ON items USING ivfflat (embedding bit_hamming_ops) WITH (lists = 100); ``` #### IVFFlat query options You can specify the number of probes, which is `1` by default. ```sql SET ivfflat.probes = 10; ``` A higher value provides better recall at the cost of speed. You can set the value to the number of lists for exact nearest neighbor search, at which point the planner won't use the index. You can also use `SET LOCAL` inside a transaction to set the number of probes for a single query: ```sql BEGIN; SET LOCAL ivfflat.probes = 10; SELECT ...
COMMIT; ``` #### IVFFlat index build time To optimize index build time, consider configuring the `maintenance_work_mem` and `max_parallel_maintenance_workers` session variables before building an index: **Note**: Like other index types, it's faster to create an index after loading your initial data. - `maintenance_work_mem` In Postgres, the `maintenance_work_mem` setting determines the maximum memory allocation for tasks such as `CREATE INDEX`. The default `maintenance_work_mem` value in Neon is set according to your Neon [compute size](https://neon.com/docs/manage/computes#how-to-size-your-compute). For a table that shows the `maintenance_work_mem` setting by compute size, see [Parameter settings that differ by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). To optimize `pgvector` index build time, you can increase the `maintenance_work_mem` setting for the current session with a command similar to the following: ```sql SET maintenance_work_mem='10 GB'; ``` The recommended setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM. For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GB is less than 50% of the RAM available for that compute size. - `max_parallel_maintenance_workers` You can also speed up index creation by increasing the number of parallel workers. The `max_parallel_maintenance_workers` setting determines the maximum number of parallel workers that can be started by a single utility command such as `CREATE INDEX`. By default, this setting is `2`. For efficient parallel index creation, you can increase this setting. Parallel workers are taken from the pool of processes established by `max_worker_processes` (`10`), limited by `max_parallel_workers` (`8`). You can increase the `max_parallel_maintenance_workers` setting for the current session with a command similar to the following: ```sql SET max_parallel_maintenance_workers = 7; ``` For example, if you have a 7 CU compute size, you could set `max_parallel_maintenance_workers` to 7, before index creation, to make use of all of the vCPUs available. For a large number of workers, you may also need to increase the Postgres `max_parallel_workers`, which is `8` by default. #### Check indexing progress You can check indexing progress with the following query: ```sql SELECT phase, round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS "%" FROM pg_stat_progress_create_index; ``` The phases for IVFFlat are: 1. initializing 2. performing k-means 3. assigning tuples 4. loading tuples For related information, see [CREATE INDEX Progress Reporting](https://www.postgresql.org/docs/current/progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING), in the _PostgreSQL documentation_.
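To put the build-time guidance above into practice, here is an illustrative build session that combines the settings discussed in this section. The index type and values are examples only; size them according to your table and compute:

```sql
-- Illustrative session for building a vector index on the items table.
-- The values shown assume a larger compute (e.g., 7 CU with 28 GB RAM).
SET maintenance_work_mem = '10 GB';        -- keep the build within memory
SET max_parallel_maintenance_workers = 7;  -- use the available vCPUs

CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);

-- Restore the session defaults once the build completes
RESET maintenance_work_mem;
RESET max_parallel_maintenance_workers;
```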
## Filtering There are a few ways to index nearest neighbor queries with a `WHERE` clause: ```sql SELECT * FROM items WHERE category_id = 123 ORDER BY embedding <-> '[3,1,2]' LIMIT 5; ``` Create an index on one or more of the `WHERE` columns for exact search: ```sql CREATE INDEX ON items (category_id); ``` Create a [partial index](https://www.postgresql.org/docs/current/indexes-partial.html) on the vector column for approximate search: ```sql CREATE INDEX ON items USING hnsw (embedding vector_l2_ops) WHERE (category_id = 123); ``` Use [partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html) for approximate search on many different values of the `WHERE` columns: ```sql CREATE TABLE items (embedding vector(3), category_id int) PARTITION BY LIST(category_id); ``` ## Half-precision vectors Half-precision vectors enable the storage of vector embeddings using 16-bit floating-point numbers, or half-precision, which reduces both storage size and memory usage by nearly half compared to 32-bit floats. This efficiency comes with minimal loss in precision, making half-precision vectors beneficial for applications dealing with large datasets or facing memory constraints. When integrating OpenAI's embeddings, you can take advantage of half-precision vectors by storing embeddings in a compressed format. For instance, OpenAI's high-dimensional embeddings can be stored effectively as half-precision vectors while retaining high accuracy (rates of around 98% have been reported). This approach optimizes memory usage while maintaining performance. You can use the `halfvec` type to store half-precision vectors, as shown here: ```sql CREATE TABLE items (id bigserial PRIMARY KEY, embedding halfvec(3)); ``` ## Binary vectors Binary vector embeddings are a form of vector representation where each component is encoded as a binary digit, typically 0 or 1. For example, the word "cat" might be represented as `[0, 1, 0, 1, 1, 0, 0, 1, ...]`, with each position in the vector being binary. These embeddings are advantageous for their efficiency in both storage and computation. Because they use only one bit per dimension, binary embeddings require less memory compared to traditional embeddings that use floating-point numbers. This makes them useful when there is limited memory or when dealing with large datasets. Additionally, operations with binary values are generally quicker than those involving real numbers, leading to faster computations. However, the trade-off with binary vector embeddings is a potential loss in accuracy. Unlike dense embeddings, which have real-valued entries and can represent subtleties in the data, binary embeddings simplify the representation. This can result in a loss of information and may not fully capture the intricacies of the data they represent. Use the `bit` type to store binary vector embeddings: ```sql CREATE TABLE items (id bigserial PRIMARY KEY, embedding bit(3)); INSERT INTO items (embedding) VALUES ('000'), ('111'); ``` Get the nearest neighbors by Hamming distance (added in 0.7.0): ```sql SELECT * FROM items ORDER BY embedding <~> '101' LIMIT 5; ``` Or, before 0.7.0: ```sql SELECT * FROM items ORDER BY bit_count(embedding # '101') LIMIT 5; ``` Jaccard distance (`<%>`) is also supported with binary vector embeddings. ## Binary quantization Binary quantization is a process that transforms dense or sparse embeddings into binary representations by thresholding vector dimensions to either 0 or 1.
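As a quick illustration of the thresholding, the `binary_quantize` function maps each positive component to `1` and everything else to `0` (a minimal sketch; the result shown follows pgvector's documented behavior):

```sql
-- Positive components become 1; zero and negative components become 0
SELECT binary_quantize('[1,-2,3]'::vector);
-- Result: 101
```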
Use expression indexing for binary quantization: ```sql CREATE INDEX ON items USING hnsw ((binary_quantize(embedding)::bit(3)) bit_hamming_ops); ``` Get the nearest neighbors by Hamming distance: ```sql SELECT * FROM items ORDER BY binary_quantize(embedding)::bit(3) <~> binary_quantize('[1,-2,3]') LIMIT 5; ``` Re-rank by the original vectors for better recall: ```sql SELECT * FROM ( SELECT * FROM items ORDER BY binary_quantize(embedding)::bit(3) <~> binary_quantize('[1,-2,3]') LIMIT 20 ) ORDER BY embedding <=> '[1,-2,3]' LIMIT 5; ``` ## Sparse vectors Sparse vectors have a large number of dimensions, where only a small proportion are non-zero. Use the `sparsevec` type to store sparse vectors: ```sql CREATE TABLE items (id bigserial PRIMARY KEY, embedding sparsevec(5)); ``` Insert vectors: ```sql INSERT INTO items (embedding) VALUES ('{1:1,3:2,5:3}/5'), ('{1:4,3:5,5:6}/5'); ``` The format is `{index1:value1,index2:value2}/dimensions`, and indices start at 1, as with SQL arrays. Get the nearest neighbors by L2 distance: ```sql SELECT * FROM items ORDER BY embedding <-> '{1:3,3:1,5:2}/5' LIMIT 5; ``` ## Differences in behavior between pgvector 0.5.1 and 0.7.0 The following corner-case differences in behavior were found during our testing of `pgvector` 0.7.0: ### Distance between a valid and NULL vector The distance between a valid and `NULL` vector (`NULL::vector`) with `pgvector` 0.7.0 differs from `pgvector` 0.5.1 when using an HNSW or IVFFLAT index, as shown in the following examples: **HNSW** For the following script, which compares non-null vectors to a `NULL::vector`, the output differs between versions: ```sql SET enable_seqscan = off; CREATE TABLE t (val vector(3)); INSERT INTO t (val) VALUES ('[0,0,0]'), ('[1,2,3]'), ('[1,1,1]'), (NULL); CREATE INDEX ON t USING hnsw (val vector_l2_ops); INSERT INTO t (val) VALUES ('[1,2,4]'); SELECT * FROM t ORDER BY val <-> (SELECT NULL::vector); ``` `pgvector` 0.7.0 output: ```text val --------- [1,1,1] [1,2,4] [1,2,3] [0,0,0] ``` `pgvector` 0.5.1 output: ```text val --------- [0,0,0] [1,1,1] [1,2,3] [1,2,4] ``` **IVFFLAT** For the following script, which compares non-null vectors to a `NULL::vector`, the output differs between versions: ```sql SET enable_seqscan = off; CREATE TABLE t (val vector(3)); INSERT INTO t (val) VALUES ('[0,0,0]'), ('[1,2,3]'), ('[1,1,1]'), (NULL); CREATE INDEX ON t USING ivfflat (val vector_l2_ops) WITH (lists = 1); INSERT INTO t (val) VALUES ('[1,2,4]'); SELECT * FROM t ORDER BY val <-> (SELECT NULL::vector); ``` `pgvector` 0.7.0 output: ```text val --------- [0,0,0] [1,2,3] [1,1,1] [1,2,4] ``` `pgvector` 0.5.1 output: ```text val --------- [0,0,0] [1,1,1] [1,2,3] [1,2,4] ``` ### Improved error messages for invalid literals If you use an invalid literal value for the `vector` data type, you will now see the following error message: ```sql SELECT '[4e38,1]'::vector; ERROR: "4e38" is out of range for type vector LINE 1: SELECT '[4e38,1]'::vector; ``` ## Resources `pgvector` source code: [https://github.com/pgvector/pgvector](https://github.com/pgvector/pgvector) --- # Source: https://neon.com/llms/extensions-postgis-related-extensions.txt # PostGIS-related extensions > This document details the PostGIS-related extensions available in Neon, outlining their installation and usage to enhance spatial data capabilities within the platform.
## Source - [PostGIS-related extensions HTML](https://neon.com/docs/extensions/postgis-related-extensions): The original HTML version of this documentation PostGIS adds support for geospatial data in PostgreSQL, providing both data types and functions to store and analyze it effectively. The Postgres ecosystem includes multiple extensions built on top of PostGIS to further enhance its capabilities. This guide introduces you to some of these extensions supported by Neon: - [pgrouting](https://neon.com/docs/extensions/postgis-related-extensions#pgrouting) - [H3_PostGIS](https://neon.com/docs/extensions/postgis-related-extensions#h3-and-h3-postgis) - [PostGIS SFCGAL](https://neon.com/docs/extensions/postgis-related-extensions#postgis-sfcgal) - [PostGIS Tiger Geocoder](https://neon.com/docs/extensions/postgis-related-extensions#postgis-tiger-geocoder) These extensions offer specialized functionality for routing, hierarchical geospatial indexing, advanced geometric operations, and geocoding. We'll explore how to enable these extensions and provide examples of common use cases. **Note**: These extensions are open-source and can be installed on any Neon Project using the instructions below. For detailed installation instructions, please refer to the documentation for each extension. **Version availability:** For up-to-date information on supported versions for each extension, refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon. ## Enable the PostGIS extension The extensions listed below typically need `PostGIS` to be installed first, or work in conjunction with it. You can enable `PostGIS` by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon. ```sql CREATE EXTENSION IF NOT EXISTS postgis; ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). ## pgrouting `pgrouting` extends PostGIS to provide geospatial routing and network analysis functionality. It's useful for applications involving transportation networks, logistics planning, and urban mobility analysis. ### Enable the pgrouting extension Enable the extension by running the following SQL statement: ```sql CREATE EXTENSION IF NOT EXISTS pgrouting; ``` ### Example usage Let's consider a scenario where we need to find the shortest path between two points in a road network.
**Create a table with road network data** ```sql -- Create a table to store road network data DROP TABLE IF EXISTS road_network; CREATE TABLE road_network ( id SERIAL PRIMARY KEY, name VARCHAR(100), source INTEGER, target INTEGER, cost FLOAT, reverse_cost FLOAT, geom GEOMETRY(LINESTRING, 4326) ); -- Insert sample data, representing a simplified road network INSERT INTO road_network (name, source, target, cost, reverse_cost, geom) VALUES ('Main St', 1, 2, 0.5, 0.5, ST_GeomFromText('LINESTRING(-73.98 40.75, -73.97 40.75)', 4326)), ('Broadway', 2, 3, 0.8, 0.8, ST_GeomFromText('LINESTRING(-73.97 40.75, -73.96 40.76)', 4326)), ('5th Ave', 4, 5, 0.7, 0.7, ST_GeomFromText('LINESTRING(-73.97 40.77, -73.98 40.76)', 4326)), ('Central Park W', 5, 1, 0.9, 0.9, ST_GeomFromText('LINESTRING(-73.98 40.76, -73.98 40.75)', 4326)), ('3rd Ave', 2, 5, 1.3, 1.3, ST_GeomFromText('LINESTRING(-73.97 40.75, -73.98 40.76)', 4326)), ('Park Dr N', 4, 1, 1.4, 1.4, ST_GeomFromText('LINESTRING(-73.97 40.77, -73.98 40.75)', 4326)); ``` This dataset represents a simplified road network with 6 road segments connecting 5 intersections. **Use pgrouting to find the shortest path between nodes** We can use pgrouting's `pgr_dijkstra` function to find the shortest path between two nodes: ```sql SELECT seq, node, edge, route.cost, agg_cost, rn.name AS road_name FROM pgr_dijkstra( 'SELECT id, source, target, cost FROM road_network', 2, -- start node 4, -- end node directed := false ) AS route LEFT JOIN road_network rn ON route.edge = rn.id ORDER BY seq; ``` This query returns the sequence of edges that form the shortest path from node 2 to node 4. ```text seq | node | edge | cost | agg_cost | road_name -----+------+------+------+----------+----------- 1 | 2 | 1 | 0.5 | 0 | Main St 2 | 1 | 6 | 1.4 | 0.5 | Park Dr N 3 | 4 | -1 | 0 | 1.9 | ``` **Use pgrouting to find alternative routes** For navigation applications, we might need to find multiple alternative routes. We can use the `pgr_ksp` function to find the K-shortest paths between two nodes: ```sql SELECT route.path_id, route.path_seq, route.node, route.edge, route.cost, route.agg_cost, rn.name AS road_name FROM pgr_ksp( 'SELECT id, source, target, cost, reverse_cost FROM road_network', 1, -- start node 4, -- end node 2, -- number of alternative paths directed := false, heap_paths := false ) AS route LEFT JOIN road_network rn ON route.edge = rn.id ORDER BY route.path_id, route.path_seq; ``` This query returns two sequences of edges that can be used to go from node 1 to node 4. ```text path_id | path_seq | node | edge | cost | agg_cost | road_name ---------+----------+------+------+------+----------+---------------- 1 | 1 | 1 | 6 | 1.4 | 0 | Park Dr N 1 | 2 | 4 | -1 | 0 | 1.4 | 2 | 1 | 1 | 4 | 0.9 | 0 | Central Park W 2 | 2 | 5 | 3 | 0.7 | 0.9 | 5th Ave 2 | 3 | 4 | -1 | 0 | 1.6 | ``` ## H3 and H3 PostGIS H3 is a hierarchical geospatial indexing system. It divides the Earth's surface into hexagonal cells at multiple resolutions and provides a unique addressing system for location data. It is used for applications like optimizing delivery zones and service areas, geospatial aggregation, and analytics. The H3 functionality is split into two extensions: `h3` and `h3_postgis`.
### Enable the H3 and H3_PostGIS extensions Enable these extensions by running the following SQL statements: ```sql CREATE EXTENSION IF NOT EXISTS h3 CASCADE; CREATE EXTENSION IF NOT EXISTS h3_postgis CASCADE; ``` ### Example usage We will show how to use H3 to analyze ride-sharing data in a large city, focusing on the distribution of pickup locations. **Create a table with pickup location data** ```sql DROP TABLE IF EXISTS ride_pickups; CREATE TABLE ride_pickups ( id SERIAL PRIMARY KEY, pickup_time TIMESTAMP, pickup_location GEOMETRY(POINT, 4326) ); -- Insert sample data INSERT INTO ride_pickups (pickup_time, pickup_location) VALUES ('2023-06-15 08:30:00', ST_SetSRID(ST_MakePoint(-73.9812, 40.7657), 4326)), ('2023-06-15 09:15:00', ST_SetSRID(ST_MakePoint(-73.9815, 40.7659), 4326)), ('2023-06-15 10:00:00', ST_SetSRID(ST_MakePoint(-73.9810, 40.7655), 4326)), ('2023-06-15 11:30:00', ST_SetSRID(ST_MakePoint(-73.9934, 40.7505), 4326)), ('2023-06-15 12:45:00', ST_SetSRID(ST_MakePoint(-73.9937, 40.7508), 4326)), ('2023-06-15 14:00:00', ST_SetSRID(ST_MakePoint(-74.0060, 40.7128), 4326)), ('2023-06-15 15:30:00', ST_SetSRID(ST_MakePoint(-73.9619, 40.7681), 4326)), ('2023-06-15 17:00:00', ST_SetSRID(ST_MakePoint(-73.9622, 40.7683), 4326)), ('2023-06-15 18:30:00', ST_SetSRID(ST_MakePoint(-73.9840, 40.7549), 4326)), ('2023-06-15 20:00:00', ST_SetSRID(ST_MakePoint(-73.9887, 40.7229), 4326)); ``` This dataset represents the pickup locations for a ride-sharing service in a large city. **Convert points to H3 indexes** We can use the `h3_lat_lng_to_cell` function to convert lat/long coordinates to H3 indexes: ```sql SELECT h3_lat_lng_to_cell(pickup_location, 9) AS h3_index FROM ride_pickups ORDER BY RANDOM() LIMIT 5; ``` This query converts pickup locations to H3 indexes at resolution 9, returning five random rows. ```text h3_index ----------------- 892a100d2cbffff 892a1072893ffff 892a100d693ffff 892a100d2cbffff 892a100d66bffff (5 rows) ``` **Aggregate data by H3 cells** Let's aggregate the pickup data into H3 cells at resolution 8 (average hexagon edge length of ~461 meters) to identify hotspots: ```sql SELECT h3_lat_lng_to_cell(pickup_location, 8) AS h3_index, COUNT(*) AS pickup_count, MIN(pickup_time) AS earliest_pickup, MAX(pickup_time) AS latest_pickup FROM ride_pickups GROUP BY 1 ORDER BY pickup_count DESC; ``` This query groups the dataset by the H3 index, and then provides a count of pickups, as well as the earliest and latest pickup times for each cell. ```text h3_index | pickup_count | earliest_pickup | latest_pickup -----------------+--------------+---------------------+--------------------- 882a100d65fffff | 3 | 2023-06-15 08:30:00 | 2023-06-15 10:00:00 882a100d2dfffff | 2 | 2023-06-15 11:30:00 | 2023-06-15 12:45:00 882a100d69fffff | 2 | 2023-06-15 15:30:00 | 2023-06-15 17:00:00 882a107289fffff | 1 | 2023-06-15 14:00:00 | 2023-06-15 14:00:00 882a1072cbfffff | 1 | 2023-06-15 20:00:00 | 2023-06-15 20:00:00 882a100d67fffff | 1 | 2023-06-15 18:30:00 | 2023-06-15 18:30:00 (6 rows) ``` **Compute neighboring H3 cells** For cells with high demand, you might want to identify neighboring cells to recommend the areas to cover.
The `h3_grid_disk` function can be used to fetch neighboring cells within `k` distance from the given cell: ```sql WITH top_cell AS ( SELECT h3_lat_lng_to_cell(pickup_location, 9) AS h3_index, COUNT(*) AS pickup_count FROM ride_pickups GROUP BY 1 ORDER BY pickup_count DESC LIMIT 1 ) SELECT h3_cell_to_lat_lng(neighbor) AS neighbor_centroid FROM top_cell, h3_grid_disk(h3_index, 1) AS neighbor WHERE neighbor != h3_index; ``` This query identifies the hexagon cell for the top pickup location and then fetches its neighboring cells. ```text neighbor_centroid ----------------------------------------- (-73.98431385752089,40.76847107223484) (-73.98634907959108,40.76577167962788) (-73.984106944923,40.7631879413235) (-73.9798298121748,40.76330338643407) (-73.97779433265362,40.766002576302085) (-73.98003624329262,40.7685865237991) (6 rows) ``` ## PostGIS SFCGAL PostGIS SFCGAL provides advanced 2D and 3D spatial operations using the SFCGAL library. It's useful for complex geometric calculations, 3D operations, and working with solid objects. ### Enable the PostGIS SFCGAL extension Enable the extension by running the following SQL statement: ```sql CREATE EXTENSION IF NOT EXISTS postgis_sfcgal CASCADE; ``` ### Example usage We will illustrate the use of SFCGAL to perform some urban planning tasks. **Create a table with building data** ```sql CREATE TABLE buildings ( id SERIAL PRIMARY KEY, name TEXT, height FLOAT, footprint GEOMETRY(POLYGON, 4326) ); -- Insert sample data (simplified for brevity) INSERT INTO buildings (name, height, footprint) VALUES ('Office Tower', 100, ST_GeomFromText('POLYGON((0 0, 0 50, 30 50, 30 0, 0 0))', 4326)), ('Shopping Mall', 20, ST_GeomFromText('POLYGON((100 0, 100 80, 150 80, 150 0, 100 0))', 4326)), ('Residential Block', 45, ST_GeomFromText('POLYGON((200 0, 200 40, 240 40, 240 0, 200 0))', 4326)); ``` This query creates a table to store building footprints and heights. **Use SFCGAL to calculate volumes** We can use SFCGAL to calculate the volume of buildings by extruding their footprints: ```sql SELECT name, height, ST_Area(footprint) AS base_area, ST_Volume(ST_Extrude(footprint, 0, 0, height)) AS volume FROM buildings; ``` This query calculates the volume of each building by extruding its 2D footprint to its height, and then calculating the volume of the resulting 3D object. ```text name | height | base_area | volume -------------------+--------+-----------+-------- Office Tower | 100 | 1500 | 150000 Shopping Mall | 20 | 4000 | 80000 Residential Block | 45 | 1600 | 72000 (3 rows) ``` **Use SFCGAL to perform 3D intersection** SFCGAL can be used to perform 3D intersections. For example, an important urban planning task is to examine how buildings might obstruct views from one another. We can use SFCGAL to create 3D models of our buildings and then check for intersections between these models and sight lines.
```sql WITH building_centroids AS ( SELECT id, name, ST_Centroid(footprint) AS centroid FROM buildings ), sight_lines AS ( SELECT a.id AS id_a, a.name AS name_a, b.id AS id_b, b.name AS name_b, ST_MakeLine(a.centroid, b.centroid) AS sight_line FROM building_centroids a CROSS JOIN building_centroids b WHERE a.id < b.id ) SELECT s.name_a, s.name_b, CASE WHEN EXISTS ( SELECT 1 FROM buildings c WHERE c.id NOT IN (s.id_a, s.id_b) AND ST_3DIntersects( ST_Extrude(c.footprint, 0, 0, c.height), ST_Extrude(s.sight_line, 0, 0, GREATEST( (SELECT height FROM buildings WHERE id = s.id_a), (SELECT height FROM buildings WHERE id = s.id_b) )) ) ) THEN 'Potential view obstruction' ELSE 'Clear view' END AS view_status FROM sight_lines s; ``` This query does the following: 1. It creates 3D models of all buildings using `ST_Extrude`. 2. For each pair of buildings, it creates a line from the center of one building to the center of another, representing a potential sight line. 3. It uses `ST_3DIntersects` to check if this sight line intersects with any 3D building model (other than the buildings at the endpoints of the line). 4. If there's an intersection, it indicates a potential view obstruction. It returns the following output: ```text name_a | name_b | view_status ---------------+-------------------+---------------------------- Office Tower | Shopping Mall | Clear view Office Tower | Residential Block | Potential view obstruction Shopping Mall | Residential Block | Clear view (3 rows) ``` This example demonstrates how SFCGAL's 3D capabilities can be used to analyze spatial relationships between buildings in three dimensions, which is useful for urban planning and architectural design. ## PostGIS Tiger Geocoder PostGIS Tiger Geocoder provides address normalization and geocoding functionality using TIGER (Topologically Integrated Geographic Encoding and Referencing) data. This extension is useful for address validation, normalization, and conversion of addresses to geographic coordinates. ### Enable the PostGIS Tiger Geocoder extension Enable the extension by running the following SQL statement: ```sql CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder CASCADE; ``` ### Example usage **Use Tiger Geocoder to normalize an address** Address normalization is crucial for ensuring consistency in address data. We can use the `normalize_address` function to standardize address formats. ```sql WITH addresses AS ( SELECT '123 Main St, New York, NY 10001' AS address UNION ALL SELECT '1600 Pennsylvania Avenue, Washington, DC' UNION ALL SELECT '100 Universal City Plaza, Universal City, CA 91608' ) SELECT (normalize_address(address)).* FROM addresses; ``` This query returns a normalized version of the input addresses. ```text address | predirabbrev | streetname | streettypeabbrev | postdirabbrev | internal | location | stateabbrev | zip | parsed | zip4 | address_alphanumeric ---------+--------------+----------------+------------------+---------------+----------+----------------+-------------+-------+--------+------+---------------------- 123 | | Main | St | | | New York | NY | 10001 | t | | 123 1600 | | Pennsylvania | Ave | | | Washington | DC | | t | | 1600 100 | | Universal City | Plz | | | Universal City | CA | 91608 | t | | 100 (3 rows) ``` ## Conclusion These examples provide a quick introduction to using other extensions in the PostGIS ecosystem. They can significantly expand the geospatial capabilities of your Neon Postgres database. For further information, refer to the official documentation for each extension. 
## Resources - [pgrouting Documentation](https://docs.pgrouting.org/) - [H3 Postgres Reference](https://github.com/zachasme/h3-pg/blob/main/docs/api.md) - [PostGIS SFCGAL Reference](https://postgis.net/docs/manual-dev/reference_sfcgal.html) - [PostGIS Tiger Geocoder Documentation](https://postgis.net/docs/Extras.html#Tiger_Geocoder) --- # Source: https://neon.com/llms/extensions-postgis.txt # The postgis extension > The document details the installation and usage of the PostGIS extension within Neon, enabling spatial database capabilities for handling geographic objects in PostgreSQL databases. ## Source - [The postgis extension HTML](https://neon.com/docs/extensions/postgis): The original HTML version of this documentation The `postgis` extension provides support for spatial data - coordinates, maps, and polygons, encompassing geographical and location-based information. It introduces new data types, functions, and operators to manage and analyze spatial data effectively. This guide introduces you to the `postgis` extension - how to enable it, store and query spatial data, and perform geospatial analysis with real-world examples. Geospatial data is crucial in fields like urban planning, environmental science, and logistics. **Note**: PostGIS is an open-source extension for Postgres that can be installed on any Neon Project using the instructions below. Detailed installation instructions and compatibility information can be found at [PostGIS Documentation](https://postgis.net/documentation/). For information about PostGIS-related extensions, including `pgrouting`, H3_PostGIS, PostGIS SFCGAL, and PostGIS Tiger Geocoder, see [PostGIS-related extensions](https://neon.com/docs/extensions/postgis-related-extensions). **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date information. ## Enable the `postgis` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon. ```sql CREATE EXTENSION IF NOT EXISTS postgis; ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). ## Example usage **Create a table with spatial data** Suppose you're managing a city's public transportation system. You can create a table to store the locations of bus stops. ```sql CREATE TABLE bus_stops ( id SERIAL PRIMARY KEY, name VARCHAR(255), location GEOGRAPHY(Point) ); ``` Here, the `location` column is of type `GEOGRAPHY(Point)`, which is a spatial data type provided by the `postgis` extension and used to store points on the Earth's surface. **Inserting data** Data can be inserted into the table using regular `INSERT` statements. ```sql INSERT INTO bus_stops (name, location) VALUES ('Main St & 3rd Ave', ST_Point(-73.935242, 40.730610)), ('Elm St & 5th Ave', ST_Point(-73.991070, 40.730824)); ``` The `ST_Point` function is used to create a point from the specified longitude and latitude. **Querying spatial data** Now, we can perform spatial queries using the built-in functions provided by `PostGIS`. For example, below we try to find points within a certain distance from a reference point.
Query: ```sql SELECT name FROM bus_stops WHERE ST_DWithin(location, ST_Point(-73.95, 40.7305)::GEOGRAPHY, 2000); ``` This query returns the following: ```text | name | |--------------------| | Main St & 3rd Ave | ``` The `ST_DWithin` function returns true if the distance between two points is less than or equal to the specified distance (when used with the `GEOGRAPHY` type, the unit is meters). ## Spatial data types PostGIS extends Postgres data types to handle spatial data. The primary spatial types are: - **GEOMETRY**: A flexible type for spatial data, supporting various shapes. It models shapes in the Cartesian coordinate plane. Each `GEOMETRY` value is also associated with a spatial reference system (SRS), which defines the coordinate system and units of measurement. - **GEOGRAPHY**: Specifically designed for large-scale spatial operations on the Earth's surface, factoring in the Earth's curvature. The coordinates for a `GEOGRAPHY` shape are specified in degrees of longitude and latitude. The actual shapes are stored as a set of coordinates. For example, a point is stored as a pair of coordinates, a line as a set of points, and a polygon as a set of lines. ## Longer example PostGIS provides a number of other functions for spatial analysis - area, distance, intersection, and more. To illustrate, we'll create a dataset representing a small set of landmarks and roads in a fictional city and run spatial queries on it. **Creating the test dataset** ```sql CREATE TABLE landmarks ( id SERIAL PRIMARY KEY, name VARCHAR(255), location GEOMETRY(Point) ); CREATE TABLE roads ( id SERIAL PRIMARY KEY, name VARCHAR(255), path GEOMETRY(LineString) ); INSERT INTO landmarks (name, location) VALUES ('Park', ST_Point(100, 200)), ('Museum', ST_Point(200, 300)), ('Library', ST_Point(300, 200)); INSERT INTO roads (name, path) VALUES ('Main Street', ST_MakeLine(ST_Point(100, 200), ST_Point(200, 300))), ('Second Street', ST_MakeLine(ST_Point(200, 300), ST_Point(300, 200))); ``` **Nearest landmark to a given point** Finding the nearest places to a given point is a common spatial query. We can use the `ST_Distance` function to find the distance between two points and order the results by distance. ```sql SELECT name, ST_Distance(location, ST_GeomFromText('POINT(150 250)')) AS distance FROM landmarks ORDER BY distance LIMIT 1; ``` This query returns the following: ```text | name | distance | |--------|----------| | Park | 70.7107 | ``` **Intersection of Roads** We can use the `ST_Intersects` function to find if two roads intersect. To ensure we don't get duplicate pairs of roads, we filter out pairs where the first road has a higher `id` than the second road. ```sql SELECT a.name AS name_a, b.name AS name_b FROM roads a, roads b WHERE a.id < b.id AND ST_Intersects(a.path, b.path); ``` This query returns the following: ```text | name_a | name_b | |----------------|----------------| | Main Street | Second Street | ``` **Buffer zone around a landmark** Say, the municipal council wants to create a buffer zone of 50 units around landmarks and check which roads intersect these zones. `ST_Buffer` computes an area around the given point with the specified radius.
```sql SELECT l.name AS landmark, r.name AS road FROM landmarks l, roads r WHERE ST_Intersects(r.path, ST_Buffer(l.location, 50)); ``` This query returns the following: ```text | landmark | road | |----------|---------------| | Park | Main Street | | Museum | Main Street | | Museum | Second Street | | Library | Second Street | ``` **Line of Sight Between Landmarks** To check if there's a direct line of sight (no roads intersecting) between two landmarks, we can combine two `postgis` functions. ```sql SELECT 'No direct line of sight' AS info FROM landmarks l1, landmarks l2, roads r WHERE l1.name = 'Park' AND l2.name = 'Library' AND ST_Intersects(ST_MakeLine(l1.location, l2.location), r.path) LIMIT 1; ``` This query returns the following: ```text | info | |--------------------------| | No direct line of sight | ``` This tells us there's no direct line of sight between the Park and the Library. ## Performance considerations When working with PostGIS, thinking about performance is crucial, especially when dealing with large datasets or complex spatial queries. ### Indexing **GiST** (Generalized Search Tree) is the default spatial index type in PostGIS. GiST indexes are well-suited for multidimensional data, like points, lines, and polygons, and can significantly improve query performance, especially for spatial search operations and joins. ```sql CREATE INDEX spatial_index_name ON landmarks USING GIST(location); ``` ### Query optimization - **Unnecessary Casting**: `GEOMETRY` and `GEOGRAPHY` are the two primary data types in `postgis`, and a lot of functions are overloaded to work with both. However, casting between the two types can be expensive, so it's best to store data in the more frequently used type. - **Use Appropriate Precision**: Reducing the precision of coordinates can often improve performance without significantly impacting the results. ## Conclusion These examples provide a quick introduction to handling and analyzing spatial data in PostgreSQL. We saw how to create tables with spatial data, insert data, and perform spatial queries using the `postgis` extension. It offers a powerful set of tools, with functions for calculating distances, identifying spatial relationships, and aggregating spatial data. ## Resources - [PostGIS Documentation](https://postgis.net/documentation) - [PostGIS Intro Workshop](https://postgis.net/workshops/postgis-intro/) --- # Source: https://neon.com/llms/extensions-postgres_fdw.txt # The postgres_fdw extension > The document explains how to use the postgres_fdw extension in Neon to enable foreign data wrapper functionality, allowing users to access and manipulate data stored in external PostgreSQL databases. ## Source - [The postgres_fdw extension HTML](https://neon.com/docs/extensions/postgres_fdw): The original HTML version of this documentation The `postgres_fdw` (Foreign Data Wrapper) extension provides a powerful and standards-compliant way to access data stored in external Postgres databases from your Neon project. For compliance or regulatory reasons, you might need to keep sensitive data on-premises or within a specific jurisdiction; `postgres_fdw` lets you query this data directly from your Neon database without migrating it, maintaining data residency. This enables you to leverage Neon's features while adhering to data storage policies. This simplifies data integration, enables cross-database querying, and allows you to build applications that seamlessly interact with data across different Postgres deployments.
This guide will walk you through the essentials of using the `postgres_fdw` extension in Neon. You'll learn how to enable the extension, establish connections to remote Postgres servers, define foreign tables that map to tables on those servers, and execute queries that span across your Neon database and remote instances. We will also cover important considerations for performance and security when working with `postgres_fdw`. **Note**: `postgres_fdw` is a core Postgres extension that can be installed on any Neon project using the instructions below. It provides a standardized way to access external Postgres databases and is widely used for data integration and cross-database querying. **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## Enable the `postgres_fdw` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS postgres_fdw; ``` ## Key concepts Before diving into the practical steps, let's understand the key components involved in using `postgres_fdw`: - **Foreign server:** Represents the connection details to the external Postgres server. This includes information like the host, port, and database name of the remote server. - **User mapping:** Defines the authentication credentials used to connect to the foreign server. This maps a local Neon user to a user on the remote server. - **Foreign table:** A locally defined object in your Neon database that represents a table located on the foreign server. Queries against the foreign table are transparently executed on the remote server. ## Connecting to a remote Postgres database The process of connecting to a remote Postgres database involves two main steps: creating a foreign server and setting up a user mapping. ### Create a foreign server The `CREATE SERVER` command is used to define the connection parameters for the remote Postgres server. ```sql CREATE SERVER my_remote_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '<hostname>', port '<port>', dbname '<dbname>'); ``` **Important**: When setting up `postgres_fdw` with a Neon database as the foreign server, make sure to use the hostname from an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string). Pooled connection strings will result in connection errors. You can find the unpooled connection string in your project dashboard by clicking the **Connect** button and ensuring the **Connection pooling** toggle is disabled. Replace the placeholders with the actual details of your remote Postgres server: - `<hostname>`: The hostname or IP address of the remote server. - `<port>`: The port number the remote Postgres server is listening on (usually 5432). - `<dbname>`: The name of the database on the remote server you want to access. For example: ```sql CREATE SERVER production_db FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'db.example.com', port '5432', dbname 'analytics'); ``` ### Create a user mapping The `CREATE USER MAPPING` command specifies the credentials to use when connecting to the foreign server. This maps a user in your Neon database to a user on the remote server.
```sql CREATE USER MAPPING FOR <local_user> SERVER my_remote_server OPTIONS (user '<remote_user>', password '<remote_password>'); ``` Replace the placeholders with the appropriate values: - `<local_user>`: The username of the user in your Neon database that will be accessing the foreign server. Use `PUBLIC` if you want to allow all users to access the foreign server with the same credentials. - `my_remote_server`: The name of the foreign server you created in the previous step. - `<remote_user>`: The username on the remote Postgres server. - `<remote_password>`: The password for the remote user. For example, to map the current Neon user to the `read_only_user` user on the `production_db` server: ```sql CREATE USER MAPPING FOR CURRENT_USER SERVER production_db OPTIONS (user 'read_only_user', password 'secure_password'); ``` ## Defining foreign tables Once the connection to the remote server is established, you need to define foreign tables in your Neon database that correspond to the tables you want to access on the remote server. There are two primary ways to do this: creating foreign tables manually or importing the schema. ### Create foreign tables manually The `CREATE FOREIGN TABLE` command allows you to explicitly define the structure of a remote table in your Neon database. You need to specify the column names and data types, which must match the remote table's schema. ```sql CREATE FOREIGN TABLE <foreign_table_name> ( <column_name> <data_type>, <column_name> <data_type>, ... ) SERVER my_remote_server OPTIONS (schema_name '<remote_schema_name>', table_name '<remote_table_name>'); ``` Replace the placeholders with the appropriate details: - `<foreign_table_name>`: The name you want to give the foreign table in your Neon database. - `<column_name>` and `<data_type>`: The names and data types of the columns, matching the remote table. - `my_remote_server`: The name of the foreign server you created. - `<remote_schema_name>`: The schema name where the table resides on the remote server (often `public`). - `<remote_table_name>`: The name of the table on the remote server. For example, to create a foreign table named `remote_users` that maps to the `users` table in the `public` schema of the `production_db` server: ```sql CREATE FOREIGN TABLE remote_users ( id integer, username text, email text, created_at timestamp with time zone ) SERVER production_db OPTIONS (schema_name 'public', table_name 'users'); ``` ### Import foreign schema The `IMPORT FOREIGN SCHEMA` command provides a convenient way to automatically create foreign tables for all or a subset of tables within a schema on the remote server. ```sql IMPORT FOREIGN SCHEMA <remote_schema_name> FROM SERVER my_remote_server INTO <local_schema_name>; ``` - `<remote_schema_name>`: The name of the schema on the remote server you want to import. - `my_remote_server`: The name of the foreign server. - `<local_schema_name>`: The schema in your Neon database where the foreign tables will be created. If the schema doesn't exist, you'll need to create it first. For example, to import all tables from the `analytics` schema of the `production_db` server into a local schema named `imported_data`: ```sql CREATE SCHEMA IF NOT EXISTS imported_data; IMPORT FOREIGN SCHEMA analytics FROM SERVER production_db INTO imported_data; ``` You can also selectively import tables using the `LIMIT TO` or `EXCEPT` clauses: **Import specific tables:** ```sql IMPORT FOREIGN SCHEMA analytics LIMIT TO (users, products) FROM SERVER production_db INTO imported_data; ``` **Import all tables except specific ones:** ```sql IMPORT FOREIGN SCHEMA analytics EXCEPT (staging_table, temporary_data) FROM SERVER production_db INTO imported_data; ``` ## Querying foreign tables Once foreign tables are defined, you can query them using standard SQL `SELECT` statements, just like regular local tables.
The `postgres_fdw` extension handles the communication with the remote server and retrieves the data transparently. To select users created within the last week from the `remote_users` table: ```sql SELECT * FROM remote_users WHERE created_at > NOW() - INTERVAL '1 week'; ``` You can perform joins between local tables and foreign tables, aggregate data from remote sources, and use any other SQL features supported by Postgres. ```sql SELECT r.username, o.order_id, o.order_date FROM remote_users r JOIN imported_data.orders o ON r.id = o.user_id WHERE o.order_date > '2025-01-01'; ``` ## Modifying data in foreign tables `postgres_fdw` also supports data modification operations on foreign tables, including `INSERT`, `UPDATE`, and `DELETE`. However, it's important to understand the limitations and potential performance implications. **Inserting Data:** ```sql INSERT INTO remote_users (id, username, email) VALUES (101, 'newuser', 'new@example.com'); ``` **Updating Data:** ```sql UPDATE remote_users SET email = 'updated@example.com' WHERE id = 101; ``` **Deleting Data:** ```sql DELETE FROM remote_users WHERE id = 101; ``` **Note**: `postgres_fdw` currently lacks full support for `INSERT` statements with an `ON CONFLICT DO UPDATE` clause. However, the `ON CONFLICT DO NOTHING` clause is supported. ## Optimizing queries with `postgres_fdw` Querying foreign tables can sometimes be slower than querying local tables due to network latency and the overhead of communicating with the remote server. Here are some strategies to optimize performance: - **`use_remote_estimate`:** You can instruct `postgres_fdw` to request cost estimates from the remote server. This can help the query planner make better decisions, especially for complex queries. Set this option at the server or table level: ```sql ALTER SERVER production_db OPTIONS (ADD use_remote_estimate 'true'); ALTER FOREIGN TABLE remote_users OPTIONS (ADD use_remote_estimate 'true'); ``` - **`ANALYZE` Foreign Tables:** Running `ANALYZE` on foreign tables collects statistics about the remote data and stores them locally. This helps the query planner generate more efficient execution plans. However, remember that these statistics can become stale if the remote data changes frequently. ```sql ANALYZE remote_users; ``` - **Materialized Views:** For frequently accessed data from foreign tables, consider creating materialized views in your Neon database. Materialized views store a snapshot of the remote data locally, which can significantly improve query performance. You can refresh materialized views periodically to keep the data relatively up-to-date. ```sql CREATE MATERIALIZED VIEW local_users_snapshot AS SELECT * FROM remote_users WHERE created_at > NOW() - INTERVAL '1 month'; REFRESH MATERIALIZED VIEW local_users_snapshot; ``` - **Filtering and Projections:** When querying foreign tables, try to apply filters (`WHERE` clause) and select only the necessary columns to reduce the amount of data transferred over the network. ## Advanced `postgres_fdw` functions The `postgres_fdw` extension provides several utility functions to manage connections established with remote Postgres servers. These functions allow you to monitor active connections and explicitly disconnect from foreign servers. - **`postgres_fdw_get_connections()`:** This function provides insights into the active connections established by `postgres_fdw` from your current Neon session to remote servers.
It returns a set of records, with each record containing the foreign server name and a boolean indicating the validity of the connection. A connection is considered invalid if the foreign server or user mapping associated with it has been changed or dropped while the connection is being used in the current transaction. Invalid connections will be closed at the end of the transaction. ```sql SELECT * FROM postgres_fdw_get_connections() ORDER BY server_name; ``` The output will resemble: ```text server_name | valid -------------+------- production_db | t staging_db | f (2 rows) ``` In this example, there are two open connections. The connection to `production_db` is valid (`t`), while the connection to `staging_db` is invalid (`f`), likely due to a change on the remote server or user mapping. - **`postgres_fdw_disconnect(server_name text)`:** This function allows you to explicitly close open connections established by `postgres_fdw` to a specific foreign server. It takes the name of the foreign server as an argument. Note that if there are multiple connections to the same server using different user mappings, this function will attempt to disconnect all of them. If any of the connections to the specified server are currently in use within the ongoing transaction, they will not be disconnected, and warning messages will be issued. The function returns `true` if at least one connection was successfully disconnected and `false` otherwise. An error is raised if no foreign server with the given name is found. ```sql SELECT postgres_fdw_disconnect('staging_db'); ``` The output will be: ```text postgres_fdw_disconnect ------------------------- t (1 row) ``` - **`postgres_fdw_disconnect_all()`:** This function provides a way to close all open connections established by `postgres_fdw` from your current Neon session to any foreign server. Similar to `postgres_fdw_disconnect`, connections in use within the current transaction will not be closed, and warnings will be generated. The function returns `true` if at least one connection was disconnected and `false` otherwise. These functions offer greater control over `postgres_fdw` connections, allowing you to manage resources and ensure connections are closed when no longer needed. Using `postgres_fdw_get_connections` can be helpful for monitoring and troubleshooting connection issues, while the disconnect functions can be used for cleanup or in scenarios where you need to force a reconnection with updated credentials or server configurations. ## Security considerations When working with `postgres_fdw`, security is paramount. Keep the following points in mind: - **Network security:** Ensure that network access is properly configured to allow connections between your Neon project and the remote Postgres server. Firewalls and security groups might need adjustments. - **Principle of Least Privilege:** Grant only the necessary permissions to the user mapped to the remote database. Avoid using superuser accounts for `postgres_fdw` connections. - **SSL encryption:** Ensure that the connection to the remote Postgres server is encrypted using SSL. This is often the default behavior for Postgres connections, but it's worth verifying the configuration. ## `postgres_fdw` vs. `dblink` While both `postgres_fdw` and `dblink` allow you to connect to remote Postgres databases, `postgres_fdw` is generally the preferred choice for the following reasons: - **SQL standards compliance:** `postgres_fdw` adheres more closely to SQL standards for accessing external data. 
- **Performance:** `postgres_fdw` often provides better performance due to its more efficient implementation. - **Feature set:** `postgres_fdw` offers a richer feature set, including support for data modification operations and more sophisticated query planning. - **Maintainability:** Using a standardized approach like `postgres_fdw` can lead to more maintainable and portable code. `dblink` might be suitable for simple, one-off tasks, but for robust and scalable integration with remote Postgres databases, `postgres_fdw` is the recommended solution. ## Conclusion The `postgres_fdw` extension is a valuable tool for Neon users who need to access and integrate data from remote Postgres databases. By establishing connections to foreign servers, defining foreign tables, and executing queries that span across local and remote databases, you can build powerful applications that leverage data from multiple sources seamlessly. ## Reference - [PostgreSQL Foreign Data Wrappers](https://www.postgresql.org/docs/current/postgres-fdw.html) - [PostgreSQL `dblink` Documentation](https://www.postgresql.org/docs/current/dblink.html) --- # Source: https://neon.com/llms/extensions-postgresql-anonymizer.txt # The anon extension > The document details the integration and usage of the anon extension in Neon, enabling PostgreSQL databases to anonymize sensitive data efficiently. ## Source - [The anon extension HTML](https://neon.com/docs/extensions/postgresql-anonymizer): The original HTML version of this documentation The `anon` extension ([PostgreSQL Anonymizer](https://postgresql-anonymizer.readthedocs.io)) provides data masking and anonymization capabilities to protect sensitive data in Postgres databases. It helps protect personally identifiable information (PII) and other sensitive data, facilitating compliance with regulations such as [GDPR](https://gdpr-info.eu/). **Note**: This extension comes from the [PostgreSQL Anonymizer](https://postgresql-anonymizer.readthedocs.io) open source project (`postgresql_anonymizer`). This is distinct from other tools such as `pg_anon`. The extension is installed using `CREATE EXTENSION anon`. **Tip**: Looking for a practical guide? For complete step-by-step workflows on anonymizing data in Neon branches, including manual procedures and GitHub Actions automation, see [data anonymization](https://neon.com/docs/workflows/data-anonymization). ## Enable the extension **Note**: This extension is currently [experimental](https://neon.com/docs/extensions/pg-extensions#experimental-extensions) and may change in future releases. When using the Neon Console or API for anonymization workflows, the extension is enabled automatically. It can also be enabled manually using SQL commands. ### Enable via SQL When working with SQL-based workflows (such as using `psql` or other SQL clients), enable the `anon` extension in your Neon database by following these steps: 1. Connect to your Neon database using either the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or an SQL client like [psql](https://neon.com/docs/connect/query-with-psql-editor) 2. Enable experimental extensions: ```sql SET neon.allow_unstable_extensions='true'; ``` 3. Install the extension: ```sql CREATE EXTENSION IF NOT EXISTS anon; ``` **Tip**: When using the Neon Console or API to create branches, the extension is enabled automatically. See the [data anonymization workflow guide](https://neon.com/docs/workflows/data-anonymization) for details.
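To confirm that the extension is installed, you can query the standard `pg_extension` catalog. This is plain Postgres, not specific to `anon`:

```sql
-- Lists the installed extension and its version
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'anon';
```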
## Masking rules

Masking rules define which data to mask and how to mask it using SQL syntax. These rules are applied using `SECURITY LABEL` SQL commands and stored within the database schema to implement the privacy by design principle.

## Masking functions

PostgreSQL Anonymizer provides [built-in functions](https://postgresql-anonymizer.readthedocs.io/en/latest/masking_functions/) for different anonymization requirements, including but not limited to:

| Function Type | Description | Example |
| ------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| Faking | Generate realistic data | `anon.fake_first_name()` and `anon.lorem_ipsum()` |
| Pseudonymization | Create consistent and reversible fake data | `anon.pseudo_email(seed)` |
| Randomization | Generate random values | `anon.random_int_between(10, 100)` and `anon.random_in_enum(enum_column)` |
| Partial scrambling | Hide portions of strings | `anon.partial(ip_address, 8, 'XXX.XXX', 0)` would change `192.168.1.100` to `192.168.XXX.XXX` |
| Nullification | Replace with static values or `NULL` | `MASKED WITH VALUE 'CONFIDENTIAL'` |
| Noise addition | Alter numerical values while maintaining distribution | `anon.noise(salary, 0.1)` adds `+/- 10%` noise to the `salary` column |
| Generalization | Replace specific values with broader categories | `anon.generalize_int4range(age, 10)` would change `54` to `[50,60)` |

## Static masking

Static masking permanently modifies the original data in your tables. This approach is useful for creating anonymized copies of data when:

- Migrating production data to development branches
- Creating sanitized datasets for testing
- Archiving data with sensitive information removed
- Distributing data to third parties

### Branch operations and static masking

When using Neon's branch features with static masking:

- Creating a child branch copies all data as-is from the parent
- Resetting a branch from the parent replaces all branch data with the parent's current state
- In both cases, any previous anonymization is lost and must be reapplied

## Practical examples

For complete implementation examples showing how to apply these masking functions in real workflows, see the [data anonymization guide](https://neon.com/docs/workflows/data-anonymization), which covers:

- Creating and anonymizing development branches
- Applying different masking strategies to protect sensitive data
- Automating anonymization with GitHub Actions
- Best practices and safety tips

## Limitations

- Neon currently only supports static masking with this extension
- With static masking, branch reset operations restore original data, requiring anonymization to be run again
- Additional `pg_catalog` functions cannot be declared as `TRUSTED` in Neon's implementation

## Conclusion

This extension provides a toolkit for protecting sensitive data in Postgres databases. By defining appropriate masking rules, you can create anonymized datasets that maintain usability while protecting individual privacy.
## Reference

- [Data anonymization workflow guide](https://neon.com/docs/workflows/data-anonymization) - Practical guide for anonymizing data in Neon branches
- [PostgreSQL Anonymizer Repository](https://gitlab.com/dalibo/postgresql_anonymizer)
- [Official Documentation](https://postgresql-anonymizer.readthedocs.io/en/latest/)
- [Masking Functions Reference](https://postgresql-anonymizer.readthedocs.io/en/latest/masking_functions/)

---

# Source: https://neon.com/llms/extensions-tablefunc.txt

# The tablefunc extension

> The document details the installation and usage of the tablefunc extension in Neon, enabling users to perform advanced table transformations such as pivoting and crosstab operations within their databases.

## Source

- [The tablefunc extension HTML](https://neon.com/docs/extensions/tablefunc): The original HTML version of this documentation

The `tablefunc` extension for Postgres provides a powerful set of functions for transforming data directly within your database. Its primary capabilities include creating pivot tables (also known as cross-tabulations) to reshape data, generating sets of normally distributed random numbers, and querying hierarchical or tree-like data structures. For instance, you can use `tablefunc` to transform a list of quarterly product sales into a summary table where each product is a row and each quarter is a column. Or, you could explore an employee reporting structure to visualize an organization chart.

## Enable the `tablefunc` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS tablefunc;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Key functions and usage

The `tablefunc` extension provides the following key functions:

1. **`normal_rand()`**: Generates a series of random numbers following a normal (Gaussian) distribution.
2. **`crosstab()`**: Transforms data from a "long" format to a "wide" format, creating pivot tables.
3. **`connectby()`**: Traverses hierarchical data, such as organizational charts or bill-of-materials structures.

Let's explore each function in detail.

### `normal_rand()`

The `normal_rand()` function is useful for creating sample datasets that mimic real-world measurements, which often cluster around an average value (mean) with a certain spread (standard deviation).

**Function signature:**

```sql
normal_rand(count INTEGER, mean FLOAT8, stddev FLOAT8) RETURNS SETOF FLOAT8
```

- `count`: The number of random values to generate.
- `mean`: The central value (average) of the distribution.
- `stddev`: The standard deviation, indicating the spread of the numbers.

**Example:**

To generate 5 random numbers with a mean of 10.0 and a standard deviation of 2.0:

```sql
SELECT * FROM normal_rand(5, 10.0, 2.0);
```

**Example output:**

```text
    normal_rand
--------------------
   9.32020692360359
 11.495399206878934
  7.738467056884886
  9.672348520651616
  7.734973342540705
(5 rows)
```

> Output will vary each time you run the function due to the random nature of the data.

**Use case:** Populating tables with realistic-looking sample data for testing or analysis.
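Building on this use case, here is a minimal sketch of populating a sample table with simulated sensor readings; the table and column names are illustrative:

```sql
-- Create a table of 1,000 simulated temperature readings clustered
-- around 21.5 degrees with a standard deviation of 0.8.
CREATE TABLE simulated_readings AS
SELECT
  row_number() OVER () AS reading_id, -- sequential id for each reading
  reading AS temperature
FROM normal_rand(1000, 21.5, 0.8) AS reading;
```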
### `crosstab()` The `crosstab()` function is used for reshaping data, particularly for creating pivot tables. It allows you to summarize and reorganize data by transforming rows into columns, making it easier to analyze and visualize. #### Basic `crosstab()` (single SQL argument) This version of `crosstab` takes a single SQL query string as input. This query must produce exactly three columns: row identifier, category, and value. Consider a `product_sales_long` table: | product | quarter | sales | | :------ | :------ | :---- | | Apple | Q1 | 100 | | Apple | Q2 | 120 | | Banana | Q1 | 80 | | Apple | Q3 | 110 | | Banana | Q2 | 95 | We want to transform it into: | product | Q1_sales | Q2_sales | Q3_sales | | :------ | :------- | :------- | :------- | | Apple | 100 | 120 | 110 | | Banana | 80 | 95 | (null) | **Query:** ```sql CREATE TABLE product_sales_long ( product TEXT, quarter TEXT, sales INT ); INSERT INTO product_sales_long (product, quarter, sales) VALUES ('Apple', 'Q1', 100), ('Apple', 'Q2', 120), ('Banana', 'Q1', 80), ('Apple', 'Q3', 110), ('Banana', 'Q2', 95); -- Using crosstab to pivot the product_sales_long table SELECT * FROM crosstab( 'SELECT product, quarter, sales FROM product_sales_long ORDER BY 1, 2' ) AS ct(product TEXT, Q1_sales INT, Q2_sales INT, Q3_sales INT); ``` **Breaking down the query:** 1. **`crosstab('source_sql_query_as_string')`**: The `source_sql_query_as_string` must return three columns: - **Row identifier**: Values in this column become distinct rows in the output (e.g., `product`). - **Category**: Values in this column become new column headers in the output (e.g., `quarter`). - **Value**: Values in this column populate the cells of the new pivot table (e.g., `sales`). Crucially, this source query **must** be sorted by the first column, then the second (`ORDER BY 1, 2`). This ensures `crosstab` processes data correctly (e.g., `Q1` comes before `Q2`). 2. **`AS ct(column_definitions)`**: - Because `crosstab` returns a generic `SETOF record`, you must explicitly define the structure of the output table. - `ct`: An alias for the resulting table. - `product TEXT`: Corresponds to the first column of the `source_sql_query`. Its data type should match. - `Q1_sales INT, Q2_sales INT, Q3_sales INT`: These are the new columns derived from the unique values in the 'category' (second) column of your `source_sql_query`. Their data types must match the 'value' (third) column of the `source_sql_query`. - If a row identifier/category combination doesn't exist in the source data (e.g., Banana for `Q3`), the corresponding cell in the pivot table will be `NULL`. - If the source data contains categories not defined in the `AS ct(...)` clause, those categories will be ignored. #### `crosstab()` with fixed columns (using two SQL queries) This version of `crosstab` is used when you know exactly which categories you want as your new columns, and you want them to appear in a specific order. It's perfect for reports where the column layout is fixed, even if some rows don't have data for every column. Imagine you have a table of `student_test_scores`: | student_name | subject | score | | :----------- | :------ | :---- | | Alice | Math | 90 | | Alice | Science | 85 | | Bob | Math | 78 | | Alice | English | 92 | | Bob | Science | 88 | | Carol | Math | 95 | | Carol | English | 89 | We want to transform this into a table where each student is a row, and their scores for 'Math', 'Science', 'English', and 'History' are in separate columns. 
**Desired Output:** | student_name | math_score | science_score | english_score | history_score | | :----------- | :--------- | :------------ | :------------ | :------------ | | Alice | 90 | 85 | 92 | _(null)_ | | Bob | 78 | 88 | _(null)_ | _(null)_ | | Carol | 95 | _(null)_ | 89 | _(null)_ | > Notice we want a 'History' column even if no one has a score for it yet – it will just show `(null)`. **Here's how we do it with `crosstab()`:** ```sql -- Create the student_test_scores table CREATE TABLE student_test_scores ( student_name TEXT, subject TEXT, score INT ); INSERT INTO student_test_scores (student_name, subject, score) VALUES ('Alice', 'Math', 90), ('Alice', 'Science', 85), ('Bob', 'Math', 78), ('Alice', 'English', 92), ('Bob', 'Science', 88), ('Carol', 'Math', 95), ('Carol', 'English', 89); -- Now, the crosstab query SELECT * FROM crosstab( -- Query 1: This is our source data. -- It needs: row_identifier, category_for_new_columns, value_for_cells 'SELECT student_name, subject, score FROM student_test_scores ORDER BY 1', -- IMPORTANT: Order by the row_identifier (student_name) -- Query 2: This query defines our new column headers, in the order we want them. -- It must return one column with the list of categories. $$SELECT s FROM unnest(ARRAY['Math', 'Science', 'English', 'History']) AS s$$ ) AS ct( student TEXT, -- This matches 'student_name' from Query 1 math_score INT, -- This matches 'Math' from Query 2 science_score INT, -- This matches 'Science' from Query 2 english_score INT, -- This matches 'English' from Query 2 history_score INT -- This matches 'History' from Query 2 ); ``` > `unnest()` is a Postgres function that expands an array into a set of rows. In this case, it generates the list of subjects to be used as column headers in the pivot table. The `ARRAY[...]` syntax creates an array of the specified values, and `unnest()` converts it into a set of rows. This allows you to dynamically define the categories for the pivot table based on the contents of the array. Learn more about the `unnest()` function here: [Expanding an array into rows](https://neon.com/docs/data-types/array#array-functions-and-operators:~:text=Expanding%20an%20array%20into%20rows) **How the `crosstab(source_sql, category_sql)` works:** 1. **`source_sql` (the first query string):** - This query fetches your raw data. - It must provide: 1. The column(s) that will identify each row in your final table (here, `student_name`). 2. The column whose values will become your new column headers (here, `subject`). 3. The column whose values will fill the cells of your new table (here, `score`). - It's very important to `ORDER BY` the row identifier column(s) (e.g., `ORDER BY student_name` or `ORDER BY 1`). 2. **`category_sql` (the second query string):** - This query's job is to produce a single column containing the exact list of categories you want as your new column headers. - The order of categories returned by this query determines the order of your new columns in the final pivot table. - In our example, `$$SELECT s FROM unnest(ARRAY['Math', 'Science', 'English', 'History']) AS s$$` provides the list: 'Math', then 'Science', then 'English', then 'History'. 3. **`AS ct(student TEXT, math_score INT, ...)`:** - This part defines the structure of your final output table. - The first column(s) here (`student TEXT`) must match the type and number of your row identifier columns from `source_sql`. - The following columns (`math_score INT`, `science_score INT`, etc.) 
must match, in order, the categories produced by `category_sql`. Their data type should match the `value` column from `source_sql` (the `score` column, which is `INT`). This two-argument version of `crosstab` is powerful because it guarantees your output table will always have the columns 'Math', 'Science', 'English', and 'History' in that order, filling in `(null)` where a student doesn't have a score for a particular subject. #### `crosstabN()` functions For common scenarios where the row identifier is text and you need a fixed number of text value columns (2, 3, or 4), `tablefunc` offers `crosstab2()`, `crosstab3()`, and `crosstab4()`. These are simplified wrappers around the main `crosstab` function, providing predefined output structures for common text-based pivot tables, saving you from writing the full `AS (...)` definition. These functions are most useful when your source query provides a text row identifier, text categories, and text values (or values castable to text). The `crosstabN` function then produces an output table with a `row_name TEXT` column and `N` additional `category_X TEXT` columns. For instance, if you use `crosstab3()`, the output table structure will implicitly be: `(row_name TEXT, category_1 TEXT, category_2 TEXT, category_3 TEXT)` No explicit `AS (...)` clause is needed. Remember that the source SQL query provided to `crosstabN` must still: 1. Return three columns: `row_identifier`, `category`, `value`. 2. Be sorted using `ORDER BY 1, 2`. 3. The `value` column (third column of the source query) should be `TEXT` or cast to `TEXT`, as it populates the `category_X TEXT` output columns. The `row_identifier` (first column) also populates the `row_name TEXT` output column. **Example using `crosstab3()`:** Let's use our `product_sales_long` table again: | product | quarter | sales | | :------ | :------ | :---- | | Apple | Q1 | 100 | | Apple | Q2 | 120 | | Banana | Q1 | 80 | | Apple | Q3 | 110 | | Banana | Q2 | 95 | To pivot this using `crosstab3()`, ensuring sales are treated as text for the output: ```sql SELECT * FROM crosstab3( $$SELECT product, quarter, sales::TEXT -- Cast sales to TEXT FROM product_sales_long ORDER BY 1, 2$$ -- Important: ORDER BY row_id, category ); ``` **Expected Output:** The output columns will be `row_name`, `category_1`, `category_2`, and `category_3`. The values from the `quarter` column (`Q1`, `Q2`, `Q3` in sorted order) will determine which `category_X` column receives the sales data. | row_name | category_1 | category_2 | category_3 | | :------- | :--------- | :--------- | :--------- | | Apple | 100 | 120 | 110 | | Banana | 80 | 95 | (null) | **Explanation:** - `crosstab3` automatically defines the output columns as `row_name TEXT`, `category_1 TEXT`, `category_2 TEXT`, and `category_3 TEXT`. - The `product` column from the source query populates the `row_name` output column. The sorted `quarter` values (`Q1`, `Q2`, `Q3`) correspond to `category_1`, `category_2`, and `category_3` respectively. - The `ORDER BY 1, 2` clause in the source query is essential for correct processing and mapping of quarter data to category columns. - `Banana` has sales data only for `Q1` and `Q2`, so its third value column (`category_3`, corresponding to `Q3` for Banana if it existed) is `NULL`. ### `connectby()` The `connectby()` function is designed to traverse tree-like or hierarchical data structures, such as product category trees, organizational charts, or bill-of-materials. 
Consider a `product_categories` table that defines a hierarchy of product categories:

| category_id | category_name | parent_category_id |
| :---------- | :------------ | :----------------- |
| 1 | Electronics | NULL |
| 2 | Computers | 1 |
| 3 | Laptops | 2 |
| 4 | Desktops | 2 |
| 5 | Phones | 1 |
| 6 | Smartphones | 5 |
| 7 | Books | NULL |
| 8 | Fiction | 7 |

We want to display the hierarchy starting from the 'Electronics' category (ID 1).

**Query:**

```sql
CREATE TABLE product_categories (
    category_id INT PRIMARY KEY,
    category_name TEXT,
    parent_category_id INT
);

INSERT INTO product_categories (category_id, category_name, parent_category_id) VALUES
(1, 'Electronics', NULL),
(2, 'Computers', 1),
(3, 'Laptops', 2),
(4, 'Desktops', 2),
(5, 'Phones', 1),
(6, 'Smartphones', 5),
(7, 'Books', NULL),
(8, 'Fiction', 7);

-- Using connectby to traverse the product category hierarchy
SELECT *
FROM connectby(
    'product_categories',  -- 1. Table name
    'category_id',         -- 2. Key field column name
    'parent_category_id',  -- 3. Parent key field column name
    '1',                   -- 4. Start row's key value (e.g., 'Electronics' category_id)
    0,                     -- 5. Maximum depth (0 for all levels)
    '>'                    -- 6. Branch delimiter string for the branch_path
) AS t(
    current_category_id INT, -- Output: Current item's key field
    parent_id INT,           -- Output: Parent item's key field
    level INT,               -- Output: Depth in the hierarchy (0 for start_with row)
    branch_path TEXT         -- Output: Text path from root to current item
);
```

**How `connectby()` works:**

- **Parameters:**
  1. `table_name TEXT`: Name of the table containing the hierarchy.
  2. `key_field TEXT`: Name of the column storing the unique ID for each item.
  3. `parent_key_field TEXT`: Name of the column storing the ID of the parent item.
  4. `start_with_value TEXT`: The `key_field` value of the item from which to start the traversal (must be provided as text).
  5. `max_depth INTEGER`: Maximum number of levels to traverse (0 means no limit).
  6. `branch_delimiter TEXT`: A string used to construct the `branch_path` output column.

- **Output Definition `AS t(...)`**: You must define the structure of the output table:
  - `key_field_alias`: The key of the current item. Its data type should match the `key_field` in the source table (e.g., `current_category_id INT`).
  - `parent_key_field_alias`: The key of the parent item. Its data type should match the `parent_key_field` (or `key_field`) in the source table (e.g., `parent_id INT`).
  - `level`: The depth of the current item in the hierarchy (0 for the starting item, 1 for its direct children, and so on).
  - `branch_path`: If the `branch_delimiter` argument is provided to `connectby`, this column will contain a text representation of the path from the starting item to the current item, using the specified delimiter.

**Example output:**

| current_category_id | parent_id | level | branch_path |
| :------------------ | :-------- | :---- | :---------- |
| 1 | (null) | 0 | 1 |
| 2 | 1 | 1 | 1>2 |
| 3 | 2 | 2 | 1>2>3 |
| 4 | 2 | 2 | 1>2>4 |
| 5 | 1 | 1 | 1>5 |
| 6 | 5 | 2 | 1>5>6 |

- This output shows `Electronics` (ID 1) at `level` 0 with a `branch_path` of `1`.
- `Computers` (ID 2) is a sub-category of `Electronics`, at `level` 1, with `branch_path` `1>2`.
- `Laptops` (ID 3) is a sub-category of `Computers`, at `level` 2, with `branch_path` `1>2>3`, and so on.

## Important considerations

- **`crosstab()` output definition**: You must always define the output columns and their types using the `AS (...)` clause when calling `crosstab()`.
The number and types of these columns must match what your pivoted data will look like. - **`crosstab()` category ordering**: The order of columns generated by the single-argument `crosstab` depends on the `ORDER BY` clause of your source query and the natural sort order of the category values. For explicit column ordering and to ensure all desired categories appear, use the two-argument version of `crosstab`. - **Data types**: Pay close attention to data types. The types defined in the `AS (...)` clause for `crosstab` must match the 'value' column of the source query (for the pivoted value columns) and the row identifier column(s). For `connectby`, the key and parent key alias types in the `AS t(...)` clause must match the source table's corresponding column types. ## Conclusion The `tablefunc` extension in Postgres is a powerful tool for reshaping and analyzing data. It provides essential functions like `normal_rand()` for generating random numbers, `crosstab()` for creating pivot tables, and `connectby()` for traversing hierarchical data structures. ## Resources - [PostgreSQL `tablefunc` documentation](https://www.postgresql.org/docs/current/tablefunc.html) --- # Source: https://neon.com/llms/extensions-timescaledb.txt # The timescaledb extension > The document details the integration and usage of the timescaledb extension within Neon, enabling users to efficiently manage time-series data in their databases. ## Source - [The timescaledb extension HTML](https://neon.com/docs/extensions/timescaledb): The original HTML version of this documentation `timescaledb` enables the efficient storage and retrieval of time-series data. Time-series data is a sequential collection of observations or measurements recorded over time. For example, IoT devices continuously generate data points with timestamps, representing measurements or events. `timescaledb` is designed to handle large volumes of time-stamped data and provides SQL capabilities on top of a time-oriented data model such as IoT data, sensor readings, financial market data, and other time-series datasets. This guide provides an introduction to the `timescaledb` extension. You'll learn how to enable the extension in Neon, create hypertables, run simple queries, and analyze data using `timescaledb` functions. Finally, you'll see how to delete data to free up space. **Note**: `timescaledb` is an open-source extension for Postgres that can be installed on any Neon Project using the instructions below. **Version availability:** The version of `timescaledb` available on Neon depends on the version of Postgres you select for your Neon project. - Postgres 14 - `timescaledb` 2.10.1 - Postgres 15 - `timescaledb` 2.10.1 - Postgres 16 - `timescaledb` 2.13.0 - Postgres 17 - `timescaledb` 2.17.1 _Only [Apache-2](https://docs.timescale.com/about/latest/timescaledb-editions/) licensed features are supported. Compression is not supported._ ## Enable the `timescaledb` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the Neon **SQL Editor** or from a client such as `psql` that is connected to Neon. ```sql CREATE EXTENSION IF NOT EXISTS timescaledb; ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). 
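If you want to confirm which `timescaledb` version was installed in your database, you can query the `pg_extension` catalog after enabling the extension:

```sql
-- Check the installed version of the timescaledb extension
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'timescaledb';
```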
## Create a hypertable

`timescaledb` hypertables are a high-level abstraction, extending traditional Postgres tables to manage temporal data more effectively. A hypertable simplifies the organization and retrieval of time-series information by providing built-in partitioning based on time intervals.

To begin with, create a SQL table for temperature data:

```sql
CREATE TABLE weather_conditions (
    time TIMESTAMP WITH TIME ZONE NOT NULL,
    device_id TEXT,
    temperature NUMERIC,
    humidity NUMERIC
);
```

Convert it to a hypertable using the [`create_hypertable`](https://docs.timescale.com/api/latest/hypertable/create_hypertable/) function:

```sql
SELECT create_hypertable('weather_conditions', 'time');
```

You should receive the following output:

```text
| create_hypertable               |
|---------------------------------|
| (3,public,weather_conditions,t) |
```

You can use both standard SQL commands and `timescaledb` functions (which will be covered later). To insert data into the `weather_conditions` table using standard SQL:

```sql
INSERT INTO weather_conditions VALUES
    (NOW(), 'weather-pro-000002', 72.0, 52.0),
    (NOW(), 'weather-pro-000003', 71.5, 51.5),
    (NOW(), 'weather-pro-000004', 73.0, 53.2);
```

To retrieve the data by time in descending order:

```sql
SELECT * FROM weather_conditions ORDER BY time DESC;
```

You should receive the following output:

```text
| time                          | device_id          | temperature | humidity |
|-------------------------------|--------------------|-------------|----------|
| 2024-01-15 13:30:27.464107+00 | weather-pro-000002 | 72.0        | 52.0     |
| 2024-01-15 13:30:27.464107+00 | weather-pro-000003 | 71.5        | 51.5     |
| 2024-01-15 13:30:27.464107+00 | weather-pro-000004 | 73.0        | 53.2     |
```

## Load weather data

You can use the [sample weather dataset from TimescaleDB](https://assets.timescale.com/docs/downloads/weather_small.tar.gz) and load it into your Neon database using [psql](https://neon.com/docs/connect/query-with-psql-editor).

Download the weather data:

```shell
curl https://assets.timescale.com/docs/downloads/weather_small.tar.gz -o weather_small.tar.gz
tar -xvzf weather_small.tar.gz
```

Load the data into your Neon database, replacing the placeholders with your username, password, host, and database name. You can find these details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.

```shell
psql 'postgresql://<username>:<password>@<host>/<dbname>?sslmode=require&channel_binding=require' -c "\COPY weather_conditions FROM weather_small_conditions.csv CSV"
```

You should receive the following output:

```text
COPY 1000000
```

## Use hyperfunctions to analyze data

You can now start using `timescaledb` functions to analyze the data.

[**first()**](https://docs.timescale.com/api/latest/hyperfunctions/first/)

Get the first temperature reading for each location:

```sql
SELECT device_id, first(temperature, time) AS first_temperature
FROM weather_conditions
GROUP BY device_id
LIMIT 10;
```

The aggregate function [`first`](https://docs.timescale.com/api/latest/hyperfunctions/first/) was used to get the earliest `temperature` value based on `time` within an aggregate group.
You should receive the following output:

```text
| device_id          | first_temperature |
|--------------------|-------------------|
| weather-pro-000000 | 39.9              |
| weather-pro-000001 | 32.4              |
| weather-pro-000002 | 39.8              |
| weather-pro-000003 | 36.8              |
| weather-pro-000004 | 71.8              |
| weather-pro-000005 | 71.8              |
| weather-pro-000006 | 37                |
| weather-pro-000007 | 72                |
| weather-pro-000008 | 31.3              |
| weather-pro-000009 | 84.4              |
```

[**last()**](https://docs.timescale.com/api/latest/hyperfunctions/last/)

Get the latest temperature reading for each location:

```sql
SELECT device_id, last(temperature, time) AS last_temperature
FROM weather_conditions
GROUP BY device_id
LIMIT 10;
```

The aggregate function [`last`](https://docs.timescale.com/api/latest/hyperfunctions/last/) was used to get the latest `temperature` value based on `time` within an aggregate group.

You should receive the following output:

```text
| device_id          | last_temperature |
|--------------------|------------------|
| weather-pro-000000 | 42               |
| weather-pro-000001 | 42               |
| weather-pro-000002 | 72.0             |
| weather-pro-000003 | 71.5             |
| weather-pro-000004 | 73.0             |
| weather-pro-000005 | 70.3             |
| weather-pro-000006 | 42               |
| weather-pro-000007 | 69.9             |
| weather-pro-000008 | 42               |
| weather-pro-000009 | 91               |
```

[**time_bucket()**](https://docs.timescale.com/api/latest/hyperfunctions/time_bucket/)

Calculate the average temperature per hour for a specific device:

```sql
SELECT time_bucket('1 hour', time) AS bucket_time, AVG(temperature) AS avg_temperature
FROM weather_conditions
WHERE device_id = 'weather-pro-000001'
GROUP BY bucket_time
ORDER BY bucket_time
LIMIT 10;
```

The query uses the [`time_bucket`](https://docs.timescale.com/api/latest/hyperfunctions/time_bucket/) hyperfunction to group timestamps into one-hour intervals, calculating the average temperature for each interval from the table for a specific device, and then displays the results for the first 10 intervals.

You should receive the following output:

```text
| bucket_time            | avg_temperature |
|------------------------|-----------------|
| 2016-11-15 12:00:00+00 | 32.76           |
| 2016-11-15 13:00:00+00 | 33.60           |
| 2016-11-15 14:00:00+00 | 34.83           |
| 2016-11-15 15:00:00+00 | 36.26           |
| 2016-11-15 16:00:00+00 | 37.19           |
| 2016-11-15 17:00:00+00 | 38.12           |
| 2016-11-15 18:00:00+00 | 39.02           |
| 2016-11-15 19:00:00+00 | 40.03           |
| 2016-11-15 20:00:00+00 | 40.87           |
| 2016-11-15 21:00:00+00 | 41.93           |
```

[**histogram()**](https://docs.timescale.com/api/latest/hyperfunctions/histogram/)

Bucket device humidity data:

```sql
SELECT device_id, histogram(humidity, 40, 60, 5)
FROM weather_conditions
GROUP BY device_id
LIMIT 10;
```

Here, we use the [`histogram`](https://docs.timescale.com/api/latest/hyperfunctions/histogram/) function to create a distribution of humidity values within specified buckets (`40` to `60` with a size of `5`) for each `device_id`.
You should receive the following output:

```text
| device_id          | histogram           |
|--------------------|---------------------|
| weather-pro-000000 | {0,0,0,710,290,0,0} |
| weather-pro-000001 | {0,0,0,805,186,9,0} |
| weather-pro-000002 | {0,0,0,217,784,0,0} |
| weather-pro-000003 | {0,0,0,510,491,0,0} |
| weather-pro-000004 | {0,0,0,1000,1,0,0}  |
| weather-pro-000005 | {0,0,0,1000,0,0,0}  |
| weather-pro-000006 | {0,0,0,999,1,0,0}   |
| weather-pro-000007 | {0,0,0,1000,0,0,0}  |
| weather-pro-000008 | {0,0,0,834,166,0,0} |
| weather-pro-000009 | {0,0,0,0,0,0,1000}  |
```

[**approximate_row_count()**](https://docs.timescale.com/api/latest/hyperfunctions/approximate_row_count/)

Use the [`approximate_row_count`](https://docs.timescale.com/api/latest/hyperfunctions/approximate_row_count/) function to get the approximate number of rows in the `weather_conditions` hypertable:

```sql
SELECT approximate_row_count('weather_conditions');
```

You should receive the following output:

```text
| approximate_row_count |
|-----------------------|
| 1000000               |
```

## Working with chunks

Chunks are fundamental storage units within hypertables. Instead of storing the entire time-series dataset as a single monolithic table, `timescaledb` breaks it down into smaller, manageable chunks. Each chunk represents a distinct time interval, making data retrieval and maintenance more efficient.

[**show_chunks()**](https://docs.timescale.com/api/latest/hypertable/show_chunks/)

The [`show_chunks`](https://docs.timescale.com/api/latest/hypertable/show_chunks/) function can be used to understand the underlying structure and organization of your time-series data and provides insights into how your hypertable is partitioned.

```sql
SELECT show_chunks('weather_conditions');
```

You should receive the following output:

```text
| show_chunks                             |
|-----------------------------------------|
| _timescaledb_internal._hyper_7_24_chunk |
| _timescaledb_internal._hyper_7_25_chunk |
```

The `show_chunks` output indicates the presence of two internal chunks within your hypertable.

To show detailed chunk information:

```sql
SELECT * FROM chunks_detailed_size('weather_conditions') ORDER BY chunk_name;
```

You should receive the following output:

```text
| chunk_schema          | chunk_name        | table_bytes | index_bytes | toast_bytes | total_bytes | node_name |
|-----------------------|-------------------|-------------|-------------|-------------|-------------|-----------|
| _timescaledb_internal | _hyper_7_24_chunk | 8192        | 16384       | 8192        | 32768       |           |
| _timescaledb_internal | _hyper_7_25_chunk | 82190336    | 8249344     | 8192        | 90447872    |           |
```

[**drop_chunks()**](https://docs.timescale.com/api/latest/hypertable/drop_chunks/)

You can use the [`drop_chunks`](https://docs.timescale.com/api/latest/hypertable/drop_chunks/) function to remove data chunks whose time range falls completely before (or after) a specified time. For example, for a hypothetical hypertable named `temperature_data`:

```sql
SELECT drop_chunks('temperature_data', INTERVAL '1 day');
```

It returns a list of the chunks that were dropped. You should receive output similar to the following:

```text
| drop_chunks                             |
|-----------------------------------------|
| _timescaledb_internal._hyper_4_19_chunk |
| _timescaledb_internal._hyper_4_20_chunk |
```

## Data deletion

You may run into space concerns as data accumulates in `timescaledb` hypertables. While Neon's Postgres service does not support compression, deleting old data is an option if you don't need to hold on to it for long periods of time.
You can use the [`drop_chunks`](https://docs.timescale.com/api/latest/hypertable/drop_chunks/) function outlined above to easily delete outdated chunks from a hypertable. For example, to delete all chunks older than 3 months:

```sql
SELECT drop_chunks('temperature_data', INTERVAL '3 months');
```

The query deletes any chunks that contain only data older than 3 months.

To run this deletion automatically on a schedule, you can set up a cron task. For example, adding this line to the crontab will run the deletion query every day at 1 AM:

```text
0 1 * * * psql -c "SELECT drop_chunks('temperature_data', INTERVAL '3 months')"
```

**Note**: Please be aware that Neon's [Scale to Zero](https://neon.com/docs/guides/scale-to-zero-guide) feature may affect the running of scheduled jobs. It may be necessary to start the compute before running a job.

This will help ensure the hypertable size is managed by deleting old, unneeded data. Tune the interval passed to `drop_chunks` and the cron schedule based on your data retention needs.

## Conclusion

You configured the `timescaledb` extension in Neon and created a hypertable to store weather data. You then executed simple queries and analyzed the data using a combination of standard SQL and `timescaledb` functions, before finally using `drop_chunks()` to delete data.

## Reference

- [TimescaleDB editions](https://docs.timescale.com/about/latest/timescaledb-editions/)
- [TimescaleDB hyperfunctions](https://docs.timescale.com/api/latest/hyperfunctions/)

---

# Source: https://neon.com/llms/extensions-unaccent.txt

# The unaccent extension

> The document details the unaccent extension for Neon, which removes accents from text in PostgreSQL databases, facilitating text normalization and search operations.

## Source

- [The unaccent extension HTML](https://neon.com/docs/extensions/unaccent): The original HTML version of this documentation

The `unaccent` extension for Postgres enables handling of text data in a more user-friendly and language-tolerant way. It allows you to remove accents ([diacritic signs](https://en.wikipedia.org/wiki/Diacritic)) from text strings, making it easier to perform searches and comparisons that are insensitive to accents. This is particularly useful in multilingual applications where users might not consistently use accents when typing search queries.

Imagine a user searching for "Hôtel" but only typing "Hotel". Without `unaccent`, the database might not find the intended results. With `unaccent`, you can ensure that searches are more forgiving and return relevant results regardless of accent variations.

This guide will walk you through the essentials of using the `unaccent` extension. You'll learn how to enable the extension on Neon, understand its key concepts, use it in queries, optimize performance with indexing, and consider its limitations.

## Enable the `unaccent` extension

You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.

```sql
CREATE EXTENSION IF NOT EXISTS unaccent;
```

**Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information.

## Removing accents with `unaccent()`

The primary function provided by the `unaccent` extension is `unaccent()`.
This function takes a text input and returns the same text with accents removed. Let's see it in action with a few examples:

```sql
SELECT unaccent('Hôtel');   -- Hotel
SELECT unaccent('cliché');  -- cliche
SELECT unaccent('naïve');   -- naive
SELECT unaccent('café');    -- cafe
SELECT unaccent('Déjà vu'); -- Deja vu
```

As you can see, `unaccent()` effectively strips the diacritics, transforming words with accents into their unaccented counterparts. This transformation is based on a set of configurable rules, allowing for customization to suit specific language needs.

## Practical usage examples

`unaccent` is most commonly used to enhance text searching, making it more forgiving and user-friendly. Let's explore some typical use cases:

### Basic accent-insensitive searching

Imagine you have a product catalog and want users to be able to search for products regardless of whether they use accents or not. For instance, a user might search for "cafe" or "café" and expect to find products containing "café".

Consider a `products` table with the following data:

```sql
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name TEXT
);

INSERT INTO products (name) VALUES ('cafe'), ('café'), ('Café'), ('Café au lait');
```

Without `unaccent`, a simple `WHERE` clause would differentiate between accented and unaccented characters:

```sql
SELECT * FROM products WHERE name = 'café';
```

```
 id | name
----|------
  2 | café
```

```sql
SELECT * FROM products WHERE name = 'cafe';
```

```
 id | name
----|------
  1 | cafe
```

By applying `unaccent()` to both the column and the search term, you can achieve accent-insensitive matching:

```sql
SELECT * FROM products WHERE unaccent(name) = unaccent('cafe');
```

```
 id | name
----|------
  1 | cafe
  2 | café
```

### Case-insensitive and accent-insensitive searching with `ILIKE`

For even more flexible searching, you can combine `unaccent()` with the [`ILIKE`](https://neon.com/postgresql/postgresql-tutorial/postgresql-like#postgresql-extensions-of-the-like-operator) operator for case-insensitive and accent-insensitive searches. This is particularly useful for free-text search scenarios.

```sql
SELECT * FROM products WHERE unaccent(name) ILIKE unaccent('%cafe%');
```

```
 id | name
----|------
  1 | cafe
  2 | café
  3 | Café
  4 | Café au lait
```

In this example, `ILIKE` handles case-insensitivity (matching 'cafe', 'Cafe', etc.), and `unaccent()` ensures that accents are ignored during the comparison. Applying `unaccent()` to both sides of the `ILIKE` condition is crucial for this to work effectively.

### Integration with Full-Text search

While `unaccent()` can be used directly in `WHERE` clauses, its true power for search applications is realized when it is integrated with Postgres full-text search capabilities. `unaccent` is designed as a text search dictionary. By incorporating it into your text search configurations, you can ensure that indexing and searching operations automatically handle accent removal. This involves creating or modifying text search configurations to include the `unaccent` dictionary in the analysis process. When text is indexed and queried using such a configuration, accents are automatically stripped, leading to efficient and accent-insensitive full-text searches.

**Note** Configuration Modifications in Neon: It's important to note that because `unaccent` is managed by Neon, modifying the default `unaccent.rules` file or other configuration settings requires administrative privileges that are not available to Neon users. If you have specific needs for customized `unaccent` rules or configurations, please [open a support ticket](https://console.neon.tech/app/projects?modal=support) to discuss your requirements with Neon support.
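Creating a text search configuration that chains the `unaccent` dictionary is ordinary SQL, however, and follows the pattern shown in the PostgreSQL documentation. A minimal sketch, in which the configuration name `fr_unaccent` is illustrative:

```sql
-- A text search configuration that strips accents before French stemming.
CREATE TEXT SEARCH CONFIGURATION fr_unaccent ( COPY = french );

ALTER TEXT SEARCH CONFIGURATION fr_unaccent
  ALTER MAPPING FOR hword, hword_part, word
  WITH unaccent, french_stem;

-- Accent-insensitive full-text match: 'Hotel' finds 'Hôtels'.
SELECT to_tsvector('fr_unaccent', 'Hôtels de la Mer')
       @@ to_tsquery('fr_unaccent', 'Hotel'); -- returns true
```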
## Performance and indexing considerations

Using `unaccent()` in queries can have performance implications, especially on large tables. Applying functions in `WHERE` clauses often prevents the database from efficiently using standard indexes.

### Indexing with `unaccent()`

Directly indexing an expression like `unaccent(column)` typically doesn't work because, by default, `unaccent()` is not marked as an `IMMUTABLE` function. Postgres requires functions used in index expressions to be `IMMUTABLE` to guarantee consistent index usage.

To enable indexing with `unaccent()`, you can create an `IMMUTABLE` wrapper function around it. This wrapper function essentially tells Postgres that the function's output will always be the same for a given input, allowing it to be used in index expressions.

Here's an example of creating an `IMMUTABLE` wrapper function:

```sql
CREATE OR REPLACE FUNCTION f_unaccent(text)
RETURNS text AS
$$
  SELECT public.unaccent('public.unaccent', $1);
$$ LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT;
```

**Explanation:**

- `CREATE OR REPLACE FUNCTION f_unaccent(text) RETURNS text`: This defines a new function named `f_unaccent` that takes text as input and returns text.
- `AS $$ ... $$ LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT`: This is the function body, written in SQL, and the important part is declaring it `IMMUTABLE`.
- `SELECT public.unaccent('public.unaccent', $1);`: Inside the wrapper, we are calling the original `unaccent` function, making sure to schema-qualify it as `public.unaccent` for robustness.

Once you have this `IMMUTABLE` wrapper function, you can create indexes on it:

```sql
CREATE INDEX idx_products_name_unaccent ON products (f_unaccent(name));
```

Now, queries using `f_unaccent(name)` in the `WHERE` clause can effectively utilize this index, significantly improving performance for accent-insensitive searches.

```sql
SELECT * FROM products WHERE f_unaccent(name) = f_unaccent('cafe');
-- This query can now use the 'idx_products_name_unaccent' index
```

**Alternative: Generated columns**

Another strategy for optimizing performance is to use generated columns. You can add a new column to your table that stores the unaccented version of your text data. This column can then be indexed directly and queried efficiently.

```sql
ALTER TABLE products
ADD COLUMN name_unaccented text GENERATED ALWAYS AS (f_unaccent(name)) STORED;

CREATE INDEX idx_products_name_unaccent_generated ON products (name_unaccented);

SELECT * FROM products WHERE name_unaccented = 'cafe';
-- This query will use the 'idx_products_name_unaccent_generated' index
-- to find rows where the unaccented 'name' matches 'cafe'
```

Generated columns add storage overhead but can offer performance benefits for read-heavy workloads.

## Limitations

While `unaccent` is very useful, it's important to be aware of its limitations:

- **Rule-based:** `unaccent` operates based on a predefined set of rules defined in its configuration file (`unaccent.rules`). The effectiveness of accent removal depends on the completeness and accuracy of these rules for your target languages.
- **Language specificity:** The default rules are primarily geared towards European languages.
For languages with different diacritic systems or complex character transformations, the default rules might not be sufficient, and customization of the `unaccent.rules` file might be required. - **No contextual understanding:** `unaccent` performs a character-by-character transformation based on its rules. It does not understand the context or meaning of words. In some cases, this might lead to over-simplification or loss of subtle distinctions in meaning that accents might convey in certain languages. ## Conclusion The `unaccent` extension is a valuable tool for handling text data in Postgres, especially in multilingual applications where accent-insensitive searching is essential. By enabling `unaccent`, you can ensure that your database is more user-friendly and tolerant of accent variations in search queries. ## Resources - [`unaccent` Extension in PostgreSQL Documentation](https://www.postgresql.org/docs/current/unaccent.html) --- # Source: https://neon.com/llms/extensions-uuid-ossp.txt # The uuid-ossp extension > The document explains how to use the uuid-ossp extension in Neon to generate universally unique identifiers (UUIDs) within PostgreSQL databases. ## Source - [The uuid-ossp extension HTML](https://neon.com/docs/extensions/uuid-ossp): The original HTML version of this documentation The `uuid-ossp` extension provides a suite of functions for generating Universally Unique Identifiers (UUIDs) directly within your Postgres database. UUIDs are essential for ensuring data uniqueness across distributed systems and are widely used as primary keys and for various other applications requiring unique IDs. This extension offers a variety of UUID generation methods, including time-based, random, and name-based UUIDs, providing flexibility for different use cases. This guide provides an introduction to the `uuid-ossp` extension. You'll learn how to enable the extension on Neon, explore the functions for generating different types of UUIDs, understand common use cases where UUIDs are beneficial, and consider important aspects like performance and security. **Note**: `uuid-ossp` is a widely-used Postgres extension that offers a range of UUID generation methods beyond the basic built-in functions. It is particularly valuable when you need specific types of UUIDs or require deterministic, name-based UUIDs for consistent identifiers. ## Enable the `uuid-ossp` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. ## UUID Functions The `uuid-ossp` extension offers a range of functions for generating UUIDs, each with unique characteristics. Let's explore each function in detail: ### Version 1 UUIDs (time-based) The version 1 UUID generation functions in `uuid-ossp` are based on the time of creation and the MAC address of the generating machine. These UUIDs are suitable for scenarios where time-based ordering is important, and uniqueness across distributed systems is required. 
- `uuid_generate_v1()`: The `uuid_generate_v1()` function generates UUIDs based on the [version 1 algorithm](https://datatracker.ietf.org/doc/html/rfc4122#section-4.2.2). Version 1 UUIDs are time-based, meaning they incorporate the current timestamp and the MAC address of the computer where the UUID is generated. This approach leverages the uniqueness of hardware identifiers and precise time to create UUIDs that are likely to be unique across different systems and over time. ```sql SELECT uuid_generate_v1(); -- 506a753c-02fe-11f0-9122-6f83fcb8d092 (example output) ``` **Important** Privacy and Security Considerations for Version 1 UUIDs: It's crucial to be aware that Version 1 UUIDs embed the MAC address of the generating computer. This can present privacy and security concerns because: - **Machine identification:** The MAC address can potentially be used to identify the specific machine that generated the UUID, raising privacy issues if this information should remain confidential. - **Predictability:** The time component and the structure of Version 1 UUIDs make them somewhat predictable, which could be a security risk in certain applications where UUIDs are used for security-sensitive purposes. If privacy or predictability is a concern, consider using `uuid_generate_v1mc()` or version 4 UUIDs instead which are discussed below. - **`uuid_generate_v1mc()`: Version 1 UUIDs with multicast MAC address** The `uuid_generate_v1mc()` function is similar to `uuid_generate_v1()` but addresses the privacy concerns by using a randomly generated multicast MAC address instead of the actual MAC address of the computer. ```sql SELECT uuid_generate_v1mc(); -- 8b119520-02ff-11f0-9d55-6761ef62a796 (example output) ``` ### Version 3 UUIDs (name-based, MD5 hash) - `uuid_generate_v3(namespace uuid, name text)`: The `uuid_generate_v3(namespace uuid, name text)` function generates version 3 UUIDs. These UUIDs are name-based and deterministic, meaning they are generated by hashing an input `name` using the [MD5 algorithm](https://en.wikipedia.org/wiki/MD5) within a specified `namespace`. For the same `namespace` UUID and input `name`, this function will always produce the exact same UUID. ```sql SELECT uuid_generate_v3(uuid_ns_url(), 'https://example.com'); -- 68794df6-5e20-385f-ab08-bb73f8a433cb (always the same for 'https://example.com') ``` Here, `uuid_ns_url()` is a predefined UUID constant representing the URL namespace, which is used as the `namespace` argument for generating UUIDs based on URLs. Available predefined namespace UUIDs are discussed in the [UUID Constants](https://neon.com/docs/extensions/uuid-ossp#uuid-constants) section below. **Use cases:** - Generating consistent identifiers for entities based on a name, such as creating a UUID for a URL, DNS name, or any other string identifier. - Scenarios where you need to ensure that generating a UUID for the same entity (identified by name within a namespace) always results in the same UUID across different systems or over time. - Content Management Systems where stable identifiers for content pieces are required, regardless of access time or location. ### Version 4 UUIDs (random) - `uuid_generate_v4()`: The `uuid_generate_v4()` function generates version 4 UUIDs, which are derived entirely from random numbers. These are the most common type of UUIDs due to their simplicity and strong guarantee of uniqueness. 
Postgres also provides the built-in function [`gen_random_uuid()`](https://neon.com/postgresql/postgresql-tutorial/postgresql-uuid#generating-uuid-values) which is functionally equivalent to `uuid_generate_v4()`. ```sql SELECT uuid_generate_v4(); -- 08e776b5-0652-431e-a841-5840616b500b (example output) ``` **Key characteristics of Version 4 UUIDs:** - **Randomly generated:** Based on high-quality random number generators. - **High uniqueness probability:** Extremely low probability of collision, making them suitable for most applications requiring unique identifiers. **Use Cases:** - General-purpose unique identifiers where predictability or specific ordering is not required. - Primary keys for database tables, especially in distributed systems. - Identifying records in systems where high randomness and uniqueness are paramount. - Simplifying UUID generation when deterministic or time-based approaches are not necessary. ### Version 5 UUIDs (name-based, SHA-1 hash) - `uuid_generate_v5(namespace uuid, name text)`: The `uuid_generate_v5(namespace uuid, name text)` function is similar to `uuid_generate_v3()` but uses the [SHA-1 hashing algorithm](https://en.wikipedia.org/wiki/SHA-1) instead of MD5. Version 5 UUIDs are also name-based and deterministic, producing the same UUID for the same input namespace and name. ```sql SELECT uuid_generate_v5(uuid_ns_dns(), 'example.com'); -- cfbff0d1-9375-5685-968c-48ce8b15ae17 (always the same for 'example.com') ``` **Tip** Version 3 vs. Version 5 UUIDs: While both Version 3 and Version 5 provide deterministic, name-based UUIDs, Version 5 is generally recommended due to the use of SHA-1 hashing, which is considered more secure than MD5. If security is a significant concern for your application, Version 5 is the better choice. ### UUID constants `uuid-ossp` also provides functions to return predefined UUID constants, which are particularly useful as standard namespace identifiers for Version 3 and Version 5 UUID generation: - **`uuid_nil()`: The Nil UUID constant** The `uuid_nil()` function returns the predefined "nil" UUID constant: `'00000000-0000-0000-0000-000000000000'`. ```sql SELECT uuid_nil(); ``` **Purpose of the Nil UUID:** - **Representing absence:** Similar to `NULL` for other data types, the nil UUID is often used to indicate the absence of a UUID value or as a default placeholder. - **Special value:** It does not correspond to any real-world generated UUID and is a specific, non-existent UUID value for particular use cases. **Use cases:** - Initializing UUID columns when a valid UUID is not yet available. - Using it as a sentinel value in code or database operations to represent "no UUID". - **Namespace UUID constants (`uuid_ns_dns()`, `uuid_ns_url()`, `uuid_ns_oid()`, `uuid_ns_x500()`):** These functions return constant UUIDs that are specifically designated as namespaces for different identifier types, as per the UUID specification. They are intended to be used as the `namespace` argument in `uuid_generate_v3()` and `uuid_generate_v5()` functions. - `uuid_ns_dns()` Represents the DNS namespace, intended for generating UUIDs from DNS names. ```sql SELECT uuid_generate_v5(uuid_ns_dns(), 'example.com'); -- cfbff0d1-9375-5685-968c-48ce8b15ae17 (always the same for 'example.com') SELECT uuid_generate_v3(uuid_ns_dns(), 'example.com'); -- 9073926b-929f-31c2-abc9-fad77ae3e8eb (always the same for 'example.com') ``` - `uuid_ns_url()` Represents the URL namespace, for generating UUIDs from URLs. 
```sql
SELECT uuid_generate_v3(uuid_ns_url(), 'https://example.com');
-- 68794df6-5e20-385f-ab08-bb73f8a433cb (always the same for 'https://example.com')
```

- `uuid_ns_oid()` Represents the ISO Object Identifier (OID) namespace. Note that these OIDs refer to the ASN.1 standard and are distinct from PostgreSQL's internal OIDs.

```sql
SELECT uuid_generate_v5(uuid_ns_oid(), '12345');
-- bf547c8b-0674-5afe-97ad-d6e7556e56fa (always the same for '12345')
```

- `uuid_ns_x500()` Represents the X.500 Distinguished Name (DN) namespace.

```sql
SELECT uuid_generate_v5(uuid_ns_x500(), 'CN=John Doe, DC=example, DC=com');
-- e9ba549f-a675-5490-b054-ad862cb8c1d2 (always the same for 'CN=John Doe, DC=example, DC=com')
```

**Usage of namespace UUID constants:** These constants are crucial for generating deterministic UUIDs based on specific namespaces, ensuring consistent UUIDs for the same input name across different systems.

## Performance and storage considerations

While UUIDs offer significant advantages, it's important to be aware of potential performance and storage implications:

- **Storage size:** UUIDs are 128-bit values (16 bytes), which are larger than typical integer primary keys (4 bytes for `integer`, 8 bytes for `bigint`). This increased size can lead to higher storage requirements, especially in tables with a very large number of rows.
- **Indexing performance:** Randomly generated UUIDs (version 4) can lead to less efficient indexing compared to sequential integer IDs. Inserting rows with random UUIDs as primary keys can cause index fragmentation, as new entries are inserted at random locations in the index. This can slow down write operations and potentially affect read query performance, especially in very large tables and under high write loads. However, using sequential or time-ordered UUIDs (like version 1) can mitigate this issue.

## Conclusion

The `uuid-ossp` extension is a valuable tool for UUID generation in Postgres. It offers diverse functions for creating UUIDs tailored to various needs: random, name-based, or time-based. To effectively use `uuid-ossp`, remember these key recommendations:

- **For general use, version 4 UUIDs (`uuid_generate_v4()` or `gen_random_uuid()`) are ideal for random unique IDs.**
- **For deterministic IDs, choose version 5 (`uuid_generate_v5()`) over version 3 (`uuid_generate_v3()`) for better security.**
- **Use version 1 UUIDs (`uuid_generate_v1()` or `uuid_generate_v1mc()`) only when time-based ordering is essential, keeping privacy implications in mind.**

By selecting the appropriate UUID version based on your requirements, you can ensure the uniqueness and consistency of identifiers in your Postgres database.

## Resources

- [`uuid-ossp` on PostgreSQL Documentation](https://www.postgresql.org/docs/current/uuid-ossp.html)
- [RFC 4122 - A Universally Unique IDentifier (UUID) URN Namespace](https://www.rfc-editor.org/rfc/rfc4122)

---

# Source: https://neon.com/llms/extensions-wal2json.txt

# The wal2json plugin

> The document details the wal2json plugin for Neon, explaining its role in converting PostgreSQL Write-Ahead Logging (WAL) data into JSON format for easier data processing and integration.

## Source

- [The wal2json plugin HTML](https://neon.com/docs/extensions/wal2json): The original HTML version of this documentation

The `wal2json` plugin is a logical replication decoding output plugin for Postgres.
It lets you convert the Write-Ahead Log (WAL) changes into JSON format, making it easier to consume and process database changes in various applications, such as data replication, auditing, event-driven services, and real-time analytics. This guide describes the `wal2json` plugin — how to enable it in Neon, configure its output, and use it to capture and process database changes in JSON format. WAL decoding is crucial for building robust data pipelines, implementing Change Data Capture (CDC) systems, and maintaining data consistency across distributed systems. **Note**: The `wal2json` plugin is included in your Neon project and doesn't require a separate installation. **Version availability:** The `wal2json` plugin is available in all Postgres versions supported by Neon. For the most up-to-date information on supported versions, please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon. ## Enable logical replication Before using the `wal2json` plugin, you need to enable logical replication for your Neon project. Navigate to the **Settings** page in your Neon Project Dashboard, and select **Beta** from the list of options. Click **Enable** to enable logical replication. **Note**: Once enabled for a project, logical replication cannot be reverted. This action triggers a restart of all active compute endpoints in your Neon project. Any active connections will be dropped and will have to reconnect. To verify that logical replication is enabled, navigate to the **SQL Editor** and verify the output of the following query: ```sql SHOW wal_level; wal_level ----------- logical (1 row) ``` For information about using the Neon SQL Editor, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). For information about using the `psql` client with Neon, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor). ## Create a replication slot To start using `wal2json`, you first need to create a replication slot that explicitly specifies `wal2json` as the decoder plugin. You can do this by running the following query: ```sql SELECT 'start' FROM pg_create_logical_replication_slot('test_slot', 'wal2json'); ``` This creates a replication slot named `test_slot` using the `wal2json` plugin. Now, we can query this slot to listen for changes to any tables in the database. ## Example - use `wal2json` to capture changes to a table Suppose we have a table named `inventory` that stores information about products for a retail store. We want to capture changes to this table in real-time and process them using `wal2json`. Run the following query to create the `inventory` table, and insert some sample data: ```sql CREATE TABLE inventory ( id SERIAL PRIMARY KEY, product_name VARCHAR(100), quantity INTEGER, last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); INSERT INTO inventory (product_name, quantity) VALUES ('Widget A', 100), ('Gadget B', 50), ('Gizmo C', 75); ``` With logical decoding enabled, Postgres streams changes to the `inventory` table to the `test_slot` replication slot. Run the following query to observe the messages that have been published to it: ```sql SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', 'on'); ``` This query returns the changes in JSON format. Each change will be represented as a separate JSON object.
```plaintext lsn | xid | data -----------+------+------------------------------------------------------------------------------------------------------------------------- 0/24E7950 | 2055 | { + | | "change": [ + | | ] + | | } 0/24E7D60 | 2056 | { + | | "change": [ + | | { + | | "kind": "insert", + | | "schema": "public", + | | "table": "inventory", + | | "columnnames": ["id", "product_name", "quantity", "last_updated"], + | | "columntypes": ["integer", "character varying(100)", "integer", "timestamp without time zone"],+ | | "columnvalues": [1, "Widget A", 100, "2024-07-30 09:53:26.078749"] + | | } + | | ,{ + | | "kind": "insert", + | | "schema": "public", + | | "table": "inventory", + | | "columnnames": ["id", "product_name", "quantity", "last_updated"], + | | "columntypes": ["integer", "character varying(100)", "integer", "timestamp without time zone"],+ | | "columnvalues": [2, "Gadget B", 50, "2024-07-30 09:53:26.078749"] + | | } + | | ,{ + | | "kind": "insert", + | | "schema": "public", + | | "table": "inventory", + | | "columnnames": ["id", "product_name", "quantity", "last_updated"], + | | "columntypes": ["integer", "character varying(100)", "integer", "timestamp without time zone"],+ | | "columnvalues": [3, "Gizmo C", 75, "2024-07-30 09:53:26.078749"] + | | } + | | ] + | | } (2 rows) ``` There are two rows in the query output above. The first row corresponds to the `CREATE TABLE` statement that we ran earlier. Logical decoding only captures information about DML (data manipulation) events — `INSERT`, `UPDATE`, and `DELETE` statements, hence this row is empty. The second row corresponds to the `INSERT` statement that added rows to the `inventory` table. Next, we update an existing row in the `inventory` table: ```sql UPDATE inventory SET quantity = quantity + 100 WHERE product_name = 'Widget A'; ``` We can now query the `test_slot` replication slot again to see the new information published as a result of the update: ```sql SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', 'on'); ``` This query returns a single row in JSON format, corresponding to the row updated. ```plaintext lsn | xid | data -----------+------+------------------------------------------------------------------------------------------------------------------------- 0/24EC940 | 2057 | { + | | "change": [ + | | { + | | "kind": "update", + | | "schema": "public", + | | "table": "inventory", + | | "columnnames": ["id", "product_name", "quantity", "last_updated"], + | | "columntypes": ["integer", "character varying(100)", "integer", "timestamp without time zone"],+ | | "columnvalues": [1, "Widget A", 200, "2024-07-30 09:53:26.078749"], + | | "oldkeys": { + | | "keynames": ["id"], + | | "keytypes": ["integer"], + | | "keyvalues": [1] + | | } + | | } + | | ] + | | } (1 row) ``` ## Format versions: 1 vs 2 The `wal2json` plugin supports two different output format versions. The default format version is 1, which produces a JSON object per transaction. All new and old tuples are available within this single JSON object. This format is useful when you need to process entire transactions as atomic units. Format version 2 produces a JSON object per tuple (row), with optional JSON objects for the beginning and end of each transaction. This format is more granular and can be useful when you need to process changes on a row-by-row basis. Both formats support various options to include additional properties such as transaction timestamps, schema-qualified names, data types, and transaction IDs. 
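For example, here is a sketch of passing such options as additional argument pairs to `pg_logical_slot_get_changes()` — `include-xids` and `include-timestamp` are standard `wal2json` options that add the transaction ID and commit timestamp to each change:

```sql
SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
    'pretty-print', 'on',
    'include-xids', 'on',
    'include-timestamp', 'on');
```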
To use format version 2, you need to specify it explicitly: ```sql SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'format-version', '2'); ``` To illustrate, we add a couple more product entries to the `inventory` table: ```sql INSERT INTO inventory (product_name, quantity) VALUES ('Widget D', 200), ('Gizmo E', 75); ``` Now, we can query the `test_slot` replication slot again to see the new information published as a result of these inserts: ```sql SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', 'on', 'format-version', '2'); ``` The output of this query appears as follows. You can see that there is a separate JSON object for each row inserted. ```plaintext lsn | xid | data -----------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0/24F18D8 | 3078 | {"action":"B"} 0/24F1940 | 3078 | {"action":"I","schema":"public","table":"inventory","columns":[{"name":"id","type":"integer","value":8},{"name":"product_name","type":"character varying(100)","value":"Widget D"},{"name":"quantity","type":"integer","value":200},{"name":"last_updated","type":"timestamp without time zone","value":"2024-07-30 10:27:45.428407"}]} 0/24F1A48 | 3078 | {"action":"I","schema":"public","table":"inventory","columns":[{"name":"id","type":"integer","value":9},{"name":"product_name","type":"character varying(100)","value":"Gizmo E"},{"name":"quantity","type":"integer","value":75},{"name":"last_updated","type":"timestamp without time zone","value":"2024-07-30 10:27:45.428407"}]} 0/24F1B10 | 3078 | {"action":"C"} (4 rows) ``` ## Use `wal2json` with tables without a primary key `REPLICA IDENTITY` is a table property that determines what information is written to the WAL when a row is updated or deleted. This property is crucial for `wal2json` when working with tables that don't have a primary key. `REPLICA IDENTITY` has four possible settings: 1. `DEFAULT`: Only primary key columns are logged for `UPDATE` and `DELETE` operations. 2. `USING INDEX`: A specified index's columns are logged for `UPDATE` and `DELETE` operations. 3. `FULL`: All columns are logged for `UPDATE` and `DELETE` operations. 4. `NOTHING`: No information is logged for `UPDATE` and `DELETE` operations. Tables use the `DEFAULT` setting by default. For tables without a primary key, this means no information is logged for updates and deletes. Let's create a table without a primary key and see how `wal2json` behaves: ```sql CREATE TABLE products_no_pk ( product_name VARCHAR(100), quantity INTEGER, price DECIMAL(10, 2) ); INSERT INTO products_no_pk (product_name, quantity, price) VALUES ('Widget', 100, 19.99); UPDATE products_no_pk SET quantity = 90 WHERE product_name = 'Widget'; ``` The `wal2json` output for this update operation will not contain any information about the updated row due to the lack of a primary key and the `DEFAULT` replica identity setting.
```plaintext WARNING: table "products_no_pk" without primary key or replica identity is nothing lsn | xid | data -----------+------+--------------------- 0/256D6C8 | 6151 | { + | | "change": [+ | | ] + | | } (1 row) ``` To capture changes for tables without a primary key, we can change the `REPLICA IDENTITY` to `FULL`: ```sql ALTER TABLE products_no_pk REPLICA IDENTITY FULL; UPDATE products_no_pk SET price = 21.99 WHERE product_name = 'Widget'; ``` Now, the `wal2json` output will include both the old and new values for all columns, which can be used to identify the changed row. To verify, we can query the `test_slot` replication slot again: ```sql SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'pretty-print', 'on'); ``` The output of this query appears as follows: ```plaintext lsn | xid | data -----------+------+----------------------------------------------------------------------------------------------------- 0/256E228 | 6152 | { + | | "change": [ + | | ] + | | } 0/256E310 | 6153 | { + | | "change": [ + | | { + | | "kind": "update", + | | "schema": "public", + | | "table": "products_no_pk", + | | "columnnames": ["product_name", "quantity", "price"], + | | "columntypes": ["character varying(100)", "integer", "numeric(10,2)"], + | | "columnvalues": ["Widget", 90, 21.99], + | | "oldkeys": { + | | "keynames": ["product_name", "quantity", "price"], + | | "keytypes": ["character varying(100)", "integer", "numeric(10,2)"],+ | | "keyvalues": ["Widget", 90, 19.99] + | | } + | | } + | | ] + | | } (2 rows) ``` ## Performance considerations When working with `wal2json`, keep the following performance considerations in mind: 1. **Replication slot management**: Unused replication slots can prevent WAL segments from being removed, potentially causing disk space issues. Regularly monitor and clean up unused slots. 2. **Batch processing**: Instead of processing each change individually, consider batching changes for more efficient processing. 3. **Resource usage**: Be mindful of network bandwidth usage, especially when dealing with high-volume changes or when replicating over a wide area network. Additionally, decoding WAL to JSON can be CPU-intensive. Monitor your system's CPU usage and consider scaling your resources if needed. ## Conclusion The `wal2json` plugin is a powerful tool for capturing and processing database changes in JSON format. We've seen how to enable it, configure its output, and use it in various scenarios. Whether you're implementing a data replication system, building an audit trail, or creating an event-driven architecture, `wal2json` provides a flexible and efficient way to work with the Postgres Write-Ahead Log (WAL). ## Resources - [wal2json GitHub Repository](https://github.com/eulerto/wal2json) - [PostgreSQL Logical Decoding](https://www.postgresql.org/docs/current/logicaldecoding.html) - [Manage logical replication in Neon - Decoder plugins](https://neon.com/docs/guides/logical-replication-manage#decoder-plugins) --- # Source: https://neon.com/llms/extensions-xml2.txt # The xml2 extension > The document details the xml2 extension for Neon, explaining its functionality for parsing and querying XML data within PostgreSQL databases. ## Source - [The xml2 extension HTML](https://neon.com/docs/extensions/xml2): The original HTML version of this documentation The `xml2` extension for Postgres provides functions to parse XML data, evaluate XPath queries against it, and perform XSLT transformations. 
This can be useful for applications that need to process or extract information from XML documents stored within the database. ## Enable the `xml2` extension You can enable the extension by running the following `CREATE EXTENSION` statement in the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database. ```sql CREATE EXTENSION IF NOT EXISTS xml2; ``` **Version availability:** Please refer to the [list of all extensions](https://neon.com/docs/extensions/pg-extensions) available in Neon for up-to-date extension version information. **Note**: The `xml2` extension was developed to provide robust XML processing capabilities within Postgres before the SQL/XML standard features were fully integrated. While it offers useful functions for XPath querying and XSLT, the SQL/XML standard now provides a more comprehensive and standardized approach to XML manipulation. ## `xml2` functions The `xml2` module provides functions for XML parsing, XPath querying, and XSLT transformations. ### XML parsing and validation - **`xml_valid(document text) → boolean`** Parses the given XML document string and returns `true` if it is well-formed XML, `false` otherwise. ```sql SELECT xml_valid('<book><title>My Book</title></book>'); -- true SELECT xml_valid('<book><title>My Book</title>'); -- false (not well-formed) ``` ### XPath querying functions These functions evaluate an XPath expression on a given XML document. - **`xpath_string(document text, query text) → text`** Evaluates the XPath query and casts the result to a text string. ```sql SELECT xpath_string('<book><title>My Adventures</title></book>', '/book/title/text()'); -- My Adventures ``` - **`xpath_number(document text, query text) → real`** Evaluates the XPath query and casts the result to a real number. ```sql SELECT xpath_number('<book><price>19.95</price></book>', '/book/price/text()'); -- 19.95 ``` - **`xpath_bool(document text, query text) → boolean`** Evaluates the XPath query and casts the result to a boolean. ```sql SELECT xpath_bool('<book available="true"></book>', '/book/@available="true"'); -- true ``` - **`xpath_nodeset(document text, query text, toptag text, itemtag text) → text`** Evaluates the query and wraps the resulting nodeset in the specified `toptag` and `itemtag` XML tags. If `toptag` or `itemtag` is an empty string, the respective tag is omitted. There are also two-argument and three-argument versions: - `xpath_nodeset(document text, query text)`: Omits both `toptag` and `itemtag`. - `xpath_nodeset(document text, query text, itemtag text)`: Omits `toptag`. ```sql SELECT xpath_nodeset( '<library><book><title>Book A</title></book><book><title>Book B</title></book></library>', '//title', 'results', 'entry' ); -- <results><entry><title>Book A</title></entry><entry><title>Book B</title></entry></results> SELECT xpath_nodeset( '<book><title>Book A</title></book>', '//title/text()' ); -- Book A -- To get XML nodes: SELECT xpath_nodeset( '<library><book><title>Book A</title></book><book><title>Book B</title></book></library>', '//title' ); -- <title>Book A</title><title>Book B</title> ``` - **`xpath_list(document text, query text, separator text) → text`** Evaluates the query and returns multiple text values separated by the specified `separator`. There is also a two-argument version `xpath_list(document text, query text)` which uses a comma (`,`) as the separator. ```sql SELECT xpath_list( '<book><author>Author 1</author><author>Author 2</author></book>', '//author/text()', '; ' ); -- Author 1; Author 2 ``` ### `xpath_table` function The `xpath_table` function is a powerful tool for extracting data from a set of XML documents and returning it as a relational table. `xpath_table(key text, document text, relation text, xpaths text, criteria text) returns setof record` **Parameters:** - `key`: The name of the "key" field from the source table.
This field identifies the record from which each output row came and is returned as the first column. - `document`: The name of the field in the source table containing the XML document. - `relation`: The name of the table or view containing the XML documents. - `xpaths`: One or more XPath expressions, separated by `|`, to extract data. - `criteria`: The content of a `WHERE` clause to filter rows from the `relation`. This cannot be omitted; use `true` to process all rows. The function constructs and executes a SQL `SELECT` statement internally. The `key` and `document` parameters must resolve to exactly two columns in this internal select. `xpath_table` must be used in a `FROM` clause, and an `AS` clause is required to define the output column names and types. The first column in the `AS` clause corresponds to the `key`. **Example:** Suppose you have a table `catalog_items`: ```sql CREATE TABLE catalog_items ( item_sku TEXT PRIMARY KEY, item_details XML, added_on_date DATE ); INSERT INTO catalog_items (item_sku, item_details, added_on_date) VALUES ('WDGT-001', XMLPARSE(DOCUMENT '<item><name>Super Widget</name><stock_level>150</stock_level><category>Gadgets</category></item>'), '2025-03-10'), ('TOOL-005', XMLPARSE(DOCUMENT '<item><name>Mega Wrench</name><stock_level>75</stock_level><category>Tools</category></item>'), '2025-04-02'); ``` You can use `xpath_table` to extract data: ```sql SELECT * FROM xpath_table( 'item_sku', -- The key column from catalog_items 'item_details', -- The XML column from catalog_items 'catalog_items', -- The source table '/item/name/text()|/item/stock_level/text()|/item/category/text()', -- XPath expressions 'added_on_date >= ''2025-01-01''' -- Criteria for filtering ) AS extracted_data( -- Alias for the output table and its columns product_sku TEXT, product_name TEXT, current_stock INTEGER, product_category TEXT ); ``` **Output:** | product_sku | product_name | current_stock | product_category | | :---------- | :----------- | :------------ | :--------------- | | WDGT-001 | Super Widget | 150 | Gadgets | | TOOL-005 | Mega Wrench | 75 | Tools | **Data type conversion:** `xpath_table` internally deals with string representations of XPath results. When you specify a data type (e.g., `INTEGER`) in the `AS` clause, Postgres attempts to convert the string to that type. If conversion fails (e.g., an empty string or non-numeric text to `INTEGER`), an error occurs. It might be safer to extract as `TEXT` and then cast explicitly if data quality is uncertain. ### XSLT functions The `xml2` extension provides functions for XSLT (Extensible Stylesheet Language Transformations). - **`xslt_process(document text, stylesheet text, paramlist text) returns text`** Applies the XSL `stylesheet` to the XML `document` and returns the transformed text. The `paramlist` argument accepts a string containing parameter assignments for the transformation, formatted as key-value pairs separated by commas (e.g., `'name=value,debug=1'`). Note that because the parameter list is parsed naively, individual parameter values within it cannot themselves contain commas. - **`xslt_process(document text, stylesheet text) returns text`** A two-parameter version that applies the stylesheet without passing any external parameters. **Example:** Let's say you have an XML document `my_data.xml`: ```xml <greeting>Hello</greeting> ``` And `my_stylesheet.xsl` contains an XSLT stylesheet that transforms the `<greeting>` element into a `<message>` element: ```xml <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/greeting"> <message><xsl:value-of select="."/></message> </xsl:template> </xsl:stylesheet> ``` You can apply the XSLT transformation using `xslt_process`.
Here's an example of how to do this in Postgres: ```sql DO $$ DECLARE xml_doc TEXT := '<greeting>Hello</greeting>'; xslt_style TEXT := '<?xml version="1.0"?><xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"><xsl:template match="/greeting"><message><xsl:value-of select="."/></message></xsl:template></xsl:stylesheet>'; transformed_xml TEXT; BEGIN transformed_xml := xslt_process(xml_doc, xslt_style); RAISE NOTICE '%', transformed_xml; END $$; -- Output: <message>Hello</message> ``` ## Conclusion The `xml2` extension provides powerful tools for working with XML data in Postgres. It allows you to parse, query, and transform XML documents using XPath and XSLT. This can be particularly useful for applications that need to handle XML data efficiently within the database. ## Resources - [PostgreSQL `xml2` documentation](https://www.postgresql.org/docs/current/xml2.html) - [PostgreSQL XML Data Type](https://neon.com/postgresql/postgresql-tutorial/postgresql-xml-data-type) --- # Source: https://neon.com/llms/functions-age.txt # Postgres age() function > The document explains the usage of the Postgres `age()` function within Neon, detailing how it calculates the interval between two timestamps, which is useful for managing and querying temporal data. ## Source - [Postgres age() function HTML](https://neon.com/docs/functions/age): The original HTML version of this documentation The Postgres `age()` function calculates the difference between two timestamps or the difference between a timestamp and the current date. This function is particularly useful for calculating ages, durations, or time intervals in various applications. For example, you can use it to determine a person's age, calculate the time elapsed since an event, or find the duration of a process or subscription. ## Function signatures The `age()` function has two forms: ```sql age(timestamp, timestamp) -> interval ``` This form produces an interval by subtracting the second timestamp from the first. - First argument: The end timestamp - Second argument: The start timestamp ```sql age(timestamp) -> interval ``` This form subtracts the given timestamp from the timestamp for the current date (at midnight). ## Example usage Let's consider a table called `employees` that stores employee information, including their birth dates. We can use the `age()` function to calculate the age of employees. ```sql CREATE TABLE employees ( id SERIAL PRIMARY KEY, name TEXT, birth_date DATE, hire_date DATE ); INSERT INTO employees (name, birth_date, hire_date) VALUES ('John Doe', '1985-05-15', '2010-03-01'), ('Jane Smith', '1990-08-22', '2015-07-10'), ('Bob Johnson', '1978-12-03', '2005-11-15'); SELECT name, birth_date, age(birth_date) AS age FROM employees; ``` This query calculates the age of each employee based on their birth date. ``` name | birth_date | age -------------+------------+------------------------- John Doe | 1985-05-15 | 39 years 1 mon 10 days Jane Smith | 1990-08-22 | 33 years 10 mons 3 days Bob Johnson | 1978-12-03 | 45 years 6 mons 22 days (3 rows) ``` We can also use the `age()` function with two timestamps to calculate the duration of employment for each employee: ```sql SELECT name, hire_date, age(CURRENT_DATE, hire_date) AS employment_duration FROM employees; ``` This query calculates how long each employee has been with the company. ```text name | hire_date | employment_duration -------------+------------+------------------------- John Doe | 2010-03-01 | 14 years 3 mons 24 days Jane Smith | 2015-07-10 | 8 years 11 mons 15 days Bob Johnson | 2005-11-15 | 18 years 7 mons 10 days (3 rows) ``` ## Advanced examples ### Use `age()` for time-based calculations The `age()` function can be useful for various time-based calculations.
For example, consider a `projects` table that tracks the start date and deadline for projects. We can use `age()` to calculate project durations and remaining time: ```sql WITH projects(name, start_date, deadline) AS ( VALUES ('Project A', '2023-01-15'::DATE, '2024-06-30'::DATE), ('Project B', '2023-05-01'::DATE, '2023-12-31'::DATE), ('Project C', '2024-03-01'::DATE, '2025-02-28'::DATE) ) SELECT name, start_date, deadline, age(deadline, start_date) AS total_duration, age(deadline, CURRENT_DATE) AS remaining_time FROM projects; ``` This query calculates the total duration of each project and the time remaining until the deadline. ```text name | start_date | deadline | total_duration | remaining_time -----------+------------+------------+-----------------------+------------------ Project A | 2023-01-15 | 2024-06-30 | 1 year 5 mons 15 days | 5 days Project B | 2023-05-01 | 2023-12-31 | 7 mons 30 days | -5 mons -25 days Project C | 2024-03-01 | 2025-02-28 | 11 mons 27 days | 8 mons 3 days (3 rows) ``` ### Extract specific units from age intervals You can extract specific units of time (like years, months, or days) from the interval returned by the `age()` function. Here's an example that breaks down the age into years, months, and days: ```sql WITH sample_dates(name, birth_date) AS ( VALUES ('Alice', '1990-03-15'::DATE), ('Bob', '1985-11-30'::DATE), ('Charlie', '1995-07-22'::DATE) ) SELECT name, birth_date, EXTRACT(YEAR FROM age(birth_date)) AS years, EXTRACT(MONTH FROM age(birth_date)) AS months, EXTRACT(DAY FROM age(birth_date)) AS days FROM sample_dates; ``` This query provides a detailed breakdown of each person's age in years, months, and days. ```text name | birth_date | years | months | days ---------+------------+-------+--------+------ Alice | 1990-03-15 | 34 | 3 | 10 Bob | 1985-11-30 | 38 | 6 | 25 Charlie | 1995-07-22 | 28 | 11 | 3 (3 rows) ``` ## Additional considerations ### Negative intervals The `age()` function can return negative intervals if the end timestamp is earlier than the start timestamp. Be mindful of this when using `age()` in calculations or comparisons. ### Alternative functions - `-` operator — Can be used to subtract two dates or timestamps, returning an interval. This is equivalent to using the `age()` function with two timestamps. - `current_date` — Returns the current date (without the time component). Can be used with the `-` operator to calculate an age or duration. ## Resources - [PostgreSQL documentation: Date/Time Functions and Operators](https://www.postgresql.org/docs/current/functions-datetime.html) - [PostgreSQL documentation: Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html) --- # Source: https://neon.com/llms/functions-array_agg.txt # Postgres array_agg() function > The document explains the usage of the Postgres `array_agg()` function in Neon, detailing how it aggregates input values into an array, which is useful for handling and manipulating grouped data within the database. ## Source - [Postgres array_agg() function HTML](https://neon.com/docs/functions/array_agg): The original HTML version of this documentation The Postgres `array_agg()` function collects values from multiple rows into a single array. It's particularly useful for denormalizing data, creating comma-separated lists, or preparing data for JSON output. For example, you can use it to list all products in a category from a products catalog table or all orders for a customer from an orders table.
## Function signature The `array_agg()` function has two forms: ```sql array_agg(expression) -> anyarray ``` - `expression`: The value to be aggregated into an array. This can be a column or expression of any data type. ```sql array_agg(expression ORDER BY sort_expression [ASC | DESC] [NULLS { FIRST | LAST }]) -> anyarray ``` - `expression`: The value to be aggregated into an array. - `ORDER BY`: Specifies the order in which the values should be aggregated. - `sort_expression`: The expression to sort by. - `ASC | DESC`: Specifies ascending or descending order (default is ASC). - `NULLS { FIRST | LAST }`: Specifies whether nulls should be first or last in the ordering (default depends on ASC or DESC). ## Example usage Consider an `orders` table with columns `order_id`, `product_id`, and `quantity`. You can use `array_agg()` to list all the product IDs for each order. ```sql WITH orders AS ( SELECT 1 AS order_id, 101 AS product_id, 2 AS quantity UNION ALL SELECT 1, 102, 1 UNION ALL SELECT 2, 103, 3 UNION ALL SELECT 2, 104, 1 UNION ALL SELECT 3, 101, 1 ) SELECT order_id, array_agg(product_id) AS products FROM orders GROUP BY order_id ORDER BY order_id; ``` This query groups the orders by `order_id` and aggregates the `product_id` values into an array for each order. ```text order_id | products ----------+----------- 1 | {101,102} 2 | {103,104} 3 | {101} (3 rows) ``` ## Advanced examples ### Ordered array aggregation You can specify an order for the elements in the resulting array: ```sql WITH employees AS ( SELECT 1 AS emp_id, 'John' AS name, 'SQL' AS skill UNION ALL SELECT 1, 'John', 'Python' UNION ALL SELECT 1, 'John', 'Java' UNION ALL SELECT 2, 'Jane', 'C++' UNION ALL SELECT 2, 'Jane', 'Ruby' ) SELECT emp_id, name, array_agg(skill ORDER BY skill) AS skills FROM employees GROUP BY emp_id, name ORDER BY emp_id; ``` This query aggregates the listed skills for each employee into an alphabetically ordered array. ```text emp_id | name | skills --------+------+------------------- 1 | John | {Java,Python,SQL} 2 | Jane | {C++,Ruby} (2 rows) ``` ### Combining with other aggregate functions `array_agg()` can be used in combination with other aggregate functions: ```sql WITH sales(category, product, price, sale_date) AS ( VALUES ('Electronics', 'Laptop', 1200, '2023-01-15'::date), ('Electronics', 'Smartphone', 800, '2023-01-20'::date), ('Electronics', 'Tablet', 500, '2023-02-10'::date), ('Books', 'Novel', 20, '2023-02-05'::date), ('Books', 'Textbook', 100, '2023-02-15'::date), ('Books', 'Cookbook', 30, '2023-03-01'::date) ) SELECT category, array_agg( (SELECT product || ': ' || SUM(price)::text FROM sales s2 WHERE s2.category = s1.category AND s2.product = s1.product GROUP BY s2.product) ) AS product_sales FROM sales s1 GROUP BY category; ``` This query aggregates each category's products into an array, along with the total sales for each product.
```text category | product_sales -------------+-------------------------------------------------- Electronics | {"Laptop: 1200","Smartphone: 800","Tablet: 500"} Books | {"Novel: 20","Textbook: 100","Cookbook: 30"} (2 rows) ``` ### Using array_agg() with DISTINCT You can use `DISTINCT` with `array_agg()` to remove duplicates from the output array: ```sql WITH user_logins AS ( SELECT 1 AS user_id, 'Chrome' AS browser UNION ALL SELECT 1, 'Firefox' UNION ALL SELECT 1, 'Chrome' UNION ALL SELECT 2, 'Safari' UNION ALL SELECT 2, 'Chrome' ) SELECT user_id, array_agg(DISTINCT browser ORDER BY browser) AS browsers_used FROM user_logins GROUP BY user_id; ``` This query creates an array of the browsers used by each user, without duplicates and in alphabetical order. ```text user_id | browsers_used ---------+------------------ 1 | {Chrome,Firefox} 2 | {Chrome,Safari} (2 rows) ``` ## Additional considerations ### Performance implications While `array_agg()` is powerful, it can be memory-intensive for large datasets. The function needs to hold all the aggregated values in memory before creating the final array. For very large result sets, consider using pagination or limiting the number of rows before aggregating. ### NULL handling By default, `array_agg()` includes NULL values in the resulting array. If you want to exclude NULL values, you can use it in combination with `FILTER`: ```sql SELECT array_agg(column_name) FILTER (WHERE column_name IS NOT NULL) FROM table_name; ``` ### Alternative functions - `string_agg()`: Concatenates string values into a single string, separated by a delimiter. - `json_agg()`: Aggregates values into a JSON array. ## Resources - [PostgreSQL documentation: Aggregate Functions](https://www.postgresql.org/docs/current/functions-aggregate.html) - [PostgreSQL documentation: Array Functions and Operators](https://www.postgresql.org/docs/current/functions-array.html) --- # Source: https://neon.com/llms/functions-array_length.txt # Postgres array_length() function > The document details the usage of the Postgres `array_length()` function, which determines the number of elements in a specified dimension of an array, specifically within the context of Neon's database environment. ## Source - [Postgres array_length() function HTML](https://neon.com/docs/functions/array_length): The original HTML version of this documentation The Postgres `array_length()` function is used to determine the length of an array along a specified dimension. It's particularly useful when working with multi-dimensional arrays or when you need to perform operations based on the size of an array. Examples include data analysis where you might need to filter rows based on the number of elements in an array column. Another use case might be application development where you need to validate the size of array inputs since Postgres doesn't natively have a fixed-size array data type. ## Function signature The `array_length()` function has the following signature: ```sql array_length(anyarray, int) -> int ``` - `anyarray`: The input array to measure. - `int`: The array dimension to measure (1-based index). ## Example usage Consider a `products` table with a `categories` column that contains arrays of product categories. We can use `array_length()` to find out how many categories each product belongs to. 
```sql WITH products(product_name, categories) AS ( VALUES ('Laptop', ARRAY['Electronics', 'Computers']), ('Coffee Maker', ARRAY['Appliances', 'Kitchen', 'Electronics']), ('Book', ARRAY['Books']) ) SELECT product_name, categories, array_length(categories, 1) AS category_count FROM products; ``` This query returns the product name, the array of categories it is listed in, and the count of categories for each product. ```text product_name | categories | category_count --------------+----------------------------------+---------------- Laptop | {Electronics,Computers} | 2 Coffee Maker | {Appliances,Kitchen,Electronics} | 3 Book | {Books} | 1 (3 rows) ``` ## Advanced examples ### Filter rows based on array length You can use `array_length()` in a `WHERE` clause to filter rows based on the size of an array. ```sql WITH orders(order_id, items) AS ( VALUES (1, ARRAY['Shirt', 'Pants', 'Shoes']), (2, ARRAY['Book']), (3, ARRAY['Laptop', 'Mouse', 'Keyboard', 'Monitor']) ) SELECT * FROM orders WHERE array_length(items, 1) > 2; ``` This query selects all orders that contain more than two items. ```text order_id | items ----------+--------------------------------- 1 | {Shirt,Pants,Shoes} 3 | {Laptop,Mouse,Keyboard,Monitor} (2 rows) ``` ### Use with multi-dimensional arrays `array_length()` can be used with multi-dimensional arrays by specifying the dimension to measure. ```sql WITH matrix AS ( SELECT ARRAY[[1, 2, 3], [4, 5, 6]] AS data ) SELECT array_length(data, 1) AS rows, array_length(data, 2) AS columns, array_length(data, 3) AS depth FROM matrix; ``` This query returns the number of rows and columns in a 2D array. There is no third dimension in this case, so `array_length(data, 3)` returns NULL. ```text rows | columns | depth ------+---------+------- 2 | 3 | (1 row) ``` ### Use in a CHECK constraint You can use `array_length()` in a `CHECK` constraint to enforce a condition based on the size of an array column. For example, consider a table that stores the starting lineup of basketball teams as an array. ```sql CREATE TABLE basketball_team ( team_name TEXT PRIMARY KEY, starting_lineup TEXT[], CONSTRAINT check_starting_lineup CHECK (array_length(starting_lineup, 1) = 5) ); ``` This constraint ensures that the `starting_lineup` array column always contains exactly five elements. ```sql INSERT INTO basketball_team (team_name, starting_lineup) VALUES ('Lakers', ARRAY['LeBron James', 'Anthony Davis', 'Russell Westbrook', 'Carmelo Anthony', 'Dwight Howard']); -- Success INSERT INTO basketball_team (team_name, starting_lineup) VALUES ('Warriors', ARRAY['Stephen Curry', 'Klay Thompson', 'Draymond Green']); -- ERROR: new row for relation "basketball_team" violates check constraint "check_starting_lineup" -- DETAIL: Failing row contains (Warriors, {"Stephen Curry","Klay Thompson","Draymond Green"}). ``` ## Additional considerations ### Null handling `array_length()` returns NULL if the input array is NULL or if the specified dimension does not exist. Always handle potential NULL values in your queries to avoid unexpected results. ### Indexing Note that Postgres array dimensions are indexed starting from 1, not 0. If you specify a dimension less than 1, `array_length()` returns NULL. ```sql SELECT array_length(ARRAY[1, 2, 3], 0); -- Returns NULL ``` ### Performance implications `array_length()` is generally efficient, but be cautious when using it in `WHERE` clauses on large tables. Consider creating an expression index on the array length if you frequently filter based on this condition, as sketched below.
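For instance, here is a minimal sketch of such an expression index, assuming `orders` is a real table with an `items` array column like the one in the filtering example above:

```sql
-- Index the computed array length so filters such as
-- WHERE array_length(items, 1) > 2 can use an index scan
CREATE INDEX orders_item_count_idx ON orders (array_length(items, 1));
```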
### Alternative functions - `cardinality()` - Returns the total number of elements in an array, or NULL if the array is NULL. For non-empty one-dimensional arrays, it's equivalent to `array_length(anyarray, 1)`; for empty arrays, `cardinality()` returns 0 while `array_length()` returns NULL. - `array_dims()` - Returns a text representation of the array's dimensions. - `array_upper()` and `array_lower()` - Return the upper and lower bounds of the specified array dimension. ## Resources - [PostgreSQL documentation: Array Functions and Operators](https://www.postgresql.org/docs/current/functions-array.html) - [PostgreSQL documentation: Arrays](https://www.postgresql.org/docs/current/arrays.html) --- # Source: https://neon.com/llms/functions-array_to_json.txt # Postgres array_to_json() function > The document details the usage of the Postgres `array_to_json()` function within Neon, explaining how to convert PostgreSQL arrays into JSON format for data manipulation and storage. ## Source - [Postgres array_to_json() function HTML](https://neon.com/docs/functions/array_to_json): The original HTML version of this documentation You can use the `array_to_json` function to convert a Postgres array into its `JSON` representation, transforming an array of values into a `JSON` array. This helps facilitate integration with web services, APIs, and web frameworks that heavily rely on `JSON`. ## Function signature ```sql array_to_json(anyarray [, pretty_bool]) ``` Line feeds will be added between dimension 1 elements if `pretty_bool` is true. ## `array_to_json` example Let's consider a scenario where an e-commerce platform stores customer preferences as an array of string values in a `customers` table. **customers** ```sql CREATE TABLE customers ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, preferences TEXT[] ); INSERT INTO customers (name, preferences) VALUES ('John Doe', '{clothing, electronics}'); INSERT INTO customers (name, preferences) VALUES ('Jane Doe', '{books, music, travel}'); ``` ``` id | name | preferences ----+----------+------------------------ 1 | John Doe | {clothing,electronics} 2 | Jane Doe | {books,music,travel} ``` You can use the `array_to_json` function as shown to transform the array of string values into a `JSON` array: ```sql SELECT id, name, array_to_json(preferences) AS json_preferences FROM customers; ``` This query returns the following result: ``` id | name | json_preferences ----+----------+---------------------------- 1 | John Doe | ["clothing","electronics"] 2 | Jane Doe | ["books","music","travel"] ``` ## Advanced examples Let's now take a look at a few advanced examples. ### Use `array_to_json` with `array_agg` Imagine you have an e-commerce website with user's shopping cart items, as shown in the following `cart_items` table: **cart_items** ```sql CREATE TABLE cart_items ( id SERIAL PRIMARY KEY, user_id INTEGER NOT NULL, product_id INTEGER NOT NULL, quantity INTEGER NOT NULL ); INSERT INTO cart_items (user_id, product_id, quantity) VALUES (1, 123, 1), (1, 456, 2), (1, 789, 3); INSERT INTO cart_items (user_id, product_id, quantity) VALUES (2, 123, 2), (2, 456, 3), (2, 789, 4); ``` ``` id | user_id | product_id | quantity ----+---------+------------+---------- 1 | 1 | 123 | 1 2 | 1 | 456 | 2 3 | 1 | 789 | 3 4 | 2 | 123 | 2 5 | 2 | 456 | 3 6 | 2 | 789 | 4 ``` You can utilize `array_to_json` to create a clean and efficient `JSON` representation of the cart contents for a specific user. In the example below, the `row_to_json` function converts each row of the result set into a `JSON` object.
The `array_agg` function is an aggregate function that aggregates multiple values into an array. It is used here to aggregate the `JSON` objects created by `row_to_json` into a `JSON` array. ```sql SELECT array_to_json( array_agg(row_to_json(t)) ) AS items FROM ( SELECT product_id, quantity FROM cart_items WHERE user_id = 1 ) t; ``` This query returns the following result: ```shell items --------------------------------------------------------------------------------------------------- [{"product_id":123,"quantity":1},{"product_id":456,"quantity":2},{"product_id":789,"quantity":3}] ``` And this is the resulting `JSON` structure: ```json [ { "product_id": 123, "quantity": 1 }, { "product_id": 456, "quantity": 2 }, { "product_id": 789, "quantity": 3 } ] ``` ### Handling `NULL` in `array_to_json` The `array_to_json` function handles `NULL` values gracefully, representing them as `JSON` `null` within the resulting array. Let's consider a `survey_responses` table representing a survey where each participant can provide multiple responses to different questions. Some participants may not answer all questions, leading to `NULL` values in the data. ```sql CREATE TABLE survey_responses ( participant_id SERIAL PRIMARY KEY, participant_name VARCHAR(50), responses VARCHAR(50)[] ); -- Insert sample data with NULL responses INSERT INTO survey_responses (participant_name, responses) VALUES ('Participant A', ARRAY['Yes', 'No', 'Maybe']), ('Participant B', ARRAY['Yes', NULL, 'No']), ('Participant C', ARRAY[NULL, 'No', 'Yes']), ('Participant D', ARRAY['Yes', 'No', NULL]); ``` ``` participant_id | participant_name | responses ----------------+------------------+---------------- 1 | Participant A | {Yes,No,Maybe} 2 | Participant B | {Yes,NULL,No} 3 | Participant C | {NULL,No,Yes} 4 | Participant D | {Yes,No,NULL} ``` The following query converts each `responses` array to `JSON`, using `COALESCE` to fall back to an empty array if the column itself is `NULL`: ```sql SELECT participant_id, participant_name, array_to_json(COALESCE(responses, ARRAY[]::VARCHAR[])) AS responses_json FROM survey_responses; ``` This query returns the following result, with `NULL` values correctly represented as `JSON` `null` in the `responses_json` array: ``` participant_id | participant_name | responses_json ----------------+------------------+---------------------- 1 | Participant A | ["Yes","No","Maybe"] 2 | Participant B | ["Yes",null,"No"] 3 | Participant C | [null,"No","Yes"] 4 | Participant D | ["Yes","No",null] ``` ## Additional considerations This section outlines additional considerations when using the `array_to_json` function. ### JSON functions In scenarios where more control over the `JSON` structure is required, consider using the `json_build_array` and `json_build_object` functions. These functions allow for a more fine-grained construction of `JSON` objects and arrays (see the sketch at the end of this section). ### Formatting `array_to_json` output with `pretty_bool` The `pretty_bool` parameter, when set to `true`, instructs `array_to_json` to format the output with indentation and line breaks for improved readability. Execute the earlier query with `pretty_bool` as `true`: ```sql SELECT array_to_json( array_agg(row_to_json(t)), true ) AS items FROM ( SELECT product_id, quantity FROM cart_items WHERE user_id = 1 ) t; ``` This query returns the following result: ``` items ----------------------------------- [{"product_id":123,"quantity":1},+ {"product_id":456,"quantity":2},+ {"product_id":789,"quantity":3}] ``` **Note**: The output displayed in `psql` might be truncated or wrap long lines for visual clarity.
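To illustrate the comparison, here is a minimal sketch that produces the same cart output as the earlier `array_to_json`/`array_agg` query, using `json_build_object` to construct each object and `json_agg` to aggregate them:

```sql
SELECT json_agg(json_build_object('product_id', product_id, 'quantity', quantity)) AS items
FROM cart_items
WHERE user_id = 1;
```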
## Resources - [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html) - [PostgreSQL documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html) --- # Source: https://neon.com/llms/functions-avg.txt # Postgres avg() function > The document explains the usage of the Postgres avg() function within Neon, detailing its syntax and application for calculating the average value of a numeric column in a database query. ## Source - [Postgres avg() function HTML](https://neon.com/docs/functions/avg): The original HTML version of this documentation The Postgres `avg()` function calculates the arithmetic mean of a set of numeric values. This function is particularly useful when you need to understand typical values in a dataset, compare different groups, or identify trends over time. For example, you might use it to calculate the average order value for an e-commerce platform, the average response time for a web service, or the mean of sensor readings over time. ## Function signature The `avg()` function has the simple form: ```sql avg(expression) -> numeric type ``` - `expression`: Any numeric expression or column name whose average you want to calculate. The `avg()` function returns an output of the type `numeric` when applied to integer or numeric values. When used with floating-point values, the output type is `double precision`. ## Example usage Consider a table `weather_data` tracking the temperature readings for different cities. It has the columns `date`, `city`, and `temperature`. We will use the `avg()` function to analyze this data. ```sql CREATE TABLE weather_data ( date DATE, city TEXT, temperature NUMERIC ); INSERT INTO weather_data (date, city, temperature) VALUES ('2024-03-01', 'New York', 5.5), ('2024-03-01', 'Los Angeles', 22.0), ('2024-03-01', 'Chicago', 2.0), ('2024-03-02', 'New York', 7.0), ('2024-03-02', 'Los Angeles', 23.5), ('2024-03-02', 'Chicago', 3.5), ('2024-03-03', 'New York', 6.5), ('2024-03-03', 'Los Angeles', 21.5), ('2024-03-03', 'Chicago', 1.0); ``` ### Calculating the average temperature To calculate the average temperature reading across all cities and dates, you can use the following query: ```sql SELECT avg(temperature) AS avg_temperature FROM weather_data; ``` This query computes the average of all values in the `temperature` column. ```text avg_temperature --------------------- 10.2777777777777778 (1 row) ``` ### Calculating the average temperature by city You can use `avg()` with a `GROUP BY` clause to calculate averages for different cities: ```sql SELECT city, avg(temperature) AS avg_temperature FROM weather_data GROUP BY city ORDER BY avg_temperature DESC; ``` This query returns the average temperature recorded for each city, ordered by the highest average temperature: ```text city | avg_temperature -------------+--------------------- Los Angeles | 22.3333333333333333 New York | 6.3333333333333333 Chicago | 2.1666666666666667 (3 rows) ``` ## Advanced examples ### Using avg() with a FILTER clause Postgres allows you to use a `FILTER` clause with aggregate functions to selectively include rows in the calculation: ```sql SELECT city, avg(temperature) as avg_temperature, avg(temperature) FILTER (WHERE date >= '2024-03-03') AS avg_temperature_since_3rd FROM weather_data GROUP BY city; ``` This query calculates the average temperature for each city and the average temperature since March 3rd, 2024.
```text city | avg_temperature | avg_temperature_since_3rd -------------+---------------------+--------------------------- Chicago | 2.1666666666666667 | 1.00000000000000000000 Los Angeles | 22.3333333333333333 | 21.5000000000000000 New York | 6.3333333333333333 | 6.5000000000000000 (3 rows) ``` ### Using avg() in a subquery You can use `avg()` in a subquery to compare individual values against the average: ```sql WITH temp_diff AS ( SELECT date, city, temperature, temperature - (SELECT avg(temperature) FROM weather_data) AS temp_diff_from_avg FROM weather_data ) SELECT * FROM temp_diff ORDER BY abs(temp_diff_from_avg) DESC LIMIT 5; ``` This query calculates the difference between each temperature reading and the overall average temperature, and returns the top 5 records with the largest deviations: ```text date | city | temperature | temp_diff_from_avg ------------+-------------+-------------+--------------------- 2024-03-02 | Los Angeles | 23.5 | 13.2222222222222222 2024-03-01 | Los Angeles | 22.0 | 11.7222222222222222 2024-03-03 | Los Angeles | 21.5 | 11.2222222222222222 2024-03-02 | New York | 7.0 | -3.2777777777777778 2024-03-03 | New York | 6.5 | -3.7777777777777778 (5 rows) ``` ### Calculating a moving average We can use `avg()` as a window function to calculate a moving average over the specified window of rows. ```sql SELECT date, city, temperature, avg(temperature) OVER ( PARTITION BY city ORDER BY date ROWS BETWEEN 2 PRECEDING AND CURRENT ROW ) AS moving_avg_temp FROM weather_data ORDER BY city, date; ``` This query calculates a 3-day moving average of temperature readings for each city, alongside the current temperature: ```text date | city | temperature | moving_avg_temp ------------+-------------+-------------+--------------------- 2024-03-01 | Chicago | 2.0 | 2.0000000000000000 2024-03-02 | Chicago | 3.5 | 2.7500000000000000 2024-03-03 | Chicago | 1.0 | 2.1666666666666667 2024-03-01 | Los Angeles | 22.0 | 22.0000000000000000 2024-03-02 | Los Angeles | 23.5 | 22.7500000000000000 2024-03-03 | Los Angeles | 21.5 | 22.3333333333333333 2024-03-01 | New York | 5.5 | 5.5000000000000000 2024-03-02 | New York | 7.0 | 6.2500000000000000 2024-03-03 | New York | 6.5 | 6.3333333333333333 (9 rows) ``` ## Additional considerations ### Handling NULL values The `avg()` function automatically ignores NULL values in its calculations. If all values are NULL, it returns NULL. ### Precision and rounding The `avg()` function returns results at the full available precision, which can mean many more decimal places than you need. You may want to use the `round()` function to control the number of decimal places in the result: ```sql SELECT round(avg(temperature), 2) AS avg_temperature FROM weather_data; ``` ### Performance implications When working with large datasets, calculating averages can be resource-intensive, especially when combined with complex `GROUP BY` clauses or subqueries. For analytics applications, consider using materialized views or pre-aggregating data for frequently used averages. ## Alternative functions - `percentile_cont()`: Calculates a continuous percentile value. It can be used to compute the median or other percentiles. Note that it is an ordered-set aggregate function and requires a `WITHIN GROUP` clause (see the sketch below). - `mode()`: Returns the most frequent value in a set. It is also an ordered-set aggregate function.
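For example, here is a minimal sketch of the `WITHIN GROUP` syntax, computing the median temperature from the `weather_data` table used above:

```sql
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY temperature) AS median_temperature
FROM weather_data;
```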
## Resources - [PostgreSQL documentation: Aggregate Functions](https://www.postgresql.org/docs/current/functions-aggregate.html) - [PostgreSQL documentation: Mathematical Functions and Operators](https://www.postgresql.org/docs/current/functions-math.html) --- # Source: https://neon.com/llms/functions-concat.txt # Postgres concat() function > The document explains the usage of the Postgres `concat()` function within Neon, detailing how to concatenate multiple strings into a single string in SQL queries. ## Source - [Postgres concat() function HTML](https://neon.com/docs/functions/concat): The original HTML version of this documentation The `concat()` function in Postgres is used to concatenate two or more strings into a single string. It is a variadic function, meaning it can accept any number of arguments. It is useful for combining data from multiple columns, generating custom identifiers or labels, or constructing dynamic SQL statements. ## Function signature The `concat()` function has two forms: ```sql concat(str "any" [, str "any" [, ...] ]) → text ``` - `str`: The strings/values to concatenate. Numeric values are automatically converted to strings, while `NULL` values are treated as empty strings. ```sql concat(variadic str "any"[]) → text ``` - `variadic str`: An array of strings/values to concatenate. This form is useful when you have an array of strings to concatenate. ## Example usage Consider a table `customers` with `first_name` and `last_name` columns. We can use `concat()` to combine these into a full name. ```sql WITH customers AS ( SELECT 'John' AS first_name, 'Doe' AS last_name UNION ALL SELECT 'Jane' AS first_name, 'Smith' AS last_name ) SELECT concat(first_name, ' ', last_name) AS full_name FROM customers; ``` This query concatenates the `first_name`, a space character, and the `last_name` to generate the `full_name`. ```text full_name ------------- John Doe Jane Smith (2 rows) ``` We can concatenate more than two strings by providing additional arguments. ```sql WITH products AS ( SELECT 'Laptop' AS name, 'A' AS variant, 100 AS price UNION ALL SELECT 'Kindle' AS name, NULL AS variant, 200 AS price UNION ALL SELECT 'Table' AS name, 'C' AS variant, 300 AS price ) SELECT concat(name, CASE WHEN variant IS NOT NULL THEN ' - Variant ' ELSE '' END, variant, ' ($', price, ')') AS product_info FROM products; ``` This query generates a descriptive `product_info` string by concatenating the `name`, `variant`, and `price` columns along with some constant text. We used a `CASE` statement to conditionally include the variant in the output. ```text product_info --------------------------- Laptop - Variant A ($100) Kindle ($200) Table - Variant C ($300) (3 rows) ``` ## Advanced examples ### Concatenate an array of strings You can use the `variadic` form of `concat()` to concatenate an array of strings. ```sql WITH data AS ( SELECT ARRAY['apple', 'banana', 'cherry'] AS fruits ) SELECT concat(variadic fruits) AS fruit_string FROM data; ``` This query concatenates the elements of the `fruits` array into a single string. ```text fruit_string ---------------- applebananacherry (1 row) ``` ### Concatenate columns to generate custom keys `concat()` can be used to generate custom identifiers as keys, which you can use for further processing or analysis. 
```sql WITH page_interactions AS ( SELECT 1 AS user_id, '/home' AS page, '2023-06-01 10:00:00' AS ts UNION ALL SELECT 1 AS user_id, '/products' AS page, '2023-06-01 10:30:00' AS ts UNION ALL SELECT 2 AS user_id, '/home' AS page, '2023-06-01 11:00:00' AS ts UNION ALL SELECT 1 AS user_id, '/home' AS page, '2023-06-01 12:00:00' AS ts ) SELECT unique_visit, count(*) AS num_interactions FROM ( SELECT ts, concat(user_id, ':', page) AS unique_visit FROM page_interactions ) AS visits GROUP BY unique_visit; ``` This query generates a unique identifier for each page visit by concatenating the `user_id` and `page` columns. We then count the number of interactions for each unique visit. ```text unique_visit | num_interactions --------------+------------------ 1:/home | 2 2:/home | 1 1:/products | 1 (3 rows) ``` ## Additional considerations ### Handling NULL values Any null arguments to `concat()` are treated as empty strings in the output. This is in contrast to the behavior of the `||` operator, which treats `NULL` values as `NULL`. ```sql SELECT concat('Hello', NULL, 'World') AS join_concat, 'Hello' || NULL || 'World' AS join_operator; ``` Pick the right function based on how you want to handle `NULL` values. ```text join_concat | join_operator -------------+--------------- HelloWorld | (1 row) ``` ### Alternative functions - `concat_ws`: Concatenates strings with a separator string between each element. - `string_agg`: An aggregation function that combines strings from a column into a single string with a separator. - `||` operator: Can also be used to concatenate strings. It treats `NULL` values differently than `concat()`. ## Resources - [PostgreSQL documentation: String functions](https://www.postgresql.org/docs/current/functions-string.html) --- # Source: https://neon.com/llms/functions-count.txt # Postgres COUNT() function > The document explains the usage and syntax of the Postgres COUNT() function within Neon, detailing how to count rows in a database query effectively. ## Source - [Postgres COUNT() function HTML](https://neon.com/docs/functions/count): The original HTML version of this documentation The Postgres `COUNT()` function counts the number of rows in a result set or the number of non-null values in a specific column. It's useful for data analysis, reporting, and understanding the size and composition of your datasets. Some common use cases include calculating the total number of records in a table, finding the number of distinct values in a column, or determining how many rows meet certain conditions. ## Function signatures The `COUNT()` function has two main forms: ```sql COUNT(*) -> bigint ``` - Counts the total number of rows in the result set. ```sql COUNT([DISTINCT] expression) -> bigint ``` - Counts the number of rows where the input expression is not NULL. - `DISTINCT` is an optional keyword that removes duplicate values before counting. ## Example usage Consider an `orders` table that tracks orders placed by customers of an online store. It has columns `order_id`, `customer_id`, `product_id`, `order_amount`, and `order_date`. We'll use the `COUNT()` function to analyze this data.
```sql CREATE TABLE orders ( order_id SERIAL PRIMARY KEY, customer_id INTEGER NOT NULL, product_id INTEGER, order_amount DECIMAL(10, 2) NOT NULL, order_date TIMESTAMP NOT NULL ); INSERT INTO orders (customer_id, product_id, order_amount, order_date) VALUES (1, 101, 150.00, '2023-01-15 10:30:00'), (2, 102, 75.50, '2023-01-16 11:45:00'), (1, 103, 200.00, '2023-02-01 09:15:00'), (3, 104, 50.25, '2023-02-10 14:20:00'), (2, 105, 125.75, '2023-03-05 16:30:00'), (4, NULL, 90.00, '2023-03-10 13:00:00'), (1, 106, 180.50, '2023-04-02 11:10:00'), (3, 107, 60.25, '2023-04-15 10:45:00'), (5, 108, 110.00, '2023-05-01 15:20:00'), (2, 109, 95.75, '2023-05-20 12:30:00'); ``` ### Count all rows To get the total number of orders, you can use `COUNT(*)`: ```sql SELECT COUNT(*) AS total_orders FROM orders; ``` This query will return the total number of rows in the `orders` table. ```text total_orders -------------- 10 (1 row) ``` ### Count non-null values To count how many orders have a `product_id` (assuming some orders might not have a product associated): ```sql SELECT COUNT(product_id) AS orders_with_product FROM orders; ``` This query will return the number of orders where `product_id` is not NULL. ```text orders_with_product --------------------- 9 (1 row) ``` ### Count distinct values To find out how many unique customers have placed orders: ```sql SELECT COUNT(DISTINCT customer_id) AS unique_customers FROM orders; ``` This query will return the number of distinct `customer_id` values in the `orders` table. ```text unique_customers ------------------ 5 (1 row) ``` ## Advanced examples We use the `orders` table created in the previous section to demonstrate more use cases of the `COUNT()` function. ### Combine COUNT() with GROUP BY You can use `COUNT()` with `GROUP BY` to get counts for different categories: ```sql SELECT DATE_TRUNC('month', order_date) AS month, COUNT(*) AS orders_per_month FROM orders GROUP BY DATE_TRUNC('month', order_date) ORDER BY month; ``` This query counts the number of orders for each month. ```text month | orders_per_month ---------------------+------------------ 2023-01-01 00:00:00 | 2 2023-02-01 00:00:00 | 2 2023-03-01 00:00:00 | 2 2023-04-01 00:00:00 | 2 2023-05-01 00:00:00 | 2 (5 rows) ``` ### Use COUNT() in a subquery You can use `COUNT()` in a subquery to filter based on counts: ```sql SELECT customer_id, COUNT(*) AS order_count FROM orders GROUP BY customer_id HAVING COUNT(*) > ( SELECT AVG(order_count) FROM ( SELECT COUNT(*) AS order_count FROM orders GROUP BY customer_id ) AS customer_order_counts ); ``` This query finds customers who have placed more orders than the average number of orders per customer. ```text customer_id | order_count -------------+------------- 2 | 3 1 | 3 (2 rows) ``` ### Combine COUNT() with CASE You can use `COUNT()` with `CASE` statements to only count rows that meet specific conditions: ```sql SELECT COUNT(*) AS total_orders, COUNT(CASE WHEN order_amount > 100 THEN 1 END) AS high_value_orders, COUNT(CASE WHEN order_amount <= 100 THEN 1 END) AS low_value_orders FROM orders; ``` This query counts the total number of orders, as well as the number of high-value and low-value orders. 
```text
 total_orders | high_value_orders | low_value_orders
--------------+-------------------+------------------
           10 |                 5 |                5
(1 row)
```

### Use COUNT() with FILTER clause

Postgres also allows using a `FILTER` clause with aggregate functions, which can be more readable than `CASE` statements:

```sql
SELECT
    COUNT(*) AS total_orders,
    COUNT(*) FILTER (WHERE order_date >= '2023-04-01') AS recent_orders
FROM orders;
```

This query counts the total number of orders, as well as the number of orders placed on or after April 1, 2023.

```text
 total_orders | recent_orders
--------------+---------------
           10 |             4
(1 row)
```

## Additional considerations

### Performance implications

`COUNT(*)` is generally faster than `COUNT(column)` or `COUNT(DISTINCT column)` because it doesn't need to check for NULL values or uniqueness. However, on very large tables, even `COUNT(*)` can be slow if it needs to scan the entire table. For frequently used counts, consider maintaining a separate counter table or using materialized views to improve performance.

### NULL handling

Neither `COUNT(column)` nor `COUNT(DISTINCT column)` counts NULL values. If you need to include NULL values in your count, use `COUNT(*)` or `COUNT(COALESCE(column, 0))`.

### Alternative approaches

- For approximate counts of distinct values in very large datasets, consider the HyperLogLog algorithm, available through extensions like `postgresql-hll`.
- For faster counts on large tables, consider using estimated counts based on table statistics with `pg_class.reltuples`.

## Resources

- [PostgreSQL documentation: Aggregate Functions](https://www.postgresql.org/docs/current/functions-aggregate.html)
- [PostgreSQL documentation: FILTER Clause for Aggregate Functions](https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES)

---

# Source: https://neon.com/llms/functions-current_timestamp.txt

# Postgres current_timestamp() function

> The document details the usage of the Postgres `current_timestamp()` function within Neon, explaining its syntax and behavior for retrieving the current date and time in SQL queries.

## Source

- [Postgres current_timestamp() function HTML](https://neon.com/docs/functions/current_timestamp): The original HTML version of this documentation

The Postgres `current_timestamp` function returns the current date and time with time zone. The `now()` function is an alias. This function is particularly useful for timestamping database entries, calculating time differences, or implementing time-based business logic. For example, you can use it to record the time a user logs in, or when the status of a purchase order changes. You can also use the current time to calculate time-based metrics and schedule periodic tasks.

## Function signature

The `current_timestamp` function has two forms:

```sql
current_timestamp -> timestamp with time zone
```

This form returns the current timestamp with time zone at the start of the current transaction. Note that there are no parentheses in this form.

```sql
current_timestamp(precision) -> timestamp with time zone
```

- `precision` (optional): An integer specifying the number of fractional digits in the seconds field. It can range from 0 to 6. If omitted, the result has the full available precision.

## Example usage

Let's consider a table called `user_logins` that tracks user login activity. We can use `current_timestamp` to record the exact time a user logs in.
```sql CREATE TABLE user_logins ( user_id INT, login_time TIMESTAMP WITH TIME ZONE ); ``` This `INSERT` query adds a new login record with the current timestamp. ```sql INSERT INTO user_logins (user_id, login_time) VALUES (1, current_timestamp); SELECT * FROM user_logins; ``` The `SELECT` query retrieves the login record, showing the user ID and the timestamp of the login. ```text user_id | login_time ---------+------------------------------ 1 | 2024-06-25 07:31:32.85829+00 (1 row) ``` We can also specify `current_timestamp` as the default value for a timestamp column when creating the table. For example, consider the query below, where we set up a table to track purchase orders and add some records: ```sql CREATE TABLE purchase_orders ( order_id SERIAL PRIMARY KEY, order_date TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp ); INSERT INTO purchase_orders (order_id) VALUES (1); INSERT INTO purchase_orders (order_id) VALUES (2); ``` This query creates a table to store purchase orders, with the `order_date` column set to the current timestamp by default. When inserting new records, the `order_date` column will automatically be populated with the current timestamp. ```sql SELECT * FROM purchase_orders; ``` This query retrieves all purchase orders, showing the order ID and the timestamp when each order was created. ```text order_id | order_date ----------+------------------------------- 1 | 2024-06-25 07:39:15.241256+00 2 | 2024-06-25 07:39:15.307045+00 (2 rows) ``` ## Advanced examples ### Use `current_timestamp` to query recent data We can use `current_timestamp` in a `SELECT` statement to compare with stored timestamps and fetch recent records. For example, to retrieve all login records from the past 6 hours, you can use `current_timestamp` in the `WHERE` clause: ```sql WITH user_logins(user_id, login_time) AS ( VALUES (1, current_timestamp - INTERVAL '2 hours'), (2, current_timestamp - INTERVAL '12 hours'), (3, current_timestamp - INTERVAL '23 hours'), (4, current_timestamp - INTERVAL '1 day 2 hours'), (5, current_timestamp - INTERVAL '30 minutes'), (1, current_timestamp - INTERVAL '45 minutes'), (2, current_timestamp - INTERVAL '18 hours'), (6, current_timestamp - INTERVAL '5 minutes') ) SELECT user_id, login_time, current_timestamp - login_time AS time_since_login FROM user_logins WHERE login_time > current_timestamp - INTERVAL '6 hours'; ``` This query retrieves all logins from the past 6 hours and calculates how long ago each login occurred. ```text user_id | login_time | time_since_login ---------+-------------------------------+------------------ 1 | 2024-06-25 05:48:53.094862+00 | 02:00:00 5 | 2024-06-25 07:18:53.094862+00 | 00:30:00 1 | 2024-06-25 07:03:53.094862+00 | 00:45:00 6 | 2024-06-25 07:43:53.094862+00 | 00:05:00 (4 rows) ``` ### Specify timestamp precision for `current_timestamp` You can specify the precision of the timestamp when needed: ```sql SELECT current_timestamp(3) AS ts_with_milliseconds, current_timestamp(6) AS ts_with_microseconds, current_timestamp(0) AS ts_without_fraction; ``` This query computes the current timestamp value with different levels of precision: milliseconds, microseconds, and without fractional seconds. 
```text ts_with_milliseconds | ts_with_microseconds | ts_without_fraction ----------------------------+-------------------------------+------------------------ 2024-06-25 07:52:14.903+00 | 2024-06-25 07:52:14.903483+00 | 2024-06-25 07:52:15+00 (1 row) ``` ### Use `current_timestamp` with triggers You can use `current_timestamp` in combination with a default value and an update trigger to automatically maintain creation and modification timestamps for records. For example, run the following query to create a table storing articles for a blog: ```sql CREATE TABLE articles ( id SERIAL PRIMARY KEY, title TEXT, content TEXT, created_at TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp(3), updated_at TIMESTAMP WITH TIME ZONE DEFAULT current_timestamp(3) ); CREATE OR REPLACE FUNCTION update_modified_column() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = current_timestamp(3); RETURN NEW; END; $$ language 'plpgsql'; CREATE TRIGGER update_article_modtime BEFORE UPDATE ON articles FOR EACH ROW EXECUTE FUNCTION update_modified_column(); INSERT INTO articles (title, content) VALUES ('First Article', 'Content here'); INSERT INTO articles (title, content) VALUES ('Second Article', 'Content here'); ``` This query creates a table to store articles, with columns for the title, content, and creation and update timestamps. It also defines a trigger that updates the `updated_at` column whenever an article is modified. To verify, run the following query that updates the content for the first article: ```sql SELECT pg_sleep(1); -- simulate some delay before update UPDATE articles SET content = 'Updated content' WHERE id = 1; SELECT * FROM articles; ``` This query returns the following output, showing the updated content and the update timestamp for the first article: ```text id | title | content | created_at | updated_at ----+----------------+-----------------+----------------------------+---------------------------- 2 | Second Article | Content here | 2024-06-25 08:04:50.343+00 | 2024-06-25 08:04:50.343+00 1 | First Article | Updated content | 2024-06-25 08:04:50.277+00 | 2024-06-25 08:04:57.297+00 (2 rows) ``` ## Additional considerations ### Timezone awareness `current_timestamp` returns a value in the timezone of the current session, which defaults to the server's timezone unless explicitly set in the session. This is important to note when working with timestamps across different timezones. ### Alternative functions - `now()` - An alias for `current_timestamp`. - `transaction_timestamp()` - Returns the current timestamp at the start of the current transaction. Equivalent to `current_timestamp`. - `statement_timestamp()` - Returns the current timestamp at the start of the current statement. - `clock_timestamp()` - Returns the current timestamp, changing even within a single SQL statement. ## Resources - [PostgreSQL documentation: Date/Time Functions and Operators](https://www.postgresql.org/docs/current/functions-datetime.html) - [PostgreSQL documentation: Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html) --- # Source: https://neon.com/llms/functions-date_trunc.txt # Postgres date_trunc() function > The document explains the usage of the Postgres `date_trunc()` function within Neon, detailing how it truncates timestamps to specified precision levels, aiding in precise time-based data manipulation. 
## Source

- [Postgres date_trunc() function HTML](https://neon.com/docs/functions/date_trunc): The original HTML version of this documentation

The Postgres `date_trunc()` function truncates a timestamp or interval to a specified precision. This function is particularly useful for grouping time-series data and performing time-based calculations. For example, it can be used to generate monthly reports, analyze hourly trends, or group events by time period.

## Function signature

The `date_trunc()` function has the following form:

```sql
date_trunc(field, source [, time_zone ]) -> timestamp / interval
```

- `field`: A string literal specifying the precision to which to truncate the input value. Valid values include `microseconds`, `milliseconds`, `second`, `minute`, `hour`, `day`, `week`, `month`, `quarter`, `year`, `decade`, `century`, and `millennium`.
- `source`: The timestamp or interval value to be truncated.
- `time_zone` (optional): The timezone in which to perform the truncation. If omitted, the truncation is performed in the session's default timezone.

The function returns a timestamp or interval value truncated to the specified precision, i.e., fields less significant than the specified precision are set to zero.

## Example usage

Let's consider a table called `sales` that tracks daily sales data. We can use `date_trunc` to group sales by different time periods.

```sql
CREATE TABLE sales (
    sale_date TIMESTAMP WITH TIME ZONE,
    amount DECIMAL(10, 2)
);

INSERT INTO sales (sale_date, amount) VALUES
    ('2024-03-01 08:30:00+00', 100.50),
    ('2024-03-01 14:45:00+00', 200.75),
    ('2024-03-02 10:15:00+00', 150.25),
    ('2024-04-15 09:00:00+00', 300.00),
    ('2024-05-20 16:30:00+00', 250.50);

-- Group sales by month
SELECT
    date_trunc('month', sale_date) AS month,
    SUM(amount) AS total_sales
FROM sales
GROUP BY date_trunc('month', sale_date)
ORDER BY month;
```

This query groups sales by month, summing the total sales for each month.
```text month | total_sales ------------------------+------------- 2024-03-01 00:00:00+00 | 451.50 2024-04-01 00:00:00+00 | 300.00 2024-05-01 00:00:00+00 | 250.50 (3 rows) ``` We can further refine the output by extracting the month and year from the truncated timestamp: ```sql SELECT EXTRACT(YEAR FROM date_trunc('month', sale_date)) AS year, EXTRACT(MONTH FROM date_trunc('month', sale_date)) AS month, SUM(amount) AS total_sales FROM sales GROUP BY year, month ORDER BY year, month; ``` This query groups sales by year and month, providing a more readable output: ```text year | month | total_sales ------+-------+------------- 2024 | 3 | 451.50 2024 | 4 | 300.00 2024 | 5 | 250.50 (3 rows) ``` ## Advanced examples ### Use `date_trunc` with different precisions We can use `date_trunc` with different precision levels to analyze data at each granularity: ```sql WITH sample_data(event_time) AS ( VALUES ('2024-03-15 14:30:45.123456+00'::TIMESTAMP WITH TIME ZONE), ('2024-06-22 09:15:30.987654+00'::TIMESTAMP WITH TIME ZONE), ('2024-11-07 23:59:59.999999+00'::TIMESTAMP WITH TIME ZONE) ) SELECT event_time, date_trunc('year', event_time) AS year_trunc, date_trunc('quarter', event_time) AS quarter_trunc, date_trunc('month', event_time) AS month_trunc, date_trunc('week', event_time) AS week_trunc, date_trunc('day', event_time) AS day_trunc, date_trunc('hour', event_time) AS hour_trunc, date_trunc('minute', event_time) AS minute_trunc, date_trunc('second', event_time) AS second_trunc, date_trunc('millisecond', event_time) AS millisecond_trunc FROM sample_data; ``` This query demonstrates how `date_trunc` works with different precision levels, from year down to millisecond. ```text event_time | year_trunc | quarter_trunc | month_trunc | week_trunc | day_trunc | hour_trunc | minute_trunc | second_trunc | millisecond_trunc -------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+---------------------------- 2024-03-15 14:30:45.123456+00 | 2024-01-01 00:00:00+00 | 2024-01-01 00:00:00+00 | 2024-03-01 00:00:00+00 | 2024-03-11 00:00:00+00 | 2024-03-15 00:00:00+00 | 2024-03-15 14:00:00+00 | 2024-03-15 14:30:00+00 | 2024-03-15 14:30:45+00 | 2024-03-15 14:30:45.123+00 2024-06-22 09:15:30.987654+00 | 2024-01-01 00:00:00+00 | 2024-04-01 00:00:00+00 | 2024-06-01 00:00:00+00 | 2024-06-17 00:00:00+00 | 2024-06-22 00:00:00+00 | 2024-06-22 09:00:00+00 | 2024-06-22 09:15:00+00 | 2024-06-22 09:15:30+00 | 2024-06-22 09:15:30.987+00 2024-11-07 23:59:59.999999+00 | 2024-01-01 00:00:00+00 | 2024-10-01 00:00:00+00 | 2024-11-01 00:00:00+00 | 2024-11-04 00:00:00+00 | 2024-11-07 00:00:00+00 | 2024-11-07 23:00:00+00 | 2024-11-07 23:59:00+00 | 2024-11-07 23:59:59+00 | 2024-11-07 23:59:59.999+00 (3 rows) ``` ### Use `date_trunc` with timezones The `date_trunc` function can be used with specific timezones: ```sql SELECT date_trunc('day', '2024-03-15 23:30:00+00'::TIMESTAMP WITH TIME ZONE) AS utc_trunc, date_trunc('day', '2024-03-15 23:30:00+00'::TIMESTAMP WITH TIME ZONE, 'America/New_York') AS ny_trunc, date_trunc('day', '2024-03-15 23:30:00+00'::TIMESTAMP WITH TIME ZONE, 'Asia/Tokyo') AS tokyo_trunc; ``` This query shows how `date_trunc` behaves differently when truncating to the day in different timezones. 
```text
       utc_trunc        |        ny_trunc        |      tokyo_trunc
------------------------+------------------------+------------------------
 2024-03-15 00:00:00+00 | 2024-03-15 04:00:00+00 | 2024-03-15 15:00:00+00
(1 row)
```

### Use `date_trunc` for time-based analysis

Below, we use `date_trunc` to analyze user activity patterns for a hypothetical social media application:

```sql
CREATE TABLE user_activities (
    user_id INT,
    activity_type VARCHAR(50),
    activity_time TIMESTAMP WITH TIME ZONE
);

INSERT INTO user_activities (user_id, activity_type, activity_time) VALUES
    (1, 'login', '2024-03-01 08:30:00+00'),
    (2, 'login', '2024-03-01 12:30:00+00'),
    (2, 'post', '2024-03-03 09:15:00+00'),
    (1, 'comment', '2024-03-05 10:45:00+00'),
    (3, 'login', '2024-03-08 14:00:00+00'),
    (2, 'logout', '2024-03-08 16:30:00+00'),
    (1, 'logout', '2024-03-12 18:00:00+00'),
    (3, 'post', '2024-03-15 19:30:00+00'),
    (3, 'logout', '2024-03-18 20:45:00+00');

-- Analyze daily activity pattern
SELECT
    date_trunc('day', activity_time) AS day,
    activity_type,
    COUNT(*) AS activity_count
FROM user_activities
GROUP BY date_trunc('day', activity_time), activity_type
ORDER BY day, activity_type;
```

This query uses `date_trunc` to group user activities by each day.

```text
          day           | activity_type | activity_count
------------------------+---------------+----------------
 2024-03-01 00:00:00+00 | login         |              2
 2024-03-03 00:00:00+00 | post          |              1
 2024-03-05 00:00:00+00 | comment       |              1
 2024-03-08 00:00:00+00 | login         |              1
 2024-03-08 00:00:00+00 | logout        |              1
 2024-03-12 00:00:00+00 | logout        |              1
 2024-03-15 00:00:00+00 | post          |              1
 2024-03-18 00:00:00+00 | logout        |              1
(8 rows)
```

### Use `date_trunc` with interval types

The `date_trunc` function can also be used with interval data:

```sql
SELECT
    date_trunc('hour', INTERVAL '2 days 3 hours 40 minutes') AS truncated_interval,
    date_trunc('day', '2024-03-15 23:30:00+00'::TIMESTAMPTZ - '2023-09-14 11:20:00+00'::TIMESTAMPTZ) AS truncated_day;
```

This query truncates the first interval to hour precision, while the second column truncates the difference between two timestamps to day precision.

```text
 truncated_interval | truncated_day
--------------------+---------------
 2 days 03:00:00    | 183 days
(1 row)
```

## Additional considerations

### Timezone awareness

When using `date_trunc` with timestamps, the function truncates in the session's default timezone unless an explicit `time_zone` argument is provided. As shown in the previous section, the truncation result can vary depending on the timezone.

### Truncating intervals

When truncating intervals, the `date_trunc` function zeroes out all fields less significant than the specified precision. Note that the output might not be intuitive, since it depends on how the interval is defined. For example, the query below attempts to truncate a month from an interval specified as some number of days.

```sql
SELECT
    date_trunc('month', '183 days'::INTERVAL) AS colA,
    date_trunc('month', '2 years 3 months'::INTERVAL) AS colB;
```

This query outputs the following:

```text
   cola   |      colb
----------+----------------
 00:00:00 | 2 years 3 mons
(1 row)
```

`date_trunc` does not convert days into months. The first input interval has no month component, so even though 183 days is longer than a month, truncating to month precision discards the days and returns an empty interval. The second input interval has a month component, so the output is the input interval truncated to the month.
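If you want day-based intervals to carry over into the month field before truncating, one option is to normalize the interval first. A minimal sketch using `justify_interval()`, which folds each 30-day chunk into a month (note that the 30-day-month convention is an approximation):

```sql
SELECT
    justify_interval('183 days'::INTERVAL) AS normalized,                     -- 6 mons 3 days
    date_trunc('month', justify_interval('183 days'::INTERVAL)) AS truncated; -- 6 mons
```

With the interval normalized, truncating to month precision retains the six months instead of returning an empty interval.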
### Performance considerations

When using `date_trunc` in WHERE clauses or for grouping large datasets, consider creating an index on the truncated values to improve query performance. Note that an index expression must be immutable: because `sale_date` is a `timestamp with time zone` column, truncating it directly depends on the session timezone, so pin the expression to an explicit timezone (UTC below is just an example choice):

```sql
CREATE INDEX idx_sales_month ON sales (date_trunc('month', sale_date AT TIME ZONE 'UTC'));
```

This creates an index on the monthly truncated sale dates, which can speed up queries that group or filter by month using the same expression.

## Resources

- [PostgreSQL documentation: Date/Time Functions and Operators](https://www.postgresql.org/docs/current/functions-datetime.html)
- [PostgreSQL documentation: Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html)

---

# Source: https://neon.com/llms/functions-dense_rank.txt

# Postgres dense_rank() function

> The document explains the usage of the Postgres `dense_rank()` function within Neon, detailing how it assigns ranks to rows in a result set without gaps in ranking values.

## Source

- [Postgres dense_rank() function HTML](https://neon.com/docs/functions/dense_rank): The original HTML version of this documentation

You can use the `dense_rank` function to assign a rank to each distinct row within a result set. It provides a non-gapped ranking of values, which is particularly useful when dealing with datasets where ties need to be acknowledged without leaving gaps in the ranking sequence.

## Function signature

```sql
dense_rank() OVER (
    [PARTITION BY partition_expression, ... ]
    ORDER BY sort_expression [ASC | DESC], ...
)
```

## `dense_rank` example

Let's say we have a `student_scores` table of students along with their name and score:

```sql
CREATE TABLE student_scores (
    student_id SERIAL PRIMARY KEY,
    student_name VARCHAR(50) NOT NULL,
    score INT NOT NULL
);

INSERT INTO student_scores (student_name, score)
VALUES
    ('Alice', 85),
    ('Bob', 92),
    ('Charlie', 78),
    ('David', 92),
    ('Eve', 85),
    ('Frank', 78);
```

**student_scores**

```
| student_id | student_name | score |
|------------|--------------|-------|
| 1          | Alice        | 85    |
| 2          | Bob          | 92    |
| 3          | Charlie      | 78    |
| 4          | David        | 92    |
| 5          | Eve          | 85    |
| 6          | Frank        | 78    |
```

You can use `dense_rank` to assign a rank to each row in the result set:

```sql
SELECT
    student_id,
    student_name,
    score,
    dense_rank() OVER (ORDER BY score DESC) AS rank
FROM
    student_scores;
```

This query returns the following values:

```
| student_id | student_name | score | rank |
|------------|--------------|-------|------|
| 2          | Bob          | 92    | 1    |
| 4          | David        | 92    | 1    |
| 1          | Alice        | 85    | 2    |
| 5          | Eve          | 85    | 2    |
| 3          | Charlie      | 78    | 3    |
| 6          | Frank        | 78    | 3    |
```

## Advanced examples

This section shows advanced usage examples for the `dense_rank` function.

### `dense_rank` with `PARTITION BY` and `ORDER BY` clause

Let's modify the previous example to include a `class_id` column to represent different classes:

**student_scores_by_class**

```sql
CREATE TABLE student_scores_by_class (
    student_id SERIAL PRIMARY KEY,
    student_name VARCHAR(50) NOT NULL,
    score INT NOT NULL,
    class_id INT NOT NULL
);

INSERT INTO student_scores_by_class (student_name, score, class_id)
VALUES
    ('Alice', 85, 1),
    ('Bob', 92, 1),
    ('Charlie', 78, 1),
    ('David', 92, 2),
    ('Eve', 85, 2),
    ('Frank', 78, 2);
```

```
| student_id | student_name | score | class_id |
|------------|--------------|-------|----------|
| 1          | Alice        | 85    | 1        |
| 2          | Bob          | 92    | 1        |
| 3          | Charlie      | 78    | 1        |
| 4          | David        | 92    | 2        |
| 5          | Eve          | 85    | 2        |
| 6          | Frank        | 78    | 2        |
```

The `PARTITION BY` clause below is used in conjunction with a ranking function to divide the result set into partitions based on one or more columns.
Within each partition, the ranking function operates independently.

```sql
SELECT
    student_id,
    student_name,
    score,
    class_id,
    dense_rank() OVER (PARTITION BY class_id ORDER BY score DESC) AS rank_within_class
FROM
    student_scores_by_class;
```

This query returns the following values:

```
| student_id | student_name | score | class_id | rank_within_class |
|------------|--------------|-------|----------|-------------------|
| 2          | Bob          | 92    | 1        | 1                 |
| 1          | Alice        | 85    | 1        | 2                 |
| 3          | Charlie      | 78    | 1        | 3                 |
| 4          | David        | 92    | 2        | 1                 |
| 5          | Eve          | 85    | 2        | 2                 |
| 6          | Frank        | 78    | 2        | 3                 |
```

This partitions the result set into two groups based on the `class_id` column, and the ranking is performed independently within each class. As a result, students are ranked within their respective classes, and the ranking starts fresh for each class.

### Filter `dense_rank` results in `WHERE` clause

To filter on `dense_rank` results in a `WHERE` clause, move the function into a common table expression (CTE). Let's say you want to find the dense rank for the top two scores within each class:

```sql
WITH RankedScores AS (
    SELECT
        student_id,
        student_name,
        score,
        class_id,
        dense_rank() OVER (PARTITION BY class_id ORDER BY score DESC) AS dense_rank
    FROM
        student_scores_by_class
)
SELECT
    student_id,
    student_name,
    score,
    class_id,
    dense_rank
FROM
    RankedScores
WHERE
    dense_rank <= 2;
```

This query returns the following values:

```
| student_id | student_name | score | class_id | dense_rank |
|------------|--------------|-------|----------|------------|
| 2          | Bob          | 92    | 1        | 1          |
| 1          | Alice        | 85    | 1        | 2          |
| 4          | David        | 92    | 2        | 1          |
| 5          | Eve          | 85    | 2        | 2          |
```

## Additional considerations

This section covers additional considerations for the `dense_rank` function.

### How is `dense_rank` different from the `rank` function?

The `rank` function also assigns the same rank to tied rows, but it leaves gaps in the ranking sequence when there are ties: if two or more rows are assigned the same rank, the following ranks are skipped.

```sql
SELECT
    student_id,
    student_name,
    score,
    rank() OVER (ORDER BY score DESC) AS rank
FROM
    student_scores;
```

This query returns the following values:

```
| student_id | student_name | score | rank |
|------------|--------------|-------|------|
| 2          | Bob          | 92    | 1    |
| 4          | David        | 92    | 1    |
| 1          | Alice        | 85    | 3    |
| 5          | Eve          | 85    | 3    |
| 3          | Charlie      | 78    | 5    |
| 6          | Frank        | 78    | 5    |
```

Alice and Eve, who share the second-highest score, both receive rank 3 rather than 2, and Charlie and Frank receive rank 5; ranks 2 and 4 are skipped, leaving gaps in the ranking sequence. When using `dense_rank`, Alice and Eve both have a rank of 2, and there is no gap in the ranking sequence.

### Aggregations

You can combine `dense_rank` with other functions like `COUNT`, `SUM`, `AVG` for aggregations.

Use with `COUNT`:

```sql
SELECT
    class_id,
    dense_rank() OVER (ORDER BY COUNT(*) DESC) AS student_count_rank,
    COUNT(*) AS student_count
FROM
    student_scores_by_class
GROUP BY
    class_id;
```

This query returns the following values:

```text
| class_id | student_count_rank | student_count |
|----------|--------------------|---------------|
| 2        | 1                  | 3             |
| 1        | 1                  | 3             |
```

Use with `SUM`:

```sql
SELECT
    class_id,
    dense_rank() OVER (ORDER BY SUM(score) DESC) AS total_score_rank,
    SUM(score) AS total_score
FROM
    student_scores_by_class
GROUP BY
    class_id;
```

This query ranks the classes based on their total scores, assigning the highest rank to the class with the highest total score.
This query returns the following values:

```
| class_id | total_score_rank | total_score |
|----------|------------------|-------------|
| 2        | 1                | 255         |
| 1        | 1                | 255         |
```

Use with `AVG`:

```sql
SELECT
    class_id,
    dense_rank() OVER (ORDER BY AVG(score) DESC) AS average_score_rank,
    AVG(score) AS average_score
FROM
    student_scores_by_class
GROUP BY
    class_id;
```

This query ranks the classes based on their average scores, assigning the highest rank to the class with the highest average score.

This query returns the following values:

```
| class_id | average_score_rank | average_score       |
|----------|--------------------|---------------------|
| 2        | 1                  | 85.0000000000000000 |
| 1        | 1                  | 85.0000000000000000 |
```

### Indexing

Creating indexes on the columns specified in the `ORDER BY` (sorting) and `PARTITION BY` (partitioning) clauses can significantly improve performance. In this case, queries on the `student_scores_by_class` table would benefit from indexes on the `class_id` and `score` columns.

## Resources

- [PostgreSQL documentation: Window Functions](https://www.postgresql.org/docs/current/functions-window.html)
- [PostgreSQL documentation: Window Function Calls](https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS)

---

# Source: https://neon.com/llms/functions-extract.txt

# Postgres extract() function

> The document explains the usage of the Postgres `extract()` function within Neon, detailing how to retrieve specific subfields such as year, month, or day from date and time values in a database.

## Source

- [Postgres extract() function HTML](https://neon.com/docs/functions/extract): The original HTML version of this documentation

The Postgres `extract()` function retrieves specific components (such as year, month, or day) from date/time values where the source is of the type `timestamp`, `date`, `time`, or `interval`. This function is particularly useful for data analysis, reporting, and manipulating date and time data. For example, it can be used to group data by year, filter records for specific months, or calculate age based on birth dates.

## Function signature

The `extract()` function has the following form:

```sql
extract(field FROM source) -> numeric
```

- `field`: A string literal specifying the component to extract. Valid values include `century`, `day`, `decade`, `dow`, `doy`, `epoch`, `hour`, `isodow`, `isoyear`, `microseconds`, `millennium`, `milliseconds`, `minute`, `month`, `quarter`, `second`, `timezone`, `timezone_hour`, `timezone_minute`, `week`, and `year`.
- `source`: The date, time, timestamp, or interval value from which to extract the component.

The function returns a numeric value representing the extracted component.

## Example usage

Let's consider a table called `events` that tracks various events with their timestamps. We can use `extract()` to analyze different aspects of these events.

```sql
CREATE TABLE events (
    event_id SERIAL PRIMARY KEY,
    event_name VARCHAR(100),
    event_timestamp TIMESTAMP WITH TIME ZONE
);

INSERT INTO events (event_name, event_timestamp) VALUES
    ('Conference A', '2024-03-15 09:00:00+00'),
    ('Workshop B', '2024-06-22 14:30:00+00'),
    ('Seminar C', '2024-09-10 11:15:00+00'),
    ('Conference D', '2024-12-05 10:00:00+00'),
    ('Workshop E', '2025-02-18 13:45:00+00');

-- Extract year and month from event timestamps
SELECT
    event_name,
    EXTRACT(YEAR FROM event_timestamp) AS event_year,
    EXTRACT(MONTH FROM event_timestamp) AS event_month
FROM events
ORDER BY event_timestamp;
```

This query extracts the year and month from each event's timestamp.
```text event_name | event_year | event_month --------------+------------+------------- Conference A | 2024 | 3 Workshop B | 2024 | 6 Seminar C | 2024 | 9 Conference D | 2024 | 12 Workshop E | 2025 | 2 (5 rows) ``` You can use the extracted components for further analysis, filtering, or grouping. For example, we can count the number of events by quarter: ```sql -- Count events by quarter SELECT EXTRACT(YEAR FROM event_timestamp) AS year, EXTRACT(QUARTER FROM event_timestamp) AS quarter, COUNT(*) AS event_count FROM events GROUP BY year, quarter ORDER BY year, quarter; ``` This query groups events by year and quarter, providing a count of events for each period. ```text year | quarter | event_count ------+---------+------------- 2024 | 1 | 1 2024 | 2 | 1 2024 | 3 | 1 2024 | 4 | 1 2025 | 1 | 1 (5 rows) ``` ## Advanced examples ### Use `extract()` with different fields You can use `extract()` with various fields to analyze different components of timestamps: ```sql WITH sample_data(event_time) AS ( VALUES ('2024-03-15 14:30:45.123456+00'::TIMESTAMP WITH TIME ZONE), ('2024-06-22 09:15:30.987654+00'::TIMESTAMP WITH TIME ZONE), ('2024-11-07 23:59:59.999999+00'::TIMESTAMP WITH TIME ZONE) ) SELECT event_time, EXTRACT(CENTURY FROM event_time) AS century, EXTRACT(DECADE FROM event_time) AS decade, EXTRACT(YEAR FROM event_time) AS year, EXTRACT(QUARTER FROM event_time) AS quarter, EXTRACT(MONTH FROM event_time) AS month, EXTRACT(WEEK FROM event_time) AS week, EXTRACT(DAY FROM event_time) AS day, EXTRACT(HOUR FROM event_time) AS hour, EXTRACT(MINUTE FROM event_time) AS minute, EXTRACT(SECOND FROM event_time) AS second, EXTRACT(MILLISECONDS FROM event_time) AS milliseconds, EXTRACT(MICROSECONDS FROM event_time) AS microseconds FROM sample_data; ``` This query demonstrates how `extract()` works with different fields, ranging from `century` to `microseconds`. ```text event_time | century | decade | year | quarter | month | week | day | hour | minute | second | milliseconds | microseconds -------------------------------+---------+--------+------+---------+-------+------+-----+------+--------+-----------+--------------+-------------- 2024-03-15 14:30:45.123456+00 | 21 | 202 | 2024 | 1 | 3 | 11 | 15 | 14 | 30 | 45.123456 | 45123.456 | 45123456 2024-06-22 09:15:30.987654+00 | 21 | 202 | 2024 | 2 | 6 | 25 | 22 | 9 | 15 | 30.987654 | 30987.654 | 30987654 2024-11-07 23:59:59.999999+00 | 21 | 202 | 2024 | 4 | 11 | 45 | 7 | 23 | 59 | 59.999999 | 59999.999 | 59999999 (3 rows) ``` ### Use `extract()` with interval data When working with the `INTERVAL` type, the `extract()` function allows you to pull out specific parts of the interval, such as the number of years, months, days, hours, minutes, seconds, and so on. ```sql SELECT EXTRACT(DAYS FROM INTERVAL '2 years 3 months 15 days') AS days, EXTRACT(HOURS FROM INTERVAL '36 hours 30 minutes') AS hours, EXTRACT(MINUTES FROM INTERVAL '2 hours 45 minutes 30 seconds') AS minutes; ``` This query extracts the specified parts from the interval. Note that the `extract` function extracts only the value for the specified part in the interval. For example, `EXTRACT(DAYS FROM INTERVAL '2 years 3 months 15 days')` returns `15` for days, not the total number of days in the interval. ```text days | hours | minutes ------+-------+--------- 15 | 36 | 45 (1 row) ``` Additionally, it should be noted that for non-normalized intervals, the extracted values may not be as expected. A **normalized interval** automatically converts large units into their equivalent higher units. 
For example, an interval of `14 months` is normalized to `1 year 2 months` because 12 months make a year. A **non-normalized interval** keeps the units as specified, without converting to higher units. This is useful when you want to keep intervals in the same unit (like months or minutes) for easier manipulation or calculation.

When extracting values from non-normalized intervals, Postgres returns the remainder after converting to the next higher unit. This can lead to results that might seem counter-intuitive if you expect direct conversion without accounting for normalization. For example, consider this query and its output:

```sql
SELECT
    EXTRACT(MONTH FROM INTERVAL '32 months') AS months,
    EXTRACT(MINUTE FROM INTERVAL '80 minutes') AS minutes;
```

```text
 months | minutes
--------+---------
      8 |      20
(1 row)
```

**Interval '32 months'**:

- A year is composed of 12 months.
- 32 months can be broken down into 2 years and 8 months (since 32 ÷ 12 = 2 years with a remainder of 8 months).
- When you `EXTRACT(MONTH FROM INTERVAL '32 months')`, it returns 8 because that's the remaining months after accounting for the full years.

**Interval '80 minutes'**:

- An hour is composed of 60 minutes.
- 80 minutes can be broken down into 1 hour and 20 minutes (since 80 ÷ 60 = 1 hour with a remainder of 20 minutes).
- When you `EXTRACT(MINUTE FROM INTERVAL '80 minutes')`, it returns 20 because that's the remaining minutes after accounting for the full hour.

### Use `extract()` for time-based analysis

Let's use `extract()` to analyze user registration patterns for a hypothetical social media application:

```sql
CREATE TABLE user_registrations (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50),
    registration_time TIMESTAMP WITH TIME ZONE
);

INSERT INTO user_registrations (username, registration_time) VALUES
    ('user1', '2024-03-15 08:30:00+00'),
    ('user2', '2024-03-15 08:45:00+00'),
    ('user3', '2024-03-15 14:20:00+00'),
    ('user4', '2024-03-16 09:15:00+00'),
    ('user5', '2024-03-16 09:30:00+00'),
    ('user6', '2024-03-16 14:30:00+00'),
    ('user7', '2024-03-17 08:45:00+00'),
    ('user8', '2024-03-17 14:10:00+00'),
    ('user9', '2024-03-17 14:25:00+00'),
    ('user10', '2024-03-17 14:50:00+00');

-- Analyze registration patterns by day of week and hour
SELECT
    EXTRACT(ISODOW FROM registration_time) AS day_of_week,
    EXTRACT(HOUR FROM registration_time) AS hour_of_day,
    COUNT(*) AS registration_count
FROM user_registrations
GROUP BY day_of_week, hour_of_day
ORDER BY day_of_week, hour_of_day;
```

This query uses `extract()` to analyze user registration patterns by day of week and hour of day.

```text
 day_of_week | hour_of_day | registration_count
-------------+-------------+--------------------
           5 |           8 |                  2
           5 |          14 |                  1
           6 |           9 |                  2
           6 |          14 |                  1
           7 |           8 |                  1
           7 |          14 |                  3
(6 rows)
```

## Additional considerations

### Performance considerations

For large datasets, consider creating indexes on frequently extracted components to improve query performance. Index expressions must be immutable: because `event_timestamp` is a `timestamp with time zone` column, extracting from it directly depends on the session timezone, so convert to a fixed timezone first (UTC below is just an example choice):

```sql
CREATE INDEX idx_events_year_month ON events (
    EXTRACT(YEAR FROM event_timestamp AT TIME ZONE 'UTC'),
    EXTRACT(MONTH FROM event_timestamp AT TIME ZONE 'UTC')
);
```

This creates an index on the year and month components of the event timestamp, which can speed up queries that filter or group by these components.
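The planner will only consider this index when a query repeats the indexed expressions exactly. A minimal sketch, assuming the UTC-pinned index above:

```sql
-- Count events in March 2024, using the same expressions as the index
SELECT COUNT(*) AS march_2024_events
FROM events
WHERE EXTRACT(YEAR FROM event_timestamp AT TIME ZONE 'UTC') = 2024
  AND EXTRACT(MONTH FROM event_timestamp AT TIME ZONE 'UTC') = 3;
```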
## Resources

- [PostgreSQL documentation: Date/Time Functions and Operators](https://www.postgresql.org/docs/current/functions-datetime.html)
- [PostgreSQL documentation: Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html)

---

# Source: https://neon.com/llms/functions-introduction.txt

# Postgres functions

> The document outlines the use and implementation of PostgreSQL functions within the Neon database, detailing how to create, manage, and utilize these functions to enhance database operations.

## Source

- [Postgres functions HTML](https://neon.com/docs/functions/introduction): The original HTML version of this documentation

Get started with commonly-used Postgres functions with Neon's function guides. For other functions that Postgres supports, visit the official Postgres [Functions and Operators](https://www.postgresql.org/docs/current/functions.html) documentation.

## Aggregate functions

- [array_agg()](https://neon.com/docs/functions/array_agg): Aggregate elements into an array
- [avg()](https://neon.com/docs/functions/avg): Calculate the average of a set of values
- [count()](https://neon.com/docs/functions/count): Count rows or non-null values in a result set
- [max()](https://neon.com/docs/functions/max): Find the maximum value in a set of values
- [sum()](https://neon.com/docs/functions/sum): Calculate the sum of a set of values

## Array functions

- [array_length()](https://neon.com/docs/functions/array_length): Determine the length of an array

## Date / Time functions

- [age()](https://neon.com/docs/functions/age): Calculate the difference between timestamps or between a timestamp and the current date/time
- [current_timestamp](https://neon.com/docs/functions/current_timestamp): Get the current date and time
- [date_trunc()](https://neon.com/docs/functions/date_trunc): Truncate date/time values to a specified precision
- [extract()](https://neon.com/docs/functions/extract): Extract date and time components from timestamps and intervals
- [now()](https://neon.com/docs/functions/now): Get the current date and time

## JSON functions

- [array_to_json()](https://neon.com/docs/functions/array_to_json): Convert an SQL array to a JSON array
- [json()](https://neon.com/docs/functions/json): Convert text or binary data to JSON values
- [json_agg()](https://neon.com/docs/functions/json_agg): Aggregate values into a JSON array
- [json_array_elements()](https://neon.com/docs/functions/json_array_elements): Expand a JSON array into a set of rows
- [jsonb_array_elements()](https://neon.com/docs/functions/jsonb_array_elements): Expand a JSONB array into a set of rows
- [json_build_object()](https://neon.com/docs/functions/json_build_object): Build a JSON object out of a variadic argument list
- [json_each()](https://neon.com/docs/functions/json_each): Expand JSON into a record per key-value pair
- [json_exists()](https://neon.com/docs/functions/json_exists): Check for Values in JSON Data Using SQL/JSON Path Expressions
- [json_extract_path()](https://neon.com/docs/functions/json_extract_path): Extract a JSON sub-object at the specified path
- [json_extract_path_text()](https://neon.com/docs/functions/json_extract_path_text): Extract a JSON sub-object at the specified path as text
- [json_object()](https://neon.com/docs/functions/json_object): Create a JSON object from key-value pairs
- [json_populate_record()](https://neon.com/docs/functions/json_populate_record): Cast a JSON object to a record
- [json_query()](https://neon.com/docs/functions/json_query): Extract and
Transform JSON Values with SQL/JSON Path Expressions - [json_scalar()](https://neon.com/docs/functions/json_scalar): Convert Text and Binary Data to JSON Values - [json_serialize()](https://neon.com/docs/functions/json_serialize): Convert JSON Values to Text or Binary Format - [json_table()](https://neon.com/docs/functions/json_table): Transform JSON data into relational views - [json_to_record()](https://neon.com/docs/functions/json_to_record): Convert a JSON object to a record - [json_value()](https://neon.com/docs/functions/json_value): Extract and Convert JSON Scalar Values - [jsonb_each()](https://neon.com/docs/functions/jsonb_each): Expand JSONB into a record per key-value pair - [jsonb_extract_path()](https://neon.com/docs/functions/jsonb_extract_path): Extract a JSONB sub-object at the specified path - [jsonb_extract_path_text()](https://neon.com/docs/functions/jsonb_extract_path_text): Extract a JSONB sub-object at the specified path as text - [jsonb_object()](https://neon.com/docs/functions/jsonb_object): Create a JSONB object from key-value pairs - [jsonb_populate_record()](https://neon.com/docs/functions/jsonb_populate_record): Cast a JSONB object to a record - [jsonb_to_record()](https://neon.com/docs/functions/jsonb_to_record): Convert a JSONB object to a record ## Mathematical functions - [abs()](https://neon.com/docs/functions/math-abs): Calculate the absolute value of a number - [random()](https://neon.com/docs/functions/math-random): Generate a random number between 0 and 1 - [round()](https://neon.com/docs/functions/math-round): Round numbers to a specified precision ## String functions - [concat()](https://neon.com/docs/functions/concat): Concatenate strings - [lower()](https://neon.com/docs/functions/lower): Convert a string to lowercase - [substring()](https://neon.com/docs/functions/substring): Extract a substring from a string - [regexp_match()](https://neon.com/docs/functions/regexp_match): Extract substrings matching a regular expression pattern - [regexp_replace()](https://neon.com/docs/functions/regexp_replace): Replace substrings matching a regular expression pattern - [trim()](https://neon.com/docs/functions/trim): Remove leading and trailing characters from a string ## Window functions - [dense_rank()](https://neon.com/docs/functions/dense_rank): Return the rank of the current row without gaps - [lag()](https://neon.com/docs/functions/window-lag): Access values from previous rows in a result set - [lead()](https://neon.com/docs/functions/window-lead): Access values from subsequent rows in a result set - [rank()](https://neon.com/docs/functions/window-rank): Assign ranks to rows within a result set --- # Source: https://neon.com/llms/functions-json.txt # Postgres json() Function > The document details the usage of the Postgres `json()` function within Neon, explaining how to store, query, and manipulate JSON data in a PostgreSQL database environment. ## Source - [Postgres json() Function HTML](https://neon.com/docs/functions/json): The original HTML version of this documentation The `json()` function provides a robust way to convert text or binary data into `JSON` values. This new function offers enhanced control over `JSON` parsing, including options for handling duplicate keys and encoding specifications. 
Use `json()` when you need to: - Convert text strings into `JSON` values - Parse UTF8-encoded binary data as `JSON` - Validate `JSON` structure during conversion - Control handling of duplicate object keys ## Function signature The `json()` function uses the following syntax: ```sql json( expression -- Input text or bytea [ FORMAT JSON [ ENCODING UTF8 ]] -- Optional format specification [ { WITH | WITHOUT } UNIQUE [ KEYS ]] -- Optional duplicate key handling ) → json ``` Parameters: - `expression`: Input text or bytea string to convert - `FORMAT JSON`: Explicitly specifies `JSON` format (optional) - `ENCODING UTF8`: Specifies UTF8 encoding for bytea input (optional) - `WITH|WITHOUT UNIQUE [KEYS]`: Controls duplicate key handling (optional) ## Example usage Let's explore various ways to use the `json()` function with different inputs and options. ### Basic JSON conversion ```sql -- Convert a simple string to JSON SELECT json('{"name": "Alice", "age": 30}'); ``` ```text # | json -------------------------------- 1 | {"name": "Alice", "age": 30} ``` ```sql -- Convert a JSON array SELECT json('[1, 2, 3, "four", true, null]'); ``` ```text # | json -------------------------------- 1 | [1, 2, 3, "four", true, null] ``` ```sql -- Convert nested JSON structures SELECT json('{ "user": { "name": "Bob", "contacts": { "email": "bob@example.com", "phone": "+1-555-0123" } }, "active": true }'); ``` ```text # | json --------------------------------------------------------------------------------------------------------------------- 1 | { "user": { "name": "Bob", "contacts": { "email": "bob@example.com", "phone": "+1-555-0123" } }, "active": true } ``` ### Handling duplicate keys ```sql -- Without UNIQUE keys (allows duplicates) SELECT json('{"a": 1, "b": 2, "a": 3}' WITHOUT UNIQUE); ``` ```text # | json ---------------------------- 1 | {"a": 1, "b": 2, "a": 3} ``` ```sql -- With UNIQUE keys SELECT json('{"a": 1, "b": 2, "c": 3}' WITH UNIQUE); ``` ```text # | json ---------------------------- 1 | {"a": 1, "b": 2, "c": 3} ``` ```sql -- This will raise an error due to duplicate 'a' key SELECT json('{"a": 1, "b": 2, "a": 3}' WITH UNIQUE); ``` ```text ERROR: duplicate JSON object key value (SQLSTATE 22030) ``` ### Working with binary data ```sql -- Convert UTF8-encoded bytea to JSON SELECT json( '\x7b226e616d65223a22416c696365227d'::bytea FORMAT JSON ENCODING UTF8 ); ``` ```text # | json --------------------- 1 | {"name": "Alice"} ``` ```sql -- Convert bytea with explicit format and uniqueness check SELECT json( '\x7b226964223a312c226e616d65223a22426f62227d'::bytea FORMAT JSON ENCODING UTF8 WITH UNIQUE ); ``` ```text # | json ---------------------------- 1 | {"id": 1, "name": "Bob"} ``` ### Combining with other JSON functions: ```sql -- Convert and extract SELECT json('{"users": [{"id": 1}, {"id": 2}]}')->'users'->0->>'id' AS user_id; ``` ```text # | user_id ----------- 1 | 1 ``` ```sql -- Convert and check structure SELECT json_typeof(json('{"a": [1,2,3]}')->'a'); ``` ```text # | json_typeof --------------- 1 | array ``` ## Error handling The `json()` function performs validation during conversion and can raise several types of errors: ```sql -- Invalid JSON syntax (raises error) SELECT json('{"name": "Alice" "age": 30}'); ``` ```text ERROR: invalid input syntax for type json (SQLSTATE 22P02) ``` ```sql -- Invalid UTF8 encoding (raises error) SELECT json('\xFFFFFFFF'::bytea FORMAT JSON ENCODING UTF8); ``` ```text ERROR: invalid byte sequence for encoding "UTF8": 0xff (SQLSTATE 22021) ``` ## Common use cases 
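### Parsing binary payloads

If raw JSON arrives as UTF8-encoded bytes (for example, from a message queue or webhook), `json()` can parse it directly at query time. A minimal sketch, assuming a hypothetical `webhook_events` table with a `bytea` column:

```sql
-- Hypothetical table storing raw webhook payloads as bytea
CREATE TABLE webhook_events (
    id SERIAL PRIMARY KEY,
    body bytea
);

-- '\x7b226576656e74223a227369676e7570227d' is the UTF8 encoding of {"event":"signup"}
INSERT INTO webhook_events (body)
VALUES ('\x7b226576656e74223a227369676e7570227d'::bytea);

-- Parse the raw bytes as JSON when querying
SELECT json(body FORMAT JSON ENCODING UTF8)->>'event' AS event_type
FROM webhook_events;
```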
### Data validation ```sql -- Validate JSON structure before insertion CREATE TABLE user_profiles ( id SERIAL PRIMARY KEY, profile_data json ); -- Insert with validation INSERT INTO user_profiles (profile_data) VALUES ( json('{ "name": "Alice", "age": 30, "interests": ["reading", "hiking"] }' WITH UNIQUE) ); ``` ## Additional considerations 1. Use appropriate input validation: - Use `WITH UNIQUE` when duplicate keys should be prevented - Consider `FORMAT JSON` for explicit parsing requirements 2. Error handling best practices: - Implement proper error handling for invalid JSON - Validate input before bulk operations ## Learn more - [PostgreSQL JSON functions documentation](https://www.postgresql.org/docs/current/functions-json.html) --- # Source: https://neon.com/llms/functions-json_agg.txt # Postgres json_agg() function > The document explains the usage of the Postgres `json_agg()` function within Neon, detailing how it aggregates multiple rows into a JSON array, facilitating data manipulation and retrieval in JSON format. ## Source - [Postgres json_agg() function HTML](https://neon.com/docs/functions/json_agg): The original HTML version of this documentation The Postgres `json_agg()` function is an aggregate function that collects values from multiple rows and returns them as a single JSON array. It's particularly useful when you need to denormalize data for performance reasons or prepare data for front-end applications and APIs. For example, you might use it to aggregate product reviews for an e-commerce application or collect all posts by a user on a social media platform. ## Function signature The `json_agg()` function has this simple form: ```sql json_agg(expression) -> json ``` - `expression`: The value to be aggregated into a JSON array. This can be a column, a complex expression, or even a subquery. When used in this manner, the order of the values in the resulting JSON array is not guaranteed. Postgres supports an extended syntax for aggregating values in a specific order. ```sql json_agg(expression ORDER BY sort_expression [ASC | DESC] [NULLS { FIRST | LAST }]) -> json ``` - `expression`: The value to be aggregated into a JSON array. - `ORDER BY`: Specifies the order in which the values should be aggregated. - `sort_expression`: The expression to sort by. - `ASC | DESC`: Specifies ascending or descending order (default is ASC). - `NULLS { FIRST | LAST }`: Specifies whether nulls should be first or last in the ordering (default depends on `ASC` or `DESC`). ## Example usage Consider an `orders` table with columns `order_id`, `product_name`, and `quantity`. We can use `json_agg()` to create a JSON array of all products in each order. ```sql WITH orders AS ( SELECT * FROM ( VALUES (1, 'Widget A', 2), (1, 'Widget B', 1), (2, 'Widget C', 3), (2, 'Widget D', 2) ) AS t(order_id, product_name, quantity) ) SELECT order_id, json_agg(json_build_object('product', product_name, 'quantity', quantity)) AS products FROM orders GROUP BY order_id; ``` This query groups the orders by `order_id` and creates a JSON array of products for each order. 
```text
 order_id |                                       products
----------+--------------------------------------------------------------------------------------
        1 | [{"product" : "Widget A", "quantity" : 2}, {"product" : "Widget B", "quantity" : 1}]
        2 | [{"product" : "Widget C", "quantity" : 3}, {"product" : "Widget D", "quantity" : 2}]
(2 rows)
```

## Advanced examples

### Ordered aggregation

You can specify an order for the aggregated values, as suggested in the function signature section. Here's an example:

```sql
WITH reviews AS (
  SELECT 1 AS product_id, 'Great product!' AS comment, 5 AS rating, '2023-01-15'::date AS review_date
  UNION ALL
  SELECT 1, 'Could be better', 3, '2023-02-01'::date
  UNION ALL
  SELECT 1, 'Awesome!', 5, '2023-01-20'::date
  UNION ALL
  SELECT 2, 'Not bad', 4, '2023-01-10'::date
)
SELECT
  product_id,
  json_agg(
    comment || ' (' || rating || ' stars)'
    ORDER BY review_date DESC
  ) AS reviews
FROM reviews
GROUP BY product_id;
```

This query aggregates product reviews into a JSON array, ordered by the review date in descending order.

```text
 product_id |                                     reviews
------------+---------------------------------------------------------------------------------
          1 | ["Could be better (3 stars)", "Awesome! (5 stars)", "Great product! (5 stars)"]
          2 | ["Not bad (4 stars)"]
(2 rows)
```

### Combining with other JSON functions

`json_agg()` can be combined with other JSON functions for more complex transformations:

```sql
WITH sales AS (
  SELECT 'North' AS region, 'Q1' AS quarter, 100000 AS amount
  UNION ALL
  SELECT 'North', 'Q2', 120000
  UNION ALL
  SELECT 'South', 'Q1', 80000
  UNION ALL
  SELECT 'South', 'Q2', 90000
)
SELECT
  region,
  json_agg(
    json_build_object('quarter', quarter, 'amount', amount)
    ORDER BY quarter DESC
  ) AS quarterly_sales
FROM sales
GROUP BY region;
```

This query uses `json_build_object()` in combination with `json_agg()` to create an array of quarterly sales data for each region.

```text
 region |                                quarterly_sales
--------+--------------------------------------------------------------------------------
 North  | [{"quarter" : "Q2", "amount" : 120000}, {"quarter" : "Q1", "amount" : 100000}]
 South  | [{"quarter" : "Q2", "amount" : 90000}, {"quarter" : "Q1", "amount" : 80000}]
(2 rows)
```

## Additional considerations

### Performance implications

While `json_agg()` is powerful for creating JSON structures, it can be memory-intensive for large datasets since its output size increases linearly with the number of rows. When working with very large tables, consider using pagination or limiting the number of rows aggregated.

### Alternative functions

- `array_agg()`: Aggregates values into a Postgres array instead of a JSON array.
- `jsonb_agg()`: Similar to `json_agg()`, but returns a `jsonb` type, which is more efficient for storage and processing.
- `json_agg_strict()`: Aggregates values into a JSON array, skipping over the NULL values.

## Resources

- [PostgreSQL documentation: Aggregate Functions](https://www.postgresql.org/docs/current/functions-aggregate.html)
- [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html)

---

# Source: https://neon.com/llms/functions-json_array_elements.txt

# Postgres json_array_elements() function

> The document explains the usage of the Postgres `json_array_elements()` function, detailing how it can be utilized within Neon to expand a JSON array into a set of JSON values.
## Source

- [Postgres json_array_elements() function HTML](https://neon.com/docs/functions/json_array_elements): The original HTML version of this documentation

You can use the `json_array_elements` function to expand a `JSON` array into a set of rows, each containing one element of the array. It is a simpler option than writing complex looping logic. It is also more efficient than performing the same operation on the application side, since it reduces data transfer and processing overhead.

## Function signature

```sql
json_array_elements(json)
```

## `json_array_elements` example

Suppose you have a `developers` table with information about developers:

**developers**

```sql
CREATE TABLE developers (
  id INT PRIMARY KEY,
  name TEXT,
  skills JSON
);

INSERT INTO developers (id, name, skills) VALUES
  (1, 'Alice', '["Java", "Python", "SQL"]'),
  (2, 'Bob', '["C++", "JavaScript"]'),
  (3, 'Charlie', '["HTML", "CSS", "React"]');
```

```text
| id | name    | skills                    |
|----|---------|---------------------------|
| 1  | Alice   | ["Java", "Python", "SQL"] |
| 2  | Bob     | ["C++", "JavaScript"]     |
| 3  | Charlie | ["HTML", "CSS", "React"]  |
```

Now, let's say you want to extract a row for each skill from the skills `JSON` array. You can use `json_array_elements` to do that:

```sql
SELECT id, name, skill
FROM developers, json_array_elements(skills) AS skill;
```

This query returns the following result:

```text
| id | name    | skill        |
|----|---------|--------------|
| 1  | Alice   | "Java"       |
| 1  | Alice   | "Python"     |
| 1  | Alice   | "SQL"        |
| 2  | Bob     | "C++"        |
| 2  | Bob     | "JavaScript" |
| 3  | Charlie | "HTML"       |
| 3  | Charlie | "CSS"        |
| 3  | Charlie | "React"      |
```
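If you also need each element's position within the array, set-returning functions in Postgres support `WITH ORDINALITY`. A minimal sketch using the `developers` table above (the `skill_index` alias is our own naming):

```sql
-- Number each skill by its position in the JSON array
SELECT id, name, skill, skill_index
FROM developers,
     json_array_elements(skills) WITH ORDINALITY AS t(skill, skill_index);
```

Here, `skill_index` starts at 1 for the first element of each developer's `skills` array.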
For example:

```sql
SELECT id, name, size, color
FROM products AS p,
     json_array_elements(p.details -> 'sizes') AS size,
     json_array_elements(p.details -> 'colors') AS color
WHERE name = 'T-Shirt';
```

This query returns the following values:

```text
| id | name    | size | color   |
|----|---------|------|---------|
| 1  | T-Shirt | "S"  | "Red"   |
| 1  | T-Shirt | "S"  | "Blue"  |
| 1  | T-Shirt | "S"  | "Green" |
| 1  | T-Shirt | "M"  | "Red"   |
| 1  | T-Shirt | "M"  | "Blue"  |
| 1  | T-Shirt | "M"  | "Green" |
| 1  | T-Shirt | "L"  | "Red"   |
| 1  | T-Shirt | "L"  | "Blue"  |
| 1  | T-Shirt | "L"  | "Green" |
| 1  | T-Shirt | "XL" | "Red"   |
| 1  | T-Shirt | "XL" | "Blue"  |
| 1  | T-Shirt | "XL" | "Green" |
```

### Filtering with `json_array_elements`

You can use the `json_array_elements_text` function to extract the colors from the `JSON` data and then filter the products based on a specific color, as in this example:

```sql
SELECT *
FROM products
WHERE 'Blue' IN (
  SELECT json_array_elements_text(details->'colors')
);
```

This query returns the following values:

```text
| id | name    | details                                                              |
|----|---------|----------------------------------------------------------------------|
| 1  | T-Shirt | {"sizes": ["S", "M", "L", "XL"], "colors": ["Red", "Blue", "Green"]} |
| 4  | Jeans   | {"sizes": ["28", "30", "32", "34"], "colors": ["Blue", "Black"]}     |
```

### Handling `NULL` in `json_array_elements`

This example inserts another product (`Socks`) whose `sizes` array contains a `null` value:

**products**

```sql
INSERT INTO products (id, name, details) VALUES
  (6, 'Socks', '{"sizes": ["S", null, "L", "XL"], "colors": ["White", "Black", "Gray"]}');
```

```text
| id | name  | details                                                                 |
|----|-------|-------------------------------------------------------------------------|
| 6  | Socks | {"sizes": ["S", null, "L", "XL"], "colors": ["White", "Black", "Gray"]} |
```

Querying for `Socks` shows how `null` values in an array are handled:

```sql
SELECT id, name, size
FROM products AS p,
     json_array_elements(p.details -> 'sizes') AS size
WHERE name = 'Socks';
```

This query returns the following values:

```text
| id | name  | size |
|----|-------|------|
| 6  | Socks | "S"  |
| 6  | Socks | null |
| 6  | Socks | "L"  |
| 6  | Socks | "XL" |
```

### Nested arrays in `json_array_elements`

You can also handle nested arrays with `json_array_elements`. Consider a scenario where each product has multiple variants, and each variant has an array of sizes and an array of colors. This example uses an `electronics_products` table, shown below.
**electronics_products**

```sql
CREATE TABLE electronics_products (
  id INTEGER PRIMARY KEY,
  name TEXT,
  details JSON
);

INSERT INTO electronics_products (id, name, details) VALUES
  (1, 'Laptop', '{"variants": [{"model": "A", "sizes": ["13 inch", "15 inch"], "colors": ["Silver", "Black"]}, {"model": "B", "sizes": ["15 inch", "17 inch"], "colors": ["Gray", "White"]}]}'),
  (2, 'Smartphone', '{"variants": [{"model": "X", "sizes": ["5.5 inch", "6 inch"], "colors": ["Black", "Gold"]}, {"model": "Y", "sizes": ["6.2 inch", "6.7 inch"], "colors": ["Blue", "Red"]}]}');
```

```text
| id | name       | details                                                                                                                                                                      |
|----|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1  | Laptop     | {"variants": [{"model": "A", "sizes": ["13 inch", "15 inch"], "colors": ["Silver", "Black"]}, {"model": "B", "sizes": ["15 inch", "17 inch"], "colors": ["Gray", "White"]}]} |
| 2  | Smartphone | {"variants": [{"model": "X", "sizes": ["5.5 inch", "6 inch"], "colors": ["Black", "Gold"]}, {"model": "Y", "sizes": ["6.2 inch", "6.7 inch"], "colors": ["Blue", "Red"]}]}   |
```

To handle the nested arrays and extract information about each variant, you can use the `json_array_elements` function like this:

```sql
SELECT
  id,
  name,
  variant->>'model' AS model,
  size,
  color
FROM electronics_products,
     json_array_elements(details->'variants') AS variant,
     json_array_elements_text(variant->'sizes') AS t1(size),
     json_array_elements_text(variant->'colors') AS t2(color);
```

This query returns the following values:

```text
| id | name       | model | size     | color  |
|----|------------|-------|----------|--------|
| 1  | Laptop     | A     | 13 inch  | Silver |
| 1  | Laptop     | A     | 13 inch  | Black  |
| 1  | Laptop     | A     | 15 inch  | Silver |
| 1  | Laptop     | A     | 15 inch  | Black  |
| 1  | Laptop     | B     | 15 inch  | Gray   |
| 1  | Laptop     | B     | 15 inch  | White  |
| 1  | Laptop     | B     | 17 inch  | Gray   |
| 1  | Laptop     | B     | 17 inch  | White  |
| 2  | Smartphone | X     | 5.5 inch | Black  |
| 2  | Smartphone | X     | 5.5 inch | Gold   |
| 2  | Smartphone | X     | 6 inch   | Black  |
| 2  | Smartphone | X     | 6 inch   | Gold   |
| 2  | Smartphone | Y     | 6.2 inch | Blue   |
| 2  | Smartphone | Y     | 6.2 inch | Red    |
| 2  | Smartphone | Y     | 6.7 inch | Blue   |
| 2  | Smartphone | Y     | 6.7 inch | Red    |
```

## Additional considerations

This section outlines additional considerations, including alternative functions and `JSON` array order.

### Alternatives to `json_array_elements`

- `jsonb_array_elements` - Consider this variant for performance benefits with `jsonb` data. `jsonb_array_elements` only accepts `jsonb` data, while `json_array_elements` works with both `json` and `jsonb`. It is typically faster, especially for larger arrays, due to its optimization for the binary `jsonb` format.
- `json_array_elements_text` - While `json_array_elements` returns each extracted element as a `JSON` value, `json_array_elements_text` returns each extracted element as a plain text _string_, as the sketch below shows.
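A minimal runnable sketch of that difference, using inline JSON literals rather than the tables above:

```sql
-- json_array_elements returns each element as a json value,
-- so strings keep their surrounding quotes
SELECT json_array_elements('["Java", "Python"]') AS skill_json;
-- skill_json: "Java", "Python"

-- json_array_elements_text returns each element as plain text,
-- with the quotes stripped
SELECT json_array_elements_text('["Java", "Python"]') AS skill_text;
-- skill_text: Java, Python
```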
### Ordering `json_array_elements` output using `WITH ORDINALITY` If the order of the elements is important, consider using the `WITH ORDINALITY` option: ```sql SELECT id, name, skill, ordinality FROM developers, json_array_elements(skills) WITH ORDINALITY AS t(skill, ordinality); ``` This query returns the following values: ```text | id | name | skill | ordinality | |----|---------|--------------|------------| | 1 | Alice | "Java" | 1 | | 1 | Alice | "Python" | 2 | | 1 | Alice | "SQL" | 3 | | 2 | Bob | "C++" | 1 | | 2 | Bob | "JavaScript" | 2 | | 3 | Charlie | "HTML" | 1 | | 3 | Charlie | "CSS" | 2 | | 3 | Charlie | "React" | 3 | ``` The `WITH ORDINALITY` option in the query adds an `ordinality` column representing the original order of the skills in the array. ## Resources - [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html) - [PostgreSQL documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html) --- # Source: https://neon.com/llms/functions-json_build_object.txt # Postgres json_build_object() function > The document explains the usage of the Postgres `json_build_object()` function, detailing how it constructs JSON objects from key-value pairs within the Neon database environment. ## Source - [Postgres json_build_object() function HTML](https://neon.com/docs/functions/json_build_object): The original HTML version of this documentation `json_build_object` is used to construct a JSON object from a set of key-value pairs, creating a JSON representation of a row or set of rows. This has potential performance benefits compared to converting query results to JSON on the application side. ## Function signature ```sql json_build_object ( VARIADIC "any" ) → json ``` ## `json_build_object` example Let's consider a scenario where we have a table storing information about users: **users** ```text | id | name | age | city |----|----------|-----|---------- | 1 | John Doe | 30 | New York | | 2 | Jane Doe | 25 | London | ``` Create the `users` table and insert some data into it: ```sql CREATE TABLE users ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, age INTEGER, city TEXT ); INSERT INTO users (name, age, city) VALUES ('John Doe', 30, 'New York'), ('Jane Doe', 25, 'London'); ``` Use `json_build_object` to create a JSON structure with user information: ```sql SELECT id, json_build_object( 'name', name, 'age', age, 'city', city ) AS user_data FROM users; ``` This query returns the following results: ```text | id | user_data |----|-------------------------------------------------------- | 1 | {"name" : "John Doe", "age" : 30, "city" : "New York"} | 2 | {"name" : "Jane Doe", "age" : 25, "city" : "London"} ``` ## Advanced examples ### Nested objects with `json_build_object` Let's say we have a table of products with an `attributes` column containing JSON data: **products** ```text | id | name | price | description | category | attributes |----|------------|-------|-----------------------------------|----------|---------------------------------------------------- | 1 | T-Shirt | 25.99 | A comfortable cotton T-Shirt | Clothing | {"size": "Medium", "color": "Blue", "rating": 4.5} | 2 | Coffee Mug | 12.99 | A ceramic mug with a funny design | Kitchen | {"size": "Small", "color": "White", "rating": 3.8} | 3 | Sneakers | 49.99 | Sporty sneakers for everyday use | Footwear | {"size": "10", "color": "Black", "rating": 4.2} ``` Create the `products` table and insert some data into it: ```sql CREATE TABLE products ( id SERIAL 
PRIMARY KEY,
  name TEXT NOT NULL,
  price DECIMAL(5, 2) NOT NULL,
  description TEXT,
  category TEXT,
  attributes JSON
);

INSERT INTO products (name, price, description, category, attributes) VALUES
  ('T-Shirt', 25.99, 'A comfortable cotton T-Shirt', 'Clothing',
    json_build_object(
      'color', 'Blue',
      'size', 'Medium',
      'rating', 4.5
    )),
  ('Coffee Mug', 12.99, 'A ceramic mug with a funny design', 'Kitchen',
    json_build_object(
      'color', 'White',
      'size', 'Small',
      'rating', 3.8
    )),
  ('Sneakers', 49.99, 'Sporty sneakers for everyday use', 'Footwear',
    json_build_object(
      'color', 'Black',
      'size', '10',
      'rating', 4.2
    ));
```

Use `json_build_object` to build a nested JSON object that represents the details of individual products:

```sql
SELECT
  id,
  name,
  price,
  json_build_object(
    'category', category,
    'description', description,
    'attributes', json_build_object(
      'color', attributes->>'color',
      'size', attributes->>'size'
    )
  ) AS details
FROM products;
```

This query returns the following results:

```text
| id | name       | price | details
|----|------------|-------|-------------------------------------------------------------------------------------------------------------------------------------
| 1  | T-Shirt    | 25.99 | {"category" : "Clothing", "description" : "A comfortable cotton T-Shirt", "attributes" : {"color" : "Blue", "size" : "Medium"}}
| 2  | Coffee Mug | 12.99 | {"category" : "Kitchen", "description" : "A ceramic mug with a funny design", "attributes" : {"color" : "White", "size" : "Small"}}
| 3  | Sneakers   | 49.99 | {"category" : "Footwear", "description" : "Sporty sneakers for everyday use", "attributes" : {"color" : "Black", "size" : "10"}}
```

### Order `json_build_object` output

Combine `json_build_object` with `ORDER BY` to sort the results based on a specific attribute within the JSON structure. For example, you can build a `JSON` structure with `json_build_object` from the contents of the above `products` table, and then order the results based on `rating`.

```sql
SELECT
  id,
  name,
  price,
  json_build_object(
    'category', category,
    'description', description,
    'attributes', json_build_object(
      'color', attributes->>'color',
      'size', attributes->>'size',
      'rating', attributes->>'rating'
    )
  ) AS details
FROM products
ORDER BY (attributes->>'rating')::NUMERIC DESC;
```

The `ORDER BY` clause sorts the results in descending order of rating.
This query returns the following results:

```text
| id | name       | price | details
|----|------------|-------|-------------------------------------------------------------------------------------------------------------------------------------------------------
| 1  | T-Shirt    | 25.99 | {"category" : "Clothing", "description" : "A comfortable cotton T-Shirt", "attributes" : {"color" : "Blue", "size" : "Medium", "rating" : "4.5"}}
| 3  | Sneakers   | 49.99 | {"category" : "Footwear", "description" : "Sporty sneakers for everyday use", "attributes" : {"color" : "Black", "size" : "10", "rating" : "4.2"}}
| 2  | Coffee Mug | 12.99 | {"category" : "Kitchen", "description" : "A ceramic mug with a funny design", "attributes" : {"color" : "White", "size" : "Small", "rating" : "3.8"}}
```

### Grouped `json_build_object` output

To create a `JSON` object that groups the total price for each category of products in the products table:

```sql
SELECT
  category,
  json_build_object(
    'total_price', sum(price)
  ) AS category_total_price
FROM products
GROUP BY category;
```

This query returns the following results:

```text
| category | category_total_price
|----------|-------------------------
| Kitchen  | {"total_price" : 12.99}
| Clothing | {"total_price" : 25.99}
| Footwear | {"total_price" : 49.99}
```

## Additional considerations

### Performance and indexing

The performance of `json_build_object` depends on various factors, including the number of key-value pairs and the level of nesting (deeply nested objects are more expensive to build). Consider using the `JSONB` data type with `jsonb_build_object` for better performance. If your `JSON` objects have nested structures, indexing specific paths within the nested data can be beneficial for targeted queries.

### Alternative functions

Depending on your requirements, you might want to consider similar functions:

- [json_object](https://neon.com/docs/functions/json_object) - Builds a JSON object out of a text array.
- `json_agg` - Aggregates values into a JSON array.
- `row_to_json` - Returns a row as a JSON object.
- `json_object_agg` - Aggregates key-value pairs into a JSON object.

## Resources

- [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html)
- [PostgreSQL documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html)

---

# Source: https://neon.com/llms/functions-json_each.txt

# Postgres json_each() function

> The document details the usage of the Postgres `json_each()` function within Neon, explaining how it expands JSON objects into a set of key-value pairs, facilitating data manipulation and retrieval.

## Source

- [Postgres json_each() function HTML](https://neon.com/docs/functions/json_each): The original HTML version of this documentation

The `json_each` function in Postgres is used to expand a `JSON` object into a set of key-value pairs. It is useful when you need to iterate over a `JSON` object's keys and values, such as when you're working with dynamic `JSON` structures where the schema is not fixed. Another important use case is performing data transformations and analytics.

## Function signature

```sql
json_each(json JSON) -> SETOF record(key text, value json)
```

The function returns a set of rows, each containing a key and the corresponding value for each field in the input `JSON` object. The key is of type `text`, while the value is of type `json`.

## Example usage

Consider a `JSON` object representing a user's profile information.
The `JSON` data will have multiple attributes and might look like this: ```json { "username": "johndoe", "age": 30, "email": "johndoe@example.com" } ``` We can go over all the fields in the profile `JSON` object using `json_each`, and produce a row for each key-value pair. ```sql SELECT key, value FROM json_each('{"username": "johndoe", "age": 30, "email": "johndoe@example.com"}'); ``` This query returns the following results: ```text | key | value | |----------|-----------------------| | username | "johndoe" | | age | 30 | | email | "johndoe@example.com" | ``` ## Advanced examples ### `json_each` custom column names You can use `AS` to specify custom column names for the key and value columns. ```sql SELECT attr_name, attr_value FROM json_each('{"username": "johndoe", "age": 30, "email": "johndoe@example.com"}') AS user_data(attr_name, attr_value); ``` This query returns the following results: ```text | attr_name | attr_value | |-----------|-----------------------| | username | "johndoe" | | age | 30 | | email | "johndoe@example.com" | ``` ### Use `json_each` as a table or row source Since `json_each` returns a set of rows, you can use it as a table source in a `FROM` clause. This lets us join the expanded `JSON` data in the output with other tables. Here, we're joining each row in the `user_data` table with the output of `json_each`: ```sql CREATE TABLE user_data ( id INT, profile JSON ); INSERT INTO user_data (id, profile) VALUES (123, '{"username": "johndoe", "age": 30, "email": "johndoe@example.com"}'), (140, '{"username": "mikesmith", "age": 40, "email": "mikesmith@example.com"}'); SELECT id, key, value FROM user_data, json_each(user_data.profile); ``` This query returns the following results: ```text | id | key | value | |-----|----------|-------------------------| | 123 | username | "johndoe" | | 123 | age | 30 | | 123 | email | "johndoe@example.com" | | 140 | username | "mikesmith" | | 140 | age | 40 | | 140 | email | "mikesmith@example.com" | ``` ## Additional considerations ### Performance implications When working with large `JSON` objects, `json_each` may lead to performance overhead, as it expands each key-value pair into a separate row. ### Alternative functions - `json_each_text` - Similar functionality to `json_each` but returns the value as a text type instead of `JSON`. - `json_object_keys` - It returns only the set of keys in the `JSON` object, without the values. - `jsonb_each` - It provides the same functionality as `json_each`, but accepts `JSONB` input instead of `JSON`. ## Resources - [PostgreSQL documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html) --- # Source: https://neon.com/llms/functions-json_exists.txt # Postgres JSON_EXISTS() Function > The document details the usage of the Postgres JSON_EXISTS() function within Neon, explaining its syntax and application for checking the existence of specific keys or values in JSON data. ## Source - [Postgres JSON_EXISTS() Function HTML](https://neon.com/docs/functions/json_exists): The original HTML version of this documentation The `JSON_EXISTS()` function introduced in PostgreSQL 17 provides a powerful way to check for the existence of values within `JSON` data using `SQL/JSON` path expressions. This function is particularly useful for validating `JSON` structure and implementing conditional logic based on the presence of specific `JSON` elements. 
Use `JSON_EXISTS()` when you need to: - Validate the presence of specific `JSON` paths - Implement conditional logic based on `JSON` content - Filter `JSON` data based on complex conditions - Verify `JSON` structure before processing ## Function signature The `JSON_EXISTS()` function uses the following syntax: ```sql JSON_EXISTS( context_item, -- JSON/JSONB input path_expression -- SQL/JSON path expression [ PASSING { value AS varname } [, ...] ] [{ TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ] ) → boolean ``` Parameters: - `context_item`: `JSON` or `JSONB` input to evaluate - `path_expression`: `SQL/JSON` path expression to check - `PASSING`: Optional clause to pass variables for use in the path expression - `ON ERROR`: Controls behavior when path evaluation fails (defaults to `FALSE`) ## Example usage Let's explore various ways to use the `JSON_EXISTS()` function with different scenarios and options. ### Basic existence checks ```sql -- Check if a simple key exists SELECT JSON_EXISTS('{"name": "Alice", "age": 30}', '$.name'); ``` ```text # | json_exists -------------- 1 | t ``` ```sql -- Check for a nested key SELECT JSON_EXISTS( '{"user": {"details": {"email": "alice@example.com"}}}', '$.user.details.email' ); ``` ```text # | json_exists -------------- 1 | t ``` ### Array operations ```sql -- Check if array contains any elements SELECT JSON_EXISTS('{"numbers": [1,2,3,4,5]}', '$.numbers[*]'); ``` ```text # | json_exists -------------- 1 | t ``` ```sql -- Check for specific array element SELECT JSON_EXISTS('{"tags": ["postgres", "json", "database"]}', '$.tags[3]'); ``` ```text # | json_exists -------------- 1 | f ``` ### Conditional checks ```sql -- Check for values meeting a condition SELECT JSON_EXISTS( '{"scores": [85, 92, 78, 95]}', '$.scores[*] ? (@ > 90)' ); ``` ```text # | json_exists -------------- 1 | t ``` ### Using PASSING clause ```sql -- Check using a variable SELECT JSON_EXISTS( '{"temperature": 25}', 'strict $.temperature ? (@ > $threshold)' PASSING 30 AS threshold ); ``` ```text # | json_exists -------------- 1 | f ``` ### Error handling ```sql -- Default behavior (returns FALSE) SELECT JSON_EXISTS( '{"data": [1,2,3]}', 'strict $.data[5]' ); ``` ```text # | json_exists -------------- 1 | f ``` ```sql -- Using ERROR ON ERROR SELECT JSON_EXISTS( '{"data": [1,2,3]}', 'strict $.data[5]' ERROR ON ERROR ); ``` ```text ERROR: jsonpath array subscript is out of bounds (SQLSTATE 22033) ``` ```sql -- Using UNKNOWN ON ERROR SELECT JSON_EXISTS( '{"data": [1,2,3]}', 'strict $.data[5]' UNKNOWN ON ERROR ); ``` ```text # | json_exists -------------- 1 | ``` ## Practical applications ### Data validation ```sql -- Validate required fields before insertion CREATE TABLE user_profiles ( id SERIAL PRIMARY KEY, data JSONB NOT NULL, CONSTRAINT valid_profile CHECK ( JSON_EXISTS(data, '$.email') AND JSON_EXISTS(data, '$.username') ) ); -- This insert will succeed INSERT INTO user_profiles (data) VALUES ( '{"email": "user@example.com", "username": "user123"}'::jsonb ); -- This insert will fail INSERT INTO user_profiles (data) VALUES ( '{"username": "user123"}'::jsonb ); ``` ```text ERROR: new row for relation "user_profiles" violates check constraint "valid_profile" (SQLSTATE 23514) ``` ### Conditional queries ```sql -- Filter records based on JSON content SELECT * FROM user_profiles WHERE JSON_EXISTS( data, '$.preferences.notifications ? (@ == true)' ); ``` ## Best practices 1. 
Error handling:
   - Use appropriate `ON ERROR` clauses based on your requirements
   - Consider `UNKNOWN ON ERROR` for nullable conditions
   - Use `ERROR ON ERROR` when validation is critical

2. Performance optimization:
   - Create _GIN_ indexes on `JSONB` columns for better performance
   - Use strict mode when the path is guaranteed to exist
   - Combine with other `JSON` functions for complex operations

3. Path expressions:
   - Use _lax_ mode (the default) for optional paths
   - Leverage path variables with the `PASSING` clause for dynamic checks

## Learn more

- [PostgreSQL JSON functions documentation](https://www.postgresql.org/docs/current/functions-json.html)
- [SQL/JSON path language](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-SQLJSON-PATH)

---

# Source: https://neon.com/llms/functions-json_extract_path.txt

# Postgres json_extract_path() function

> The document explains the usage of the `json_extract_path()` function in PostgreSQL, detailing how Neon users can extract specific elements from JSON data using a specified path.

## Source

- [Postgres json_extract_path() function HTML](https://neon.com/docs/functions/json_extract_path): The original HTML version of this documentation

You can use the `json_extract_path` function to extract the value at a specified path within a `JSON` document. This approach is more performant than retrieving the entire `JSON` payload and processing it on the application side. It is particularly useful when dealing with nested `JSON` structures.

## Function signature

```sql
json_extract_path(from_json JSON, VARIADIC path_elems TEXT[]) -> JSON
```

## Example usage

To illustrate the `json_extract_path` function in Postgres, let's consider a scenario where we have a table storing information about books. Each book has a `JSON` column containing details such as `title`, `author`, and publication `year`. You can create the `books` table using the SQL statements shown below.

**books**

```sql
CREATE TABLE books (
  id INT,
  info JSON
);

INSERT INTO books (id, info) VALUES
  (1, '{"title": "The Catcher in the Rye", "author": "J.D. Salinger", "year": 1951}'),
  (2, '{"title": "To Kill a Mockingbird", "author": "Harper Lee", "year": 1960}'),
  (3, '{"title": "1984", "author": "George Orwell", "year": 1949}');
```

```text
| id | info                                                                          |
|----|-------------------------------------------------------------------------------|
| 1  | {"title": "The Catcher in the Rye", "author": "J.D. Salinger", "year": 1951} |
| 2  | {"title": "To Kill a Mockingbird", "author": "Harper Lee", "year": 1960}     |
| 3  | {"title": "1984", "author": "George Orwell", "year": 1949}                   |
```

Now, let's use the `json_extract_path` function to extract the `title` and `author` of each book:

```sql
SELECT
  id,
  json_extract_path(info, 'title') as title,
  json_extract_path(info, 'author') as author
FROM books;
```

This query returns the following values:

```text
| id | title                    | author          |
|----|--------------------------|-----------------|
| 1  | "The Catcher in the Rye" | "J.D. Salinger" |
| 2  | "To Kill a Mockingbird"  | "Harper Lee"    |
| 3  | "1984"                   | "George Orwell" |
```

## Advanced examples

Consider a `products` table that stores information about products in an e-commerce system. The table schema and data are outlined below.
**products** ```sql CREATE TABLE products ( id INT, attributes JSON ); INSERT INTO products (id, attributes) VALUES (1, '{"name": "Laptop", "specs": {"brand": "Dell", "RAM": "16GB", "storage": {"type": "SSD", "capacity": "512GB"}}, "tags": ["pc"]}'), (2, '{"name": "Smartphone", "specs": {"brand": "Google", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "256GB"}}, "tags": ["android", "pixel"]}'), (3, '{"name": "Smartphone", "specs": {"brand": "Apple", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "128GB"}}, "tags": ["ios", "iphone"]}'); ``` ```text | id | attributes | |--------|---------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | {"name": "Laptop", "specs": {"brand": "Dell", "RAM": "16GB", "storage": {"type": "SSD", "capacity": "512GB"}}, "tags": ["pc"]} | | 2 | {"name": "Smartphone", "specs": {"brand": "Google", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "256GB"}}, "tags": ["android", "pixel"]} | | 3 | {"name": "Smartphone", "specs": {"brand": "Apple", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "128GB"}}, "tags": ["ios", "iphone"]} | ``` ### Extract from nested JSON objects with `json_extract_path` Let's use `json_extract_path` to retrieve information about the storage type and capacity for each product, demonstrating how to extract values from a nested `JSON` object. ```sql SELECT id, json_extract_path(attributes, 'specs', 'storage', 'type') as storage_type, json_extract_path(attributes, 'specs', 'storage', 'capacity') as storage_capacity FROM products; ``` This query returns the following values: ```text | id | storage_type | storage_capacity | |----|--------------|------------------| | 1 | "SSD" | "512GB" | | 2 | "UFS" | "256GB" | | 3 | "UFS" | "128GB" | ``` ### Extract from array with `json_extract_path` Now, let's use `json_extract_path` to extract information about the associated tags as well, demonstrating how to extract values from a `JSON` array. ```sql SELECT id, json_extract_path(attributes, 'specs', 'storage', 'type') as storage_type, json_extract_path(attributes, 'specs', 'storage', 'capacity') as storage_capacity, json_extract_path(attributes, 'tags', '0') as first_tag, json_extract_path(attributes, 'tags', '1') as second_tag FROM products; ``` This query returns the following values: ```text | id | storage_type | storage_capacity | first_tag | second_tag | |----|--------------|------------------|-----------|------------| | 1 | "SSD" | "512GB" | "pc" | null | | 2 | "UFS" | "256GB" | "android" | "pixel" | | 3 | "UFS" | "128GB" | "ios" | "iphone" | ``` ### Use `json_extract_path` in Joins Let's say you have two tables, `employees` and `departments`, and the `employees` table has a `JSON` column named `details` that contains information about each employee's department. You want to join these tables based on the department information stored in the `JSON` column. The table schemas and data used in this example are shown below. 
**departments**

```sql
CREATE TABLE departments (
  department_id SERIAL PRIMARY KEY,
  department_name VARCHAR(255)
);

INSERT INTO departments (department_name) VALUES
  ('IT'),
  ('HR'),
  ('Marketing');
```

```text
| department_id | department_name |
|---------------|-----------------|
| 1             | IT              |
| 2             | HR              |
| 3             | Marketing       |
```

**employees**

```sql
CREATE TABLE employees (
  employee_id SERIAL PRIMARY KEY,
  employee_name VARCHAR(255),
  details JSON
);

INSERT INTO employees (employee_name, details) VALUES
  ('John Doe', '{"department": "IT"}'),
  ('Jane Smith', '{"department": "HR"}'),
  ('Bob Johnson', '{"department": "Marketing"}');
```

```text
| employee_id | employee_name | details                     |
|-------------|---------------|-----------------------------|
| 1           | John Doe      | {"department": "IT"}        |
| 2           | Jane Smith    | {"department": "HR"}        |
| 3           | Bob Johnson   | {"department": "Marketing"} |
```

You can use `JOIN` with `json_extract_path` to retrieve information:

```sql
SELECT
  employees.employee_name,
  departments.department_name
FROM employees
JOIN departments
  ON TRIM(BOTH '"' FROM json_extract_path(employees.details, 'department')::TEXT) = departments.department_name;
```

This query returns the following values:

```text
| employee_name | department_name |
|---------------|-----------------|
| John Doe      | IT              |
| Jane Smith    | HR              |
| Bob Johnson   | Marketing       |
```

The `json_extract_path` function extracts the value of the `department` key from the `JSON` column in the `employees` table. The `JOIN` is then performed based on matching department names.

## Additional considerations

### Performance and indexing

The `json_extract_path` function performs well when extracting data from `JSON` documents, especially compared to extracting data in application code. It allows performing the extraction directly in the database, avoiding transferring entire `JSON` documents to the application.

However, performance can degrade with highly nested `JSON` structures and very long text strings. In those cases, using the binary `JSONB` data type and the `jsonb_extract_path` function will likely offer better performance.

Indexing `JSON` documents can also significantly improve `json_extract_path` query performance when filtering data based on values extracted from `JSON`.

### Alternative functions

- [json_extract_path_text](https://neon.com/docs/functions/json_extract_path_text) - The regular `json_extract_path` function returns the extracted value as a `JSON` object or array, preserving its `JSON` structure, whereas the alternative `json_extract_path_text` function returns the extracted value as a plain text string, casting any `JSON` objects or arrays to their string representations. Use the regular `json_extract_path` function when you need to apply `JSON`-specific functions or operators to the extracted value, requiring `JSON` data types. The alternative `json_extract_path_text` function is preferable if you need to work directly with the extracted value as a string, for text processing, concatenation, or comparison. See the comparison sketch below.
- `jsonb_extract_path` - The `jsonb_extract_path` function works with the `jsonb` data type, which offers a binary representation of `JSON` data. This alternative function is generally faster than `json_extract_path` for most operations, as it's optimized for the binary `jsonb` format. This difference in performance is often more pronounced with larger `JSON` structures and frequent path extractions.
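To make the contrast concrete, here's a minimal sketch that reuses the `books` table from the first example:

```sql
-- json_extract_path keeps the JSON representation (quoted string);
-- json_extract_path_text returns plain, unquoted text
SELECT
  json_extract_path(info, 'title') AS title_json,      -- "1984"
  json_extract_path_text(info, 'title') AS title_text  -- 1984
FROM books
WHERE id = 3;
```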
## Resources

- [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html)
- [PostgreSQL documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html)

---

# Source: https://neon.com/llms/functions-json_extract_path_text.txt

# Postgres json_extract_path_text() Function

> The document details the usage of the `json_extract_path_text()` function in PostgreSQL within Neon's database environment, explaining how to extract text from a JSON object using a specified path.

## Source

- [Postgres json_extract_path_text() Function HTML](https://neon.com/docs/functions/json_extract_path_text): The original HTML version of this documentation

The `json_extract_path_text` function is designed to simplify extracting text from `JSON` data in Postgres. This function is similar to `json_extract_path` — it also produces the value at the specified path from a `JSON` object but casts it to plain text before returning. This makes it more straightforward for text manipulation and comparison operations.

## Function signature

```sql
json_extract_path_text(from_json json, VARIADIC path_elems text[]) -> TEXT
```

The function accepts a `JSON` object and a variadic list of elements that specify the path to the desired value.

## Example usage

Let's consider a `users` table with a `JSON` column named `profile` containing various user details. Here's how we can create the table and insert some sample data:

```sql
CREATE TABLE users (
  id INT,
  profile JSON
);

INSERT INTO users (id, profile) VALUES
  (1, '{"name": "Alice", "contact": {"email": "alice@example.com", "phone": "1234567890"}, "hobbies": ["reading", "cycling", "hiking"]}'),
  (2, '{"name": "Bob", "contact": {"email": "bob@example.com", "phone": "0987654321"}, "hobbies": ["gaming", "cooking"]}');
```

To extract and view the email addresses of all users, we can run the following query:

```sql
SELECT id, json_extract_path_text(profile, 'contact', 'email') as email
FROM users;
```

This query returns the following:

```text
| id | email             |
|----|-------------------|
| 1  | alice@example.com |
| 2  | bob@example.com   |
```

## Advanced examples

### Use `json_extract_path_text` in Joins

Let's say we have another table, `hobbies`, that includes additional information such as difficulty level and the average cost to practice each hobby. We can create the `hobbies` table with some sample data with the following statements:

```sql
CREATE TABLE hobbies (
  hobby_id SERIAL PRIMARY KEY,
  hobby_name VARCHAR(255),
  difficulty_level VARCHAR(50),
  average_cost VARCHAR(50)
);

INSERT INTO hobbies (hobby_name, difficulty_level, average_cost) VALUES
  ('Reading', 'Easy', 'Low'),
  ('Cycling', 'Moderate', 'Medium'),
  ('Gaming', 'Variable', 'High'),
  ('Cooking', 'Variable', 'Low');
```

The `users` table we created previously has a `JSON` column named `profile` that contains information about each user's preferred hobbies. A fun exercise is to find whether a user has any hobbies that are easy to get started with, so we can recommend that they engage with them more often. To fetch this list, we can run the query below.
```sql SELECT json_extract_path_text(u.profile, 'name') as user_name, h.hobby_name FROM users u JOIN hobbies h ON json_extract_path_text(u.profile, 'hobbies') LIKE '%' || lower(h.hobby_name) || '%' WHERE h.difficulty_level = 'Easy'; ``` We use `json_extract_path_text` to extract the list of hobbies for each user, and then check if the name of an easy hobby is present in the list. This query returns the following: ```text | user_name | hobby_name | |-----------|------------| | Alice | Reading | ``` ### Extracting values from JSON arrays with `json_extract_path_text` `json_extract_path_text` can also be used to extract values from `JSON` arrays. For instance, to extract the first and second hobbies for everyone, we can run the following query: ```sql SELECT json_extract_path_text(profile, 'name') as name, json_extract_path_text(profile, 'hobbies', '0') as first_hobby, json_extract_path_text(profile, 'hobbies', '1') as second_hobby FROM users; ``` This query returns the following: ```text | name | first_hobby | second_hobby | |-------|-------------|--------------| | Alice | reading | cycling | | Bob | gaming | cooking | ``` ## Additional considerations ### Performance and indexing Performance considerations for `json_extract_path_text` are similar to those for `json_extract_path`. It is efficient for extracting data but can be impacted by large `JSON` objects or complex queries. Indexing `JSON` fields can improve performance in some cases. ### Alternative functions - [json_extract_path](https://neon.com/docs/functions/json_extract_path) - This is a similar function that can extract data from a `JSON` object at the specified path. The difference is that it returns a `JSON` object, while `json_extract_path_text` always returns text. The right function to use depends on what you want to use the output data for. - [jsonb_extract_path_text](https://neon.com/docs/functions/jsonb_extract_path_text) - This is a similar function that can extract data from a `JSON` object at the specified path. It is more efficient but works only with data of the type `JSONB`. ## Resources - [PostgreSQL Documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html) - [PostgreSQL Documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html) --- # Source: https://neon.com/llms/functions-json_object.txt # Postgres json_object() function > The document details the usage of the Postgres `json_object()` function within Neon, explaining how to convert sets of key-value pairs into JSON objects. ## Source - [Postgres json_object() function HTML](https://neon.com/docs/functions/json_object): The original HTML version of this documentation The `json_object` function in Postgres is used to create a `JSON` object from a set of key-value pairs. It is particularly useful when you need to generate `JSON` data dynamically from existing table data or input parameters. ## Function signature ```sql json_object(keys TEXT[], values TEXT[]) -> JSON -- or -- json_object(keys_values TEXT[]) -> JSON ``` This function takes two text arrays as input: one for keys and one for values. Both arrays must have the same number of elements, as each key is paired with the corresponding value to construct the `JSON` object. Alternatively, you can pass a single text array containing both keys and values. In this case, alternate elements in the array are treated as keys and values, respectively. 
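As a quick, runnable sketch of both signatures (using literal arrays rather than table data):

```sql
-- Two-array form: keys in the first array, values in the second
SELECT json_object(ARRAY['title', 'author'],
                   ARRAY['1984', 'George Orwell']);
-- {"title" : "1984", "author" : "George Orwell"}

-- Single-array form: alternating elements are treated as key, value, key, value, ...
SELECT json_object(ARRAY['title', '1984', 'author', 'George Orwell']);
-- {"title" : "1984", "author" : "George Orwell"}
```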
## Example usage

Consider a scenario where you run a library and have a table that tracks details for each book. The table with some sample data can be set up as shown:

```sql
-- Test database table for a bookstore inventory
CREATE TABLE book_inventory (
  book_id INT,
  title TEXT,
  author TEXT,
  price NUMERIC,
  genre TEXT
);

-- Inserting some test data into `book_inventory`
INSERT INTO book_inventory VALUES
  (101, 'The Great Gatsby', 'F. Scott Fitzgerald', 18.99, 'Classic'),
  (102, 'Invisible Man', 'Ralph Ellison', 15.99, 'Novel');
```

When querying this dataset, the frontend client might want to present the data in a different way. Say you want the catalog information just as the list of book names, while combining the rest of the fields into a single `metadata` attribute. You can do so as shown here:

```sql
SELECT
  book_id,
  title,
  json_object(
    ARRAY['author', 'genre'],
    ARRAY[author, genre]
  ) AS metadata
FROM book_inventory;
```

This query returns the following result:

```text
| book_id | title            | metadata                                                 |
|---------|------------------|----------------------------------------------------------|
| 101     | The Great Gatsby | {"author" : "F. Scott Fitzgerald", "genre" : "Classic"}  |
| 102     | Invisible Man    | {"author" : "Ralph Ellison", "genre" : "Novel"}          |
```

## Advanced examples

### Creating nested JSON objects with `json_object`

You could use `json_object` to create nested `JSON` objects for representing more complex data. However, since `json_object` only expects text values for each key, we will need to combine it with other `JSON` functions like `json_build_object`. For example:

```sql
SELECT
  json_build_object(
    'title', title,
    'author', json_object(ARRAY['name', 'genre'], ARRAY[author, genre])
  ) AS book_info
FROM book_inventory;
```

This query returns the following result:

```text
| book_info                                                                                        |
|--------------------------------------------------------------------------------------------------|
| {"title" : "The Great Gatsby", "author" : {"name" : "F. Scott Fitzgerald", "genre" : "Classic"}} |
| {"title" : "Invisible Man", "author" : {"name" : "Ralph Ellison", "genre" : "Novel"}}            |
```

## Additional considerations

### Gotchas and footguns

- Ensure that the keys and values arrays have the same number of elements; mismatched arrays will result in an error. If you pass a single key-value array instead, ensure that it has an even number of elements.
- Be aware of data type conversions. Since `json_object` expects text arrays, you may need to explicitly cast non-text data types to text.

### Alternative functions

- [jsonb_object](https://www.postgresql.org/docs/current/functions-json.html) - Same functionality as `json_object`, but returns a `JSONB` object instead of `JSON`.
- [row_to_json](https://www.postgresql.org/docs/current/functions-json.html) - It can be used to create a `JSON` object from a table row (or a row of a composite type) without needing to specify keys and values explicitly. However, it is less flexible than `json_object`, since all fields in the row are included in the `JSON` object.
- [json_build_object](https://neon.com/docs/functions/json_build_object) - Similar to `json_object`, but allows for more flexibility in constructing the `JSON` object, as it can take a variable number of arguments in the form of key-value pairs.
- [json_object_agg](https://www.postgresql.org/docs/current/functions-json.html) - It is used to aggregate the key-value pairs from multiple rows into a single `JSON` object. In contrast, `json_object` outputs a `JSON` object for each row, as the sketch below illustrates.
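To illustrate that last contrast, here's a minimal sketch using the `book_inventory` table from the example above:

```sql
-- json_object: one JSON object per input row
SELECT json_object(ARRAY['title', title]) FROM book_inventory;
-- {"title" : "The Great Gatsby"}
-- {"title" : "Invisible Man"}

-- json_object_agg: a single JSON object aggregated across all rows
SELECT json_object_agg(title, author) FROM book_inventory;
-- {"The Great Gatsby" : "F. Scott Fitzgerald", "Invisible Man" : "Ralph Ellison"}
```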
## Resources

- [PostgreSQL documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html)

---

# Source: https://neon.com/llms/functions-json_populate_record.txt

# Postgres json_populate_record() function

> The document explains the usage of the `json_populate_record()` function in PostgreSQL, detailing how it converts JSON data into a PostgreSQL record, specifically for Neon database users.

## Source

- [Postgres json_populate_record() function HTML](https://neon.com/docs/functions/json_populate_record): The original HTML version of this documentation

The `json_populate_record` function is used to populate a record type with values from a `JSON` object. It is useful for parsing `JSON` data received from external sources, particularly when merging it into an existing record.

## Function signature

```sql
json_populate_record(base_record ANYELEMENT, json JSON)
```

This function takes two arguments: a base record of a row type (which can even be a `NULL` record) and a `JSON` object. It returns the record updated with the `JSON` values.

## Example usage

Consider a database table that tracks employee information. When you receive employee information as `JSON` records, you can use `json_populate_record` to ingest the data into the table. Here we create the `employees` table:

```sql
CREATE TABLE employees (
  id INT,
  name TEXT,
  department TEXT,
  salary NUMERIC
);
```

To illustrate, we start with a `NULL` record and cast the input `JSON` payload to the `employees` record type.

```sql
INSERT INTO employees
SELECT *
FROM json_populate_record(
  NULL::employees,
  '{"id": "123", "name": "John Doe", "department": "Engineering", "salary": "75000"}'
)
RETURNING *;
```

This query returns the following result:

```text
| id  | name     | department  | salary |
|-----|----------|-------------|--------|
| 123 | John Doe | Engineering | 75000  |
```

## Advanced examples

### Handling partial data with `json_populate_record`

When incoming `JSON` objects have missing keys, `json_populate_record` can still produce complete records. Say we receive records for a bunch of employees who are known to be in Sales, but the `department` field is missing from the `JSON` payload. We can supply a default value for that field through the base record, while the other fields are populated from the `JSON` payload, as in this example:

```sql
INSERT INTO employees
SELECT *
FROM json_populate_record(
  (1, 'ABC', 'Sales', 0)::employees,
  '{"id": "124", "name": "Jane Smith", "salary": "68000"}'
)
RETURNING *;
```

This query returns the following:

```text
| id  | name       | department | salary |
|-----|------------|------------|--------|
| 124 | Jane Smith | Sales      | 68000  |
```

### Working with custom types in `json_populate_record`

The base record doesn't need to have the type of a table row and can be a [custom Postgres type](https://www.postgresql.org/docs/current/sql-createtype.html) too.
For example, here we first define a custom type `address` and use `json_populate_record` to cast a `JSON` object to it:

```sql
CREATE TYPE address AS (
  street TEXT,
  city TEXT,
  zip TEXT
);

SELECT *
FROM json_populate_record(
  NULL::address,
  '{"street": "123 Main St", "city": "San Francisco", "zip": "94105"}'
);
```

This query returns the following result:

```text
| street      | city          | zip   |
|-------------|---------------|-------|
| 123 Main St | San Francisco | 94105 |
```

## Additional considerations

### Alternative options

- [json_to_record](https://neon.com/docs/functions/json_to_record) - It can be used similarly, with a couple of differences. `json_populate_record` can be used with a base record of a pre-defined type, whereas `json_to_record` needs the record type defined inline in the `AS` clause. Further, `json_populate_record` can specify default values for missing fields through the base record, whereas `json_to_record` must assign them `NULL` values.
- `json_populate_recordset` - It can be used similarly to parse `JSON`, the difference being that it returns a set of records instead of a single record. For example, if you have an array of `JSON` objects, you can use `json_populate_recordset` to convert each object into a new row.
- [jsonb_populate_record](https://neon.com/docs/functions/jsonb_populate_record) - It has the same functionality as `json_populate_record`, but accepts `JSONB` input instead of `JSON`.

## Resources

- [Postgres documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html)

---

# Source: https://neon.com/llms/functions-json_query.txt

# Postgres JSON_QUERY() Function

> The document details the usage of the Postgres JSON_QUERY() function within Neon, explaining how to extract JSON data from a specified JSON or JSONB column in a PostgreSQL database.

## Source

- [Postgres JSON_QUERY() Function HTML](https://neon.com/docs/functions/json_query): The original HTML version of this documentation

The `JSON_QUERY()` function introduced in PostgreSQL 17 provides a powerful way to extract and transform `JSON` values using `SQL/JSON` path expressions. This function offers fine-grained control over how `JSON` values are extracted and formatted in the results.

Use `JSON_QUERY()` when you need to:

- Extract specific values from complex `JSON` structures
- Handle multiple values in results
- Control `JSON` string formatting
- Handle empty results and errors gracefully

## Function signature

The `JSON_QUERY()` function uses the following syntax:

```sql
JSON_QUERY(
  context_item,                -- Input JSON/JSONB data
  path_expression              -- SQL/JSON path expression
  [ PASSING { value AS varname } [, ...] ]
  [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ]
  [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ] WRAPPER ]
  [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ]
  [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]
  [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON ERROR ]
) → jsonb
```

## Understanding Wrappers and Quotes

### Wrapper Behavior

By default, `JSON_QUERY()` does not wrap results (equivalent to `WITHOUT WRAPPER`). There are three wrapper modes:

1. `WITHOUT WRAPPER` (default):
   - Returns unwrapped values
   - Cannot return multiple values: doing so is an error condition, which raises an error with `ERROR ON ERROR` (the default `NULL ON ERROR` yields `NULL`)

2. `WITH UNCONDITIONAL WRAPPER` (same as `WITH WRAPPER`):
   - Always wraps results in an array
   - Even single values are wrapped

3.
`WITH CONDITIONAL WRAPPER`: - Only wraps results when multiple values are present - Single values remain unwrapped ### Quote Behavior For scalar string results: - By default, values are surrounded by quotes (making them valid `JSON`) - `KEEP QUOTES`: Explicitly keeps quotes (same as default) - `OMIT QUOTES`: Removes quotes from the result - Cannot use `OMIT QUOTES` with any `WITH WRAPPER` option ## Example usage Let's explore these behaviors using a sample dataset: ```sql CREATE TABLE users ( id SERIAL PRIMARY KEY, data JSONB ); INSERT INTO users (data) VALUES ('{ "profile": { "name": "John Doe", "contacts": { "email": ["john@example.com", "john.doe@work.com"], "phone": "+1-555-0123" } } }'); ``` ### Working with single values ```sql -- Default behavior (unwrapped, quoted) SELECT JSON_QUERY( data, '$.profile.contacts.email[0]' ) FROM users; ``` ```text # | json_query ------------------------ 1 | "john@example.com" ``` ```sql -- With unconditional wrapper SELECT JSON_QUERY( data, '$.profile.contacts.email[0]' WITH WRAPPER ) FROM users; ``` ```text # | json_query ------------------------ 1 | ["john@example.com"] ``` ### Working with multiple values ```sql -- Must use wrapper for multiple values SELECT JSON_QUERY( data, '$.profile.contacts.email[*]' WITH WRAPPER ) FROM users; ``` ```text # | json_query ----------------------------------------------------- 1 | ["john@example.com", "john.doe@work.com"] ``` ```sql -- This will error (multiple values without wrapper) SELECT JSON_QUERY( data, '$.profile.contacts.email[*]' ERROR ON ERROR ) FROM users; ``` ```text ERROR: JSON path expression in JSON_QUERY should return single item without wrapper (SQLSTATE 22034) HINT: Use the WITH WRAPPER clause to wrap SQL/JSON items into an array. ``` ### Using conditional wrapper ```sql -- Single value with conditional wrapper SELECT JSON_QUERY( data, '$.profile.contacts.phone' WITH CONDITIONAL WRAPPER ) FROM users; ``` ```text # | json_query ------------------- 1 | "+1-555-0123" ``` ```sql -- Multiple values with conditional wrapper SELECT JSON_QUERY( data, '$.profile.contacts.email[*]' WITH CONDITIONAL WRAPPER ) FROM users; ``` ```text # | json_query ----------------------------------------------------- 1 | ["john@example.com", "john.doe@work.com"] ``` ### Quote handling ```sql -- Default (quoted) SELECT JSON_QUERY( data, '$.profile.contacts.phone' ) FROM users; ``` ```text # | json_query ------------------- 1 | "+1-555-0123" ``` ```sql -- Without quotes (must not use with wrapper) SELECT JSON_QUERY( data, '$.profile.contacts.phone' RETURNING TEXT OMIT quotes ) FROM users; ``` ```text # | json_query ------------- 1 | +1-555-0123 ``` ### Using the PASSING clause ```sql -- Extract array element using a variable SELECT JSON_QUERY( '[1, [2, 3], null]', 'lax $[*][$idx]' PASSING 1 AS idx WITH CONDITIONAL WRAPPER ); ``` ```text # | json_query ------------- 1 | 3 ``` ### Handling empty results ```sql -- Return custom value when path doesn't match SELECT JSON_QUERY( '{"a": 1}', '$.b' DEFAULT '{"status": "not_found"}' ON EMPTY ); ``` ```text # | json_query -------------------------------- 1 | {"status": "not_found"} ``` ```sql -- Return empty array when path doesn't match SELECT JSON_QUERY( '{"a": 1}', '$.b[*]' EMPTY ARRAY ON EMPTY ); ``` ```text # | json_query ------------- 1 | [] ``` ### Error handling examples ```sql -- Handle type conversion errors SELECT JSON_QUERY( '{"value": "not_a_number"}', '$.value' RETURNING numeric NULL ON ERROR ); ``` ```text # | json_query ------------- 1 | ``` ```sql -- Raise error on 
invalid path SELECT JSON_QUERY( '{"a": 1}', 'invalid_path' ERROR ON ERROR ); ``` ```text ERROR: syntax error at end of jsonpath input (SQLSTATE 42601) ``` ## Common use cases ### Data transformation ```sql -- Transform and validate JSON data CREATE TABLE events ( id SERIAL PRIMARY KEY, event_data JSONB ); INSERT INTO events (event_data) VALUES ('{ "type": "user_login", "timestamp": "2024-12-04T10:30:00Z", "details": { "user_id": "U123", "device": "mobile", "location": {"city": "London", "country": "UK"} } }'); -- Extract specific fields with custom formatting SELECT JSON_QUERY(event_data, '$.type' RETURNING TEXT OMIT QUOTES) as event_type, JSON_QUERY(event_data, '$.details.location' WITH WRAPPER) as location FROM events; ``` ```text # | event_type | location ------------------------------------- 1 | user_login | [{"city": "London", "country": "UK"}] ``` ## Performance considerations 1. Use appropriate options: - Use `RETURNING TEXT` with `OMIT QUOTES` when JSON formatting is not required - Choose `CONDITIONAL WRAPPER` over `UNCONDITIONAL` when possible - Consider using `DEFAULT` expressions for better error recovery 2. Optimization tips: - Create indexes on frequently queried `JSON` paths - Use specific path expressions instead of wildcards when possible ## Learn more - [PostgreSQL JSON functions documentation](https://www.postgresql.org/docs/current/functions-json.html) - [SQL/JSON path language](https://www.postgresql.org/docs/current/datatype-json.html#DATATYPE-JSONPATH) --- # Source: https://neon.com/llms/functions-json_scalar.txt # Postgres json_scalar() Function > The document details the `json_scalar()` function in Neon, explaining its usage for converting JSON data into scalar values within PostgreSQL databases. ## Source - [Postgres json_scalar() Function HTML](https://neon.com/docs/functions/json_scalar): The original HTML version of this documentation The `json_scalar()` function introduced in PostgreSQL 17 provides a straightforward way to convert `SQL` scalar values into their `JSON` equivalents. This function is particularly useful when you need to ensure proper type conversion and formatting of individual values for `JSON` output. Use `json_scalar()` when you need to: - Convert `SQL` numbers to `JSON` numbers - Format timestamps as JSON strings - Convert `SQL` booleans to `JSON` booleans - Ensure proper null handling in `JSON` context ## Function signature The `json_scalar()` function uses the following syntax: ```sql json_scalar(expression) → json ``` Parameters: - `expression`: Any `SQL` scalar value to be converted to a `JSON` scalar value ## Example usage Let's explore various ways to use the `json_scalar()` function with different types of input values. ### Numeric values ```sql -- Convert integer SELECT json_scalar(42); ``` ```text # | json_scalar --------------- 1 | 42 ``` ```sql -- Convert floating-point number SELECT json_scalar(123.45); ``` ```text # | json_scalar --------------- 1 | 123.45 ``` ### String values ```sql -- Convert text SELECT json_scalar('Hello, World!'); ``` ```text # | json_scalar -------------------- 1 | "Hello, World!" 
```

### Date and timestamp values

```sql
-- Convert timestamp
SELECT json_scalar(CURRENT_TIMESTAMP);
```

```text
 # | json_scalar
---------------------------------------
 1 | "2024-12-04T06:19:14.458444+00:00"
```

```sql
-- Convert date
SELECT json_scalar(CURRENT_DATE);
```

```text
 # | json_scalar
----------------
 1 | "2024-12-04"
```

### Boolean values

```sql
-- Convert boolean true
SELECT json_scalar(true);
```

```text
 # | json_scalar
--------------
 1 | true
```

### NULL handling

```sql
-- Convert NULL value
SELECT json_scalar(NULL);
```

```text
 # | json_scalar
--------------
 1 |
```

## Common use cases

### Building JSON objects

```sql
-- Create a JSON object with properly formatted values
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT,
  created_at TIMESTAMP WITH TIME ZONE
);

INSERT INTO users (name, created_at) VALUES
  ('Alice', '2024-12-04T14:30:45.000000+00:00'),
  ('Bob', '2024-12-04T15:30:45.000000+00:00');

SELECT json_build_object(
  'id', json_scalar(id),
  'name', json_scalar(name),
  'created_at', json_scalar(created_at)
)
FROM users;
```

```text
 # | json_build_object
-----------------------------------------------------------------------------------
 1 | {"id" : 1, "name" : "Alice", "created_at" : "2024-12-04T14:30:45.000000+00:00"}
 2 | {"id" : 2, "name" : "Bob", "created_at" : "2024-12-04T15:30:45.000000+00:00"}
```

### Data type conversion

```sql
-- Convert mixed data types in a single query
SELECT json_build_array(
  json_scalar(42),
  json_scalar('text'),
  json_scalar(CURRENT_TIMESTAMP),
  json_scalar(NULL)
);
```

```text
 # | json_build_array
----------------------------------------------------------
 1 | [42, "text", "2024-12-04T06:25:29.928376+00:00", null]
```

## Type conversion rules

The function follows these conversion rules:

1. SQL `NULL` → SQL `NULL` (not JSON `null`)
2. Numbers → JSON numbers (preserving exact value)
3. Booleans → JSON booleans
4. All other types → JSON strings with appropriate formatting:
   - Timestamps include timezone when available
   - Text is properly escaped according to JSON standards

## Learn more

- [json_build_object() function documentation](https://neon.com/docs/functions/json_build_object)
- [PostgreSQL JSON functions documentation](https://www.postgresql.org/docs/current/functions-json.html)
- [PostgreSQL data type formatting](https://www.postgresql.org/docs/current/datatype.html)

---

# Source: https://neon.com/llms/functions-json_serialize.txt

# Postgres json_serialize() Function

> The document details the `json_serialize()` function in Neon, explaining its usage for converting PostgreSQL data types into JSON format for efficient data handling and storage.

## Source

- [Postgres json_serialize() Function HTML](https://neon.com/docs/functions/json_serialize): The original HTML version of this documentation

The `json_serialize()` function introduced in PostgreSQL 17 provides a flexible way to convert `JSON` values into text or binary format. This function is particularly useful when you need to control the output format of `JSON` data or prepare it for transmission or storage in specific formats.
Use `json_serialize()` when you need to: - Convert `JSON` values to specific text formats - Transform `JSON` into binary representation - Ensure consistent `JSON` string formatting - Prepare `JSON` data for external systems or storage ## Function signature The `json_serialize()` function uses the following syntax: ```sql json_serialize( expression -- Input JSON expression [ FORMAT JSON [ ENCODING UTF8 ] ] -- Optional input format specification [ RETURNING data_type -- Optional return type specification [ FORMAT JSON [ ENCODING UTF8 ] ] ] -- Optional output format specification ) → text | bytea ``` Parameters: - `expression`: Input `JSON` value or expression to serialize - `FORMAT JSON`: Explicitly specifies `JSON` format for input (optional) - `ENCODING UTF8`: Specifies `UTF8` encoding for input/output (optional) - `RETURNING data_type`: Specifies the desired output type (optional, defaults to text) ## Example usage Let's explore various ways to use the `json_serialize()` function with different inputs and output formats. ### Basic serialization ```sql -- Serialize a simple JSON object to text SELECT json_serialize('{"name": "Alice", "age": 30}'); ``` ```text # | json_serialize -------------------------------- 1 | {"name": "Alice", "age": 30} ``` ```sql -- Serialize a JSON array SELECT json_serialize('[1, 2, 3, "four", true, null]'); ``` ```text # | json_serialize ---------------------------------- 1 | [1, 2, 3, "four", true, null] ``` ### Binary serialization ```sql -- Convert JSON to binary format SELECT json_serialize( '{"id": 1, "data": "test"}' RETURNING bytea ); ``` ```text # | json_serialize -------------------------------------------------------- 1 | \x7b226964223a20312c202264617461223a202274657374227d ``` ### Working with complex structures ```sql -- Serialize nested JSON structures SELECT json_serialize('{ "user": { "name": "Bob", "settings": { "theme": "dark", "notifications": true }, "tags": ["admin", "active"] } }'); ``` ```text # | json_serialize --------------------------------------------------------------------------------------------------------------------- 1 | { "user": { "name": "Bob", "settings": { "theme": "dark", "notifications": true }, "tags": ["admin", "active"] } } ``` ## Comparison with `json()` function While both `json_serialize()` and `json()` work with `JSON` data, they serve different purposes: - `json()` converts text or binary data into `JSON` values - `json_serialize()` converts `JSON` values into text or binary format - `json()` focuses on input validation (e.g., `WITH UNIQUE` keys) - `json_serialize()` focuses on output format control Think of them as complementary functions: ```sql -- json() for input conversion SELECT json('{"name": "Alice"}'); -- Text to JSON -- json_serialize() for output conversion SELECT json_serialize('{"name": "Alice"}'::json); -- JSON to Text ``` ## Common use cases ### Data export preparation ```sql -- Create a table with JSON data CREATE TABLE events ( id SERIAL PRIMARY KEY, event_data json ); -- Insert sample data INSERT INTO events (event_data) VALUES ('{"type": "login", "user_id": 123}'), ('{"type": "purchase", "amount": 99.99}'); -- Export data in specific format SELECT id, json_serialize(event_data RETURNING text) FROM events; ``` ## Error handling The function handles various error conditions: ```sql -- Invalid JSON input (raises error) SELECT json_serialize('{"invalid": }'); ``` ```text ERROR: invalid input syntax for type json (SQLSTATE 22P02) ``` ## Learn more - [json() function 
documentation](https://neon.com/docs/functions/json) - [PostgreSQL JSON functions documentation](https://www.postgresql.org/docs/current/functions-json.html) - [PostgreSQL data type formatting functions](https://www.postgresql.org/docs/current/functions-formatting.html) --- # Source: https://neon.com/llms/functions-json_table.txt # Postgres JSON_TABLE() function > The document explains the usage of the Postgres `JSON_TABLE()` function within Neon, detailing its syntax and application for transforming JSON data into a relational format. ## Source - [Postgres JSON_TABLE() function HTML](https://neon.com/docs/functions/json_table): The original HTML version of this documentation The `JSON_TABLE` function transforms JSON data into relational views, allowing you to query JSON data using standard SQL operations. Added in PostgreSQL 17, this feature helps you work with complex JSON data by presenting it as a virtual table which you can access with regular SQL queries. Use `JSON_TABLE` when you need to: - Extract specific fields from complex JSON structures - Convert JSON arrays into rows - Join JSON data with regular tables - Apply SQL operations like filtering and aggregation to JSON data ## Function signature `JSON_TABLE` uses the following syntax: ```sql JSON_TABLE( json_doc, -- JSON/JSONB input path_expression -- SQL/JSON path expression COLUMNS ( column_definition [, ...] ) ) AS alias ``` Parameters: - `json_doc`: JSON or JSONB data to process - `path_expression`: SQL/JSON path expression that identifies rows to generate - `COLUMNS`: Defines the schema of the virtual table - `column_definition`: Specifies how to extract values for each column - `alias`: Name for the resulting virtual table ## Example usage Let's explore `JSON_TABLE` using a library management system example. We'll store book information including reviews, borrowing history, and metadata in JSON format. 
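To see the basic shape of a call before building out that example, here is a minimal, self-contained sketch (the inline array and column name are illustrative only):

```sql
-- Each element matched by '$[*]' becomes one row; COLUMNS maps
-- fields of the current element to typed SQL columns.
SELECT jt.*
FROM JSON_TABLE(
    '[{"a": 1}, {"a": 2}]'::jsonb,
    '$[*]'
    COLUMNS (a INT PATH '$.a')
) AS jt;
```

This returns two rows, with `a` equal to `1` and `2`.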
### Create a test database ```sql -- Test database table for a library management system CREATE TABLE library_books ( book_id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, data JSONB NOT NULL ); -- Insert sample data INSERT INTO library_books (title, data) VALUES ( 'The Art of Programming', '{ "isbn": "978-0123456789", "author": { "name": "Jane Smith", "email": "jane.smith@example.com" }, "publication": { "year": 2023, "publisher": "Tech Books Inc" }, "metadata": { "genres": ["Programming", "Computer Science"], "tags": ["algorithms", "python", "best practices"], "edition": "2nd" }, "reviews": [ { "user": "john_doe", "rating": 5, "comment": "Excellent book for beginners!", "date": "2024-01-15" }, { "user": "mary_jane", "rating": 4, "comment": "Good examples, could use more exercises", "date": "2024-02-20" } ], "borrowing_history": [ { "user_id": "U123", "checkout_date": "2024-01-01", "return_date": "2024-01-15", "condition": "good" }, { "user_id": "U456", "checkout_date": "2024-02-01", "return_date": "2024-02-15", "condition": "fair" } ] }'::jsonb ), ( 'Database Design Fundamentals', '{ "isbn": "978-0987654321", "author": { "name": "Robert Johnson", "email": "robert.j@example.com" }, "publication": { "year": 2024, "publisher": "Database Press" }, "metadata": { "genres": ["Database", "Computer Science"], "tags": ["SQL", "design patterns", "normalization"], "edition": "1st" }, "reviews": [ { "user": "alice_wonder", "rating": 5, "comment": "Comprehensive coverage of database concepts", "date": "2024-03-01" } ], "borrowing_history": [ { "user_id": "U789", "checkout_date": "2024-03-01", "return_date": null, "condition": "excellent" } ] }'::jsonb ); ``` ### Query examples #### Extract basic book information This query extracts core book details from the JSON structure into a relational format. ```sql SELECT b.book_id, b.title, jt.* FROM library_books b, JSON_TABLE( data, '$' COLUMNS ( isbn text PATH '$.isbn', author_name text PATH '$.author.name', publisher text PATH '$.publication.publisher', pub_year int PATH '$.publication.year' ) ) AS jt; ``` Result: | book_id | title | isbn | author_name | publisher | pub_year | | ------- | ---------------------------- | -------------- | -------------- | -------------- | -------- | | 1 | The Art of Programming | 978-0123456789 | Jane Smith | Tech Books Inc | 2023 | | 2 | Database Design Fundamentals | 978-0987654321 | Robert Johnson | Database Press | 2024 | #### Analyze book reviews This query flattens the reviews array into rows, making it easy to analyze reader feedback. ```sql SELECT b.title, jt.* FROM library_books b, JSON_TABLE( data, '$.reviews[*]' COLUMNS ( reviewer text PATH '$.user', rating int PATH '$.rating', review_date date PATH '$.date', comment text PATH '$.comment' ) ) AS jt ORDER BY review_date DESC; ``` Result: | title | reviewer | rating | review_date | comment | | ---------------------------- | ------------ | ------ | ----------- | ------------------------------------------- | | Database Design Fundamentals | alice_wonder | 5 | 2024-03-01 | Comprehensive coverage of database concepts | | The Art of Programming | mary_jane | 4 | 2024-02-20 | Good examples, could use more exercises | | The Art of Programming | john_doe | 5 | 2024-01-15 | Excellent book for beginners! | #### Track borrowing history This query helps track book loans and current borrowing status. 
```sql
WITH book_loans AS (
    SELECT
        b.title,
        jt.*
    FROM library_books b,
    JSON_TABLE(
        data,
        '$.borrowing_history[*]'
        COLUMNS (
            user_id text PATH '$.user_id',
            checkout_date date PATH '$.checkout_date',
            return_date date PATH '$.return_date',
            condition text PATH '$.condition'
        )
    ) AS jt
)
SELECT
    title,
    user_id,
    checkout_date,
    COALESCE(return_date::text, 'Still borrowed') as return_status,
    condition
FROM book_loans
ORDER BY checkout_date DESC;
```

Result:

| title                        | user_id | checkout_date | return_status  | condition |
| ---------------------------- | ------- | ------------- | -------------- | --------- |
| Database Design Fundamentals | U789    | 2024-03-01    | Still borrowed | excellent |
| The Art of Programming       | U456    | 2024-02-01    | 2024-02-15     | fair      |
| The Art of Programming       | U123    | 2024-01-01    | 2024-01-15     | good      |

### Advanced usage

#### Aggregate review data

Use this query to calculate review statistics for each book.

```sql
WITH book_ratings AS (
    SELECT
        b.title,
        jt.rating
    FROM library_books b,
    JSON_TABLE(
        data,
        '$.reviews[*]'
        COLUMNS (
            rating int PATH '$.rating'
        )
    ) AS jt
)
SELECT
    title,
    COUNT(*) as num_reviews,
    ROUND(AVG(rating), 2) as avg_rating,
    MIN(rating) as min_rating,
    MAX(rating) as max_rating
FROM book_ratings
GROUP BY title;
```

Result:

| title                        | num_reviews | avg_rating | min_rating | max_rating |
| ---------------------------- | ----------- | ---------- | ---------- | ---------- |
| Database Design Fundamentals | 1           | 5.00       | 5          | 5          |
| The Art of Programming       | 2           | 4.50       | 4          | 5          |

#### Process arrays and metadata

This query extracts array fields and metadata into queryable columns.

```sql
SELECT
    b.title,
    jt.*
FROM library_books b,
JSON_TABLE(
    data,
    '$'
    COLUMNS (
        genres json FORMAT JSON PATH '$.metadata.genres',
        tags json FORMAT JSON PATH '$.metadata.tags',
        edition text PATH '$.metadata.edition'
    )
) AS jt;
```

Result:

| title                        | genres                              | tags                                        | edition |
| ---------------------------- | ----------------------------------- | ------------------------------------------- | ------- |
| The Art of Programming       | ["Programming", "Computer Science"] | ["algorithms", "python", "best practices"]  | 2nd     |
| Database Design Fundamentals | ["Database", "Computer Science"]    | ["SQL", "design patterns", "normalization"] | 1st     |

## Error handling

`JSON_TABLE` returns NULL for missing values by default. You can modify this behavior with error handling clauses:

```sql
SELECT
    title,
    jt.*
FROM library_books,
JSON_TABLE(
    data,
    '$'
    COLUMNS (
        author_name text PATH '$.author.name',
        metadata TEXT PATH '$.metadata' DEFAULT '{}' ON ERROR,
        edition text PATH '$.metadata.edition' DEFAULT 'Unknown' ON EMPTY DEFAULT 'Unknown' ON ERROR
    )
) AS jt;
```

This example shows how to handle errors when extracting JSON data. The `metadata` column triggers an error because its path points to a JSON object, which cannot be returned as a scalar `TEXT` value, so the `DEFAULT '{}' ON ERROR` clause supplies the fallback shown below.

| title                        | author_name    | metadata | edition |
| ---------------------------- | -------------- | -------- | ------- |
| The Art of Programming       | Jane Smith     | {}       | 2nd     |
| Database Design Fundamentals | Robert Johnson | {}       | 1st     |

## Performance tips

1. Create GIN indexes on JSONB columns:

```sql
CREATE INDEX idx_library_books_data ON library_books USING GIN (data);
```
2. Consider these optimizations:
   - Place filters on regular columns before JSON operations
   - Use JSON operators (`->`, `->>`, `@>`) when possible
   - Materialize frequently accessed JSON paths into regular columns
   - Break large JSON documents into smaller pieces to manage memory usage

## Learn more

- [PostgreSQL JSON_TABLE documentation](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-SQLJSON-TABLE)
- [PostgreSQL JSON functions](https://www.postgresql.org/docs/current/functions-json.html)

---

# Source: https://neon.com/llms/functions-json_to_record.txt

# Postgres json_to_record() function

> The document explains the usage of the Postgres `json_to_record()` function within Neon, detailing how it converts JSON data into a set of columns, facilitating structured data retrieval from JSON objects.

## Source

- [Postgres json_to_record() function HTML](https://neon.com/docs/functions/json_to_record): The original HTML version of this documentation

You can use the `json_to_record` function to convert a top-level `JSON` object into a row, with the type specified by the `AS` clause. This function is useful when you need to parse `JSON` data received from external sources, such as APIs or file uploads, and store it in a structured format. By using `json_to_record`, you can easily extract values from `JSON` and map them to the corresponding columns in your database table.

## Function signature

```sql
json_to_record(json JSON) AS (column_name column_type [, ...])
```

The `AS` clause provides a column definition list, where you specify the name and data type of each column in the resulting record.

## Example usage

Consider a scenario in which you have `JSON` data representing employee information, and you want to ingest it for easier processing later. The `JSON` data looks like this:

```json
{
  "id": "123",
  "name": "John Doe",
  "department": "Engineering",
  "salary": "75000"
}
```

The table you want to insert data into is defined as follows:

```sql
CREATE TABLE employees (
    id INT,
    name TEXT,
    department TEXT,
    salary NUMERIC
);
```

Using `json_to_record`, you can insert the input data into the `employees` table as shown:

```sql
INSERT INTO employees
SELECT * FROM json_to_record('{"id": "123", "name": "John Doe", "department": "Engineering", "salary": "75000"}')
AS x(id INT, name TEXT, department TEXT, salary NUMERIC);
```

To verify the data was inserted, you can run the following query:

```sql
SELECT * FROM employees;
```

This query returns the following result:

```text
| id  | name     | department  | salary |
|-----|----------|-------------|--------|
| 123 | John Doe | Engineering | 75000  |
```

## Advanced examples

This section provides advanced `json_to_record` examples.

### Handling partial data with `json_to_record`

For data points where the `JSON` objects have missing keys, `json_to_record` can still cast them into records, producing `NULL` values for the unmatched columns. For example:

```sql
INSERT INTO employees
SELECT * FROM json_to_record('{
    "id": "124",
    "name": "Jane Smith"
}') AS x(id INT, name TEXT, department TEXT, salary NUMERIC)
RETURNING *;
```

This query returns the following result:

```text
| id  | name       | department | salary |
|-----|------------|------------|--------|
| 124 | Jane Smith |            |        |
```

### Handling nested data with `json_to_record`

`json_to_record` can also be used to handle nested `JSON` input data (i.e., keys with values that are `JSON` objects themselves).
You need to first define a [custom Postgres type](https://www.postgresql.org/docs/current/sql-createtype.html). The newly created type can then be used in the column definition list along with the other columns. In the following example, we handle the `address` field by creating an `ADDRESS_TYPE` type first.

```sql
CREATE TYPE ADDRESS_TYPE AS (
    street TEXT,
    city TEXT
);

SELECT * FROM json_to_record('{
    "id": "125",
    "name": "Emily Clark",
    "department": "Marketing",
    "salary": "68000",
    "address": {"street": "123 Elm St", "city": "Springfield"}
}') AS x(id INT, name TEXT, department TEXT, salary NUMERIC, address ADDRESS_TYPE);
```

This query returns the following result:

```text
| id  | name        | department | salary | address                    |
|-----|-------------|------------|--------|----------------------------|
| 125 | Emily Clark | Marketing  | 68000  | ("123 Elm St",Springfield) |
```

### Alternative functions

- [json_populate_record](https://neon.com/docs/functions/json_populate_record): This function can also be used to create records using values from a `JSON` object. The difference is that `json_populate_record` requires the record type to be defined beforehand, while `json_to_record` needs the type definition inline.
- [json_to_recordset](https://www.postgresql.org/docs/current/functions-json.html): This function can be used similarly to parse `JSON`, the difference being that it returns a set of records instead of a single record. For example, if you have an array of `JSON` objects, you can use `json_to_recordset` to convert each object into a new row.
- [jsonb_to_record](https://www.postgresql.org/docs/current/functions-json.html): This function provides the same functionality as `json_to_record`, but accepts `JSONB` input instead of `JSON`. In cases where the input payload type isn't exactly specified, either of the two functions can be used. For example, take this `json_to_record` query:

```sql
SELECT * FROM json_to_record('{"id": "123", "name": "John Doe", "department": "Engineering"}')
AS x(id INT, name TEXT, department TEXT);
```

It works just as well as this `JSONB` variant (below), since Postgres interprets the string literal as `JSON` or `JSONB` depending on the context.

```sql
SELECT * FROM jsonb_to_record('{"id": "123", "name": "John Doe", "department": "Engineering"}')
AS x(id INT, name TEXT, department TEXT);
```

## Resources

- [PostgreSQL documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html)

---

# Source: https://neon.com/llms/functions-json_value.txt

# Postgres JSON_VALUE() Function

> The document details the usage of the Postgres JSON_VALUE() function in Neon, explaining its syntax and application for extracting scalar values from JSON data within the database.

## Source

- [Postgres JSON_VALUE() Function HTML](https://neon.com/docs/functions/json_value): The original HTML version of this documentation

The `JSON_VALUE()` function introduced in PostgreSQL 17 provides a specialized way to extract single scalar values from `JSON` data with type conversion capabilities. This function is particularly useful when you need to extract and potentially convert individual values from `JSON` structures while ensuring type safety and proper error handling.
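For a quick first look (the literal and path here are arbitrary), a single call can extract a field, convert its type, and fall back to a default value if anything goes wrong:

```sql
-- Extract a scalar, cast it to numeric, and guard against
-- malformed input with DEFAULT ... ON ERROR.
SELECT JSON_VALUE(
    '{"price": "19.99"}',
    '$.price'
    RETURNING numeric
    DEFAULT 0 ON ERROR
);
```

This returns `19.99` as a `numeric` value.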
Use `JSON_VALUE()` when you need to:

- Extract single scalar values from `JSON`
- Convert `JSON` values to specific PostgreSQL data types
- Ensure strict type safety when working with `JSON` data
- Handle missing or invalid `JSON` values gracefully

## Function signature

The `JSON_VALUE()` function uses the following syntax:

```sql
JSON_VALUE(
    context_item,       -- JSON input
    path_expression     -- SQL/JSON path expression
    [ PASSING { value AS varname } [, ...] ]
    [ RETURNING data_type ]  -- Optional type conversion
    [ { ERROR | NULL | DEFAULT expression } ON EMPTY ]
    [ { ERROR | NULL | DEFAULT expression } ON ERROR ]
) → text
```

Parameters:

- `context_item`: `JSON/JSONB` input to process
- `path_expression`: `SQL/JSON` path expression that identifies the value to extract
- `PASSING`: Optional clause to pass variables into the path expression
- `RETURNING`: Specifies the desired output data type (defaults to text)
- `ON EMPTY`: Handles cases where no value is found
- `ON ERROR`: Handles extraction or conversion errors

## Example usage

Let's explore various ways to use the `JSON_VALUE()` function with different scenarios and options.

### Basic value extraction

```sql
-- Extract a simple string value
SELECT JSON_VALUE('{"name": "Alice"}', '$.name');
```

```text
# | json_value
--------------
1 | Alice
```

```sql
-- Extract a numeric value
SELECT JSON_VALUE('{"age": 30}', '$.age');
```

```text
# | json_value
-------------
1 | 30
```

### Type conversion with RETURNING

```sql
-- Convert string to float
SELECT JSON_VALUE(
    '"123.45"',
    '$' RETURNING float
);
```

```text
# | json_value
-------------
1 | 123.45
```

```sql
-- Convert string to date
SELECT JSON_VALUE(
    '"2024-12-04"',
    '$' RETURNING date
);
```

```text
# | json_value
-------------
1 | 2024-12-04
```

### Using variables with PASSING

```sql
-- Extract array element using variable
SELECT JSON_VALUE(
    '[1, 2, 3, 4, 5]',
    'strict $[$index]'
    PASSING 2 AS index
);
```

```text
# | json_value
-------------
1 | 3
```

### Error handling

```sql
-- Handle missing values with DEFAULT
SELECT JSON_VALUE(
    '{"data": null}',
    '$.missing_field'
    DEFAULT 'Not Found' ON EMPTY
);
```

```text
# | json_value
---------------
1 | Not Found
```

```sql
-- Handle conversion errors
SELECT JSON_VALUE(
    '{"value": "not a number"}',
    '$.value'
    RETURNING numeric
    DEFAULT 0 ON ERROR
);
```

```text
# | json_value
-------------
1 | 0
```

### Working with nested structures

```sql
-- Extract from nested object
SELECT JSON_VALUE(
    '{
        "user": {
            "contact": {
                "email": "alice@example.com"
            }
        }
    }',
    '$.user.contact.email'
);
```

```text
# | json_value
----------------------
1 | alice@example.com
```

## Common use cases

### Data validation

```sql
-- Validate email format
CREATE TABLE user_emails (
    id SERIAL PRIMARY KEY,
    user_data jsonb,
    CONSTRAINT valid_email CHECK (
        JSON_VALUE(user_data, '$.email' RETURNING text) ~ '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'
    )
);

-- This insert will succeed
INSERT INTO user_emails (user_data) VALUES (
    '{"name": "John Doe", "email": "john.doe@example.com"}'
);

-- This insert will fail
INSERT INTO user_emails (user_data) VALUES (
    '{"name": "Alice", "email": "invalid-email"}'
);
```

## Error handling

The function provides several ways to handle errors:

1. Using `ON EMPTY`:
   - `ERROR`: Raises an error
   - `NULL`: Returns `NULL` (default)
   - `DEFAULT expression`: Returns specified value
2. Using `ON ERROR`:
   - `ERROR`: Raises an error
   - `NULL`: Returns `NULL` (default)
   - `DEFAULT expression`: Returns specified value

## JSON_VALUE vs JSON_QUERY

The `JSON_VALUE()` function is designed for extracting scalar values from `JSON` data, while `JSON_QUERY()` is used for extracting `JSON` structures (objects, arrays, or scalar values). Here's a comparison of the two functions:

### Purpose and return types

`JSON_VALUE()`:

- Designed specifically for extracting scalar values (numbers, strings, booleans)
- Always returns a single scalar value as text (or the type specified via `RETURNING`)
- Removes quotes from string values
- Treats objects and arrays as errors, handled according to the `ON ERROR` clause (`NULL` by default)

`JSON_QUERY()`:

- Designed for extracting `JSON` structures (objects, arrays, or scalar values)
- Returns valid `JSON/JSONB` output
- Preserves quotes on string values by default
- Can handle multiple values using wrapper options

### Example comparisons

```sql
-- Working with scalar string values
SELECT
    JSON_VALUE('{"name": "Alice"}', '$.name') as value_result,
    JSON_QUERY('{"name": "Alice"}', '$.name') as query_result;
```

```text
# | value_result | query_result
--------------------------------
1 | Alice        | "Alice"
```

```sql
-- Working with arrays (an array is an error for JSON_VALUE;
-- NULL ON ERROR makes the default behavior explicit)
SELECT
    JSON_VALUE('{"tags": ["sql", "json"]}', '$.tags' NULL ON ERROR) as value_result,
    JSON_QUERY('{"tags": ["sql", "json"]}', '$.tags') as query_result;
```

```text
# | value_result | query_result
---------------------------------------
1 |              | ["sql", "json"]
```

## Additional considerations

1. Type safety:
   - Always use `RETURNING` when specific data types are expected
   - Implement appropriate error handling for type conversions

2. Performance considerations:
   - Use indexes on frequently queried `JSON` paths

## Learn more

- [PostgreSQL JSON functions documentation](https://www.postgresql.org/docs/current/functions-json.html)
- [SQL/JSON path language](https://www.postgresql.org/docs/current/datatype-json.html#DATATYPE-JSONPATH)

---

# Source: https://neon.com/llms/functions-jsonb_array_elements.txt

# Postgres jsonb_array_elements() function

> The document explains the usage of the `jsonb_array_elements()` function in PostgreSQL, detailing how it can be used within Neon to expand JSONB arrays into a set of JSONB values.

## Source

- [Postgres jsonb_array_elements() function HTML](https://neon.com/docs/functions/jsonb_array_elements): The original HTML version of this documentation

You can use the `jsonb_array_elements` function to expand a `JSONB` array into a set of rows, each containing one element of the array. It is a simpler option compared to complex looping logic. It is also more efficient than performing the same operation on the application side, since it reduces data transfer and processing overhead.

## Function signature

```sql
jsonb_array_elements(json JSONB) -> SETOF JSONB
```

## `jsonb_array_elements` example

Suppose you have a table with information about developers:

**developers**

```sql
CREATE TABLE developers (
    id INT PRIMARY KEY,
    name TEXT,
    skills JSONB
);

INSERT INTO developers (id, name, skills) VALUES
    (1, 'Alice', '["Java", "Python", "SQL"]'),
    (2, 'Bob', '["C++", "JavaScript"]'),
    (3, 'Charlie', '["HTML", "CSS", "React"]');
```

```text
| id | name    | skills                    |
|----|---------|---------------------------|
| 1  | Alice   | ["Java", "Python", "SQL"] |
| 2  | Bob     | ["C++", "JavaScript"]     |
| 3  | Charlie | ["HTML", "CSS", "React"]  |
```

Now, let's say you want to extract each individual skill from the `skills` array.
You can use `jsonb_array_elements` for that:

```sql
SELECT id, name, skill
FROM developers, jsonb_array_elements(skills) AS skill;
```

This query returns the following values:

```text
| id | name    | skill        |
|----|---------|--------------|
| 1  | Alice   | "Java"       |
| 1  | Alice   | "Python"     |
| 1  | Alice   | "SQL"        |
| 2  | Bob     | "C++"        |
| 2  | Bob     | "JavaScript" |
| 3  | Charlie | "HTML"       |
| 3  | Charlie | "CSS"        |
| 3  | Charlie | "React"      |
```

## Advanced examples

This section shows advanced `jsonb_array_elements` examples.

### Filtering with `jsonb_array_elements`

Suppose a `products` table stores each product's details in a `JSONB` column named `details`, containing `sizes` and `colors` arrays. You can use `jsonb_array_elements` to expand the `colors` array and then filter the products by a specific color (the same approach works for sizes):

```sql
SELECT *
FROM products
WHERE 'Blue' IN (
    SELECT REPLACE(jsonb_array_elements(details->'colors')::text, '"', '')
);
```

This query returns the following values:

```text
| id | name    | details                                                              |
|----|---------|----------------------------------------------------------------------|
| 1  | T-Shirt | {"sizes": ["S", "M", "L", "XL"], "colors": ["Red", "Blue", "Green"]} |
| 4  | Jeans   | {"sizes": ["28", "30", "32", "34"], "colors": ["Blue", "Black"]}     |
```

### Handling `NULL` in `jsonb_array_elements`

This example inserts another product (`Socks`) with one of the `sizes` values set to `null`:

```sql
INSERT INTO products (id, name, details)
VALUES (6, 'Socks', '{"sizes": ["S", null, "L", "XL"], "colors": ["White", "Black", "Gray"]}');
```

**products**

```text
| id | name  | details                                                                  |
|----|-------|--------------------------------------------------------------------------|
| 6  | Socks | {"sizes": ["S", null, "L", "XL"], "colors": ["White", "Black", "Gray"]} |
```

Querying for `Socks` shows how null values in an array are handled:

```sql
SELECT id, name, size
FROM products AS p, jsonb_array_elements(p.details -> 'sizes') AS size
WHERE name = 'Socks';
```

This query returns the following values:

```text
| id | name  | size |
|----|-------|------|
| 6  | Socks | "S"  |
| 6  | Socks | null |
| 6  | Socks | "L"  |
| 6  | Socks | "XL" |
```

### Ordering `jsonb_array_elements` output using `WITH ORDINALITY`

Let's consider a scenario where you have a table named `workflow` with a `JSONB` column `steps` representing sequential steps in a workflow:

**workflow**

```sql
CREATE TABLE workflow (
    id SERIAL PRIMARY KEY,
    workflow_name TEXT,
    steps JSONB
);

INSERT INTO workflow (workflow_name, steps) VALUES
    ('Employee Onboarding', '{"tasks": ["Submit Resume", "Interview", "Background Check", "Offer", "Orientation"]}'),
    ('Project Development', '{"tasks": ["Requirement Analysis", "Design", "Implementation", "Testing", "Deployment"]}'),
    ('Order Processing', '{"tasks": ["Order Received", "Payment Verification", "Packing", "Shipment", "Delivery"]}');
```

```text
| id | workflow_name       | steps                                                                                    |
|----|---------------------|------------------------------------------------------------------------------------------|
| 1  | Employee Onboarding | {"tasks": ["Submit Resume", "Interview", "Background Check", "Offer", "Orientation"]}    |
| 2  | Project Development | {"tasks": ["Requirement Analysis", "Design", "Implementation", "Testing", "Deployment"]} |
| 3  | Order Processing    | {"tasks": ["Order Received", "Payment Verification", "Packing", "Shipment", "Delivery"]} |
```

Each workflow consists of a series of tasks, and you want to extract and display the tasks along with their order in the workflow.
```sql SELECT workflow_name, task.value AS task_name, task.ordinality AS task_order FROM workflow, jsonb_array_elements(steps->'tasks') WITH ORDINALITY AS task; ``` This query returns the following values: ``` | workflow_name | task_name | task_order | |---------------------|------------------------|------------| | Employee Onboarding | "Submit Resume" | 1 | | Employee Onboarding | "Interview" | 2 | | Employee Onboarding | "Background Check" | 3 | | Employee Onboarding | "Offer" | 4 | | Employee Onboarding | "Orientation" | 5 | | Project Development | "Requirement Analysis" | 1 | | Project Development | "Design" | 2 | | Project Development | "Implementation" | 3 | | Project Development | "Testing" | 4 | | Project Development | "Deployment" | 5 | | Order Processing | "Order Received" | 1 | | Order Processing | "Payment Verification" | 2 | | Order Processing | "Packing" | 3 | | Order Processing | "Shipment" | 4 | | Order Processing | "Delivery" | 5 | ``` ### Nested arrays in `jsonb_array_elements` You can also handle nested arrays with `jsonb_array_elements`. Consider a scenario where each product in an `electronics_products` table has multiple variants, and each variant has an array of sizes and an array of colors. **electronics_products** ```sql CREATE TABLE electronics_products ( id INTEGER PRIMARY KEY, name TEXT, details JSONB ); INSERT INTO electronics_products (id, name, details) VALUES (1, 'Laptop', '{"variants": [{"model": "A", "sizes": ["13 inch", "15 inch"], "colors": ["Silver", "Black"]}, {"model": "B", "sizes": ["15 inch", "17 inch"], "colors": ["Gray", "White"]}]}'), (2, 'Smartphone', '{"variants": [{"model": "X", "sizes": ["5.5 inch", "6 inch"], "colors": ["Black", "Gold"]}, {"model": "Y", "sizes": ["6.2 inch", "6.7 inch"], "colors": ["Blue", "Red"]}]}'); ``` ```text | id | name | details |----|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | 1 | Laptop | {"variants": [{"model": "A", "sizes": ["13 inch", "15 inch"], "colors": ["Silver", "Black"]}, {"model": "B", "sizes": ["15 inch", "17 inch"], "colors": ["Gray", "White"]}]} | 2 | Smartphone | {"variants": [{"model": "X", "sizes": ["5.5 inch", "6 inch"], "colors": ["Black", "Gold"]}, {"model": "Y", "sizes": ["6.2 inch", "6.7 inch"], "colors": ["Blue", "Red"]}]} ``` To handle the nested arrays and extract information about each variant, you can run this query using the `jsonb_array_elements` function: ```sql SELECT id, name, variant->>'model' AS model, size, color FROM electronics_products, jsonb_array_elements(details->'variants') AS variant, jsonb_array_elements_text(variant->'sizes') AS t1(size), jsonb_array_elements_text(variant->'colors') AS t2(color); ``` This query returns the following values: ```text | id | name | model | size | color | |----|------------|-------|----------|--------| | 1 | Laptop | A | 13 inch | Silver | | 1 | Laptop | A | 13 inch | Black | | 1 | Laptop | A | 15 inch | Silver | | 1 | Laptop | A | 15 inch | Black | | 1 | Laptop | B | 15 inch | Gray | | 1 | Laptop | B | 15 inch | White | | 1 | Laptop | B | 17 inch | Gray | | 1 | Laptop | B | 17 inch | White | | 2 | Smartphone | X | 5.5 inch | Black | | 2 | Smartphone | X | 5.5 inch | Gold | | 2 | Smartphone | X | 6 inch | Black | | 2 | Smartphone | X | 6 inch | Gold | | 2 | Smartphone | Y | 6.2 inch | Blue | | 2 | Smartphone | Y | 6.2 inch | Red | | 2 | Smartphone | Y | 6.7 inch | Blue | | 2 | Smartphone | Y | 6.7 
inch | Red |
```

### `jsonb_array_elements` with joins

Let's assume you want to retrieve a list of users along with their roles in each organization. The data is stored in an `organizations` table and a `users` table.

**organizations**

```text
| id | members                                                      |
|----|--------------------------------------------------------------|
| 1  | [{"id": 23, "role": "admin"}, {"id": 24, "role": "default"}] |
| 2  | [{"id": 23, "role": "user"}]                                 |
| 3  | [{"id": 24, "role": "admin"}, {"id": 25, "role": "default"}] |
| 4  | [{"id": 25, "role": "user"}]                                 |
```

**users**

```text
| id | name  | email           |
|----|-------|-----------------|
| 23 | Max   | max@gmail.com   |
| 24 | Joe   | joe@gmail.com   |
| 25 | Alice | alice@gmail.com |
```

```sql
CREATE TABLE organizations (
    id SERIAL PRIMARY KEY,
    members JSONB
);

CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT,
    email TEXT
);

INSERT INTO organizations (members) VALUES
    ('[{ "id": 23, "role": "admin" }, { "id": 24, "role": "default" }]'),
    ('[{ "id": 23, "role": "user" }]'),
    ('[{ "id": 24, "role": "admin" }, { "id": 25, "role": "default" }]'),
    ('[{ "id": 25, "role": "user" }]');

INSERT INTO users (id, name, email) VALUES
    (23, 'Max', 'max@gmail.com'),
    (24, 'Joe', 'joe@gmail.com'),
    (25, 'Alice', 'alice@gmail.com');
```

You can use the `jsonb_array_elements` function to extract the `members` from the `JSONB` array in the `organizations` table and then join with the `users` table.

```sql
SELECT
    o.id AS organization_id,
    u.id AS user_id,
    u.name AS user_name,
    u.email AS user_email,
    m->>'role' AS member_role
FROM organizations o
JOIN jsonb_array_elements(o.members) AS m ON true
JOIN users u ON m->>'id' = u.id::TEXT;
```

This query returns the following values:

```text
| organization_id | user_id | user_name | user_email      | member_role |
|-----------------|---------|-----------|-----------------|-------------|
| 2               | 23      | Max       | max@gmail.com   | user        |
| 1               | 23      | Max       | max@gmail.com   | admin       |
| 3               | 24      | Joe       | joe@gmail.com   | admin       |
| 1               | 24      | Joe       | joe@gmail.com   | default     |
| 4               | 25      | Alice     | alice@gmail.com | user        |
| 3               | 25      | Alice     | alice@gmail.com | default     |
```

## Additional considerations

This section outlines additional considerations, including alternative functions.

### Alternatives to `jsonb_array_elements`

Use `jsonb_array_elements` when you need to maintain the `JSON` structure of the elements for further `JSON`-related operations or analysis, and `jsonb_array_elements_text` when you need to work with the extracted elements as plain text for string operations, text analysis, or integration with text-based functions.

If you want to create a comma-separated list of all skills for each developer in the `developers` table, `jsonb_array_elements_text` can be used along with `string_agg`.

```sql
SELECT
    name,
    string_agg(skill, ',') AS skill_list
FROM developers, jsonb_array_elements_text(skills) AS skill
GROUP BY name;
```

This query returns the following values:

```text
| name    | skill_list      |
|---------|-----------------|
| Alice   | Java,Python,SQL |
| Bob     | C++,JavaScript  |
| Charlie | HTML,CSS,React  |
```

Using `jsonb_array_elements` instead would result in an error, because it returns `JSONB` values and `string_agg` expects `text` input.

```sql
SELECT
    name,
    string_agg(skill, ',') AS skill_list
FROM developers, jsonb_array_elements(skills) AS skill
GROUP BY name;
```

**jsonb_path_query**

`jsonb_path_query` uses `JSON` Path expressions for flexible navigation and filtering within `JSONB` structures and returns the matching elements as a set of `JSONB` values.
It supports filtering within the path expression itself, enabling complex conditions, and it excels at navigating and extracting elements from nested arrays and objects. If your query involves navigating through multiple levels of nesting, complex filtering conditions, or updates to `JSONB` data, `jsonb_path_query` is often the preferred choice.

Consider a simple example: extracting the first skill of each developer in the `developers` table:

```sql
SELECT jsonb_path_query(skills, '$[0]') AS first_skill
FROM developers;
```

This query returns the following values:

```text
| first_skill |
|-------------|
| "Java"      |
| "C++"       |
| "HTML"      |
```

## Resources

- [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html)
- [PostgreSQL documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html)

---

# Source: https://neon.com/llms/functions-jsonb_each.txt

# Postgres jsonb_each() function

> The document explains the usage of the Postgres `jsonb_each()` function, detailing how it iterates over each key-value pair in a JSONB object, specifically for Neon users working with JSON data in PostgreSQL.

## Source

- [Postgres jsonb_each() function HTML](https://neon.com/docs/functions/jsonb_each): The original HTML version of this documentation

The `jsonb_each` function in Postgres is used to expand a `JSONB` object into a set of key-value pairs. It is useful when you need to iterate over a `JSONB` object's keys and values, such as when you're working with dynamic `JSONB` structures where the schema is not fixed. Another important use case is performing data transformations and analytics.

## Function signature

```sql
jsonb_each(json JSONB) -> SETOF record(key TEXT, value JSONB)
```

The function returns a set of rows, each containing a key and the corresponding value for each field in the input `JSONB` object. The key is of type `text`, while the value is of type `JSONB`.

## Example usage

Consider a `JSONB` object representing a user's profile information. The `JSONB` data will have multiple attributes and might look like this:

```json
{
  "username": "johndoe",
  "age": 30,
  "email": "johndoe@example.com"
}
```

We can go over all the fields in the profile `JSONB` object using `jsonb_each`, and produce a row for each key-value pair. Note that `JSONB` does not preserve the order of object keys, so the pairs come back in `JSONB`'s internal key order:

```sql
SELECT key, value
FROM jsonb_each('{"username": "johndoe", "age": 30, "email": "johndoe@example.com"}');
```

This query returns the following results:

```text
| key      | value                 |
|----------|-----------------------|
| age      | 30                    |
| email    | "johndoe@example.com" |
| username | "johndoe"             |
```

## Advanced examples

### Assign custom names to columns output by `jsonb_each`

You can use `AS` to specify custom column names for the key and value columns.

```sql
SELECT attr_name, attr_value
FROM jsonb_each('{"username": "johndoe", "age": 30, "email": "johndoe@example.com"}') AS user_data(attr_name, attr_value);
```

This query returns the following results:

```text
| attr_name | attr_value            |
|-----------|-----------------------|
| age       | 30                    |
| email     | "johndoe@example.com" |
| username  | "johndoe"             |
```

### Use `jsonb_each` output as a table or row source

Since `jsonb_each` returns a set of rows, you can use it as a table source in a `FROM` clause. This lets us join the expanded `JSONB` data in the output with other tables.
Here, we're joining each row in the `user_data` table with the output of `jsonb_each`:

```sql
CREATE TABLE user_data (
    id INT,
    profile JSONB
);

INSERT INTO user_data (id, profile) VALUES
    (123, '{"username": "johndoe", "age": 30, "email": "johndoe@example.com"}'),
    (140, '{"username": "mikesmith", "age": 40, "email": "mikesmith@example.com"}');

SELECT id, key, value
FROM user_data, jsonb_each(user_data.profile);
```

This query returns the following results:

```text
| id  | key      | value                   |
|-----|----------|-------------------------|
| 123 | age      | 30                      |
| 123 | email    | "johndoe@example.com"   |
| 123 | username | "johndoe"               |
| 140 | age      | 40                      |
| 140 | email    | "mikesmith@example.com" |
| 140 | username | "mikesmith"             |
```

## Additional considerations

### Performance implications

When working with large `JSONB` objects, `jsonb_each` may lead to performance overhead, as it expands each key-value pair into a separate row.

### Alternative functions

- `jsonb_each_text` - Similar functionality to `jsonb_each` but returns the value as a text type instead of `JSONB`.
- `jsonb_object_keys` - It returns only the set of keys in the `JSONB` object, without the values.
- [json_each](https://neon.com/docs/functions/json_each) - It provides the same functionality as `jsonb_each`, but accepts `JSON` input instead of `JSONB`.

## Resources

- [PostgreSQL documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html)

---

# Source: https://neon.com/llms/functions-jsonb_extract_path.txt

# Postgres jsonb_extract_path() function

> The document details the usage of the `jsonb_extract_path()` function in PostgreSQL, explaining how it retrieves JSON values from a specified path within a `jsonb` column, specifically for Neon database users.

## Source

- [Postgres jsonb_extract_path() function HTML](https://neon.com/docs/functions/jsonb_extract_path): The original HTML version of this documentation

You can use the `jsonb_extract_path` function to extract the value at a specified path within a `JSONB` document. This approach is more performant than fetching the entire `JSONB` payload and processing it on the application side. It is particularly useful when dealing with nested `JSONB` structures.

## Function signature

```sql
jsonb_extract_path(from_json JSONB, VARIADIC path_elems TEXT[]) -> JSONB
```

## Example usage

To illustrate the `jsonb_extract_path` function in Postgres, let's consider a scenario where we have a table storing information about books. Each book has a `JSONB` column containing details such as `title`, `author`, and publication `year`. You can create the `books` table using the SQL statements shown below.

**books**

```sql
CREATE TABLE books (
    id INT,
    info JSONB
);

INSERT INTO books (id, info) VALUES
    (1, '{"title": "The Catcher in the Rye", "author": "J.D. Salinger", "year": 1951}'),
    (2, '{"title": "To Kill a Mockingbird", "author": "Harper Lee", "year": 1960}'),
    (3, '{"title": "1984", "author": "George Orwell", "year": 1949}');
```

```text
| id | info                                                                           |
|----|--------------------------------------------------------------------------------|
| 1  | {"title": "The Catcher in the Rye", "author": "J.D.
Salinger", "year": 1951} | | 2 | {"title": "To Kill a Mockingbird", "author": "Harper Lee", "year": 1960} | | 3 | {"title": "1984", "author": "George Orwell", "year": 1949} | ``` Now, let's use the `jsonb_extract_path` function to extract the `title` and `author` of each book: ```sql SELECT id, jsonb_extract_path(info, 'title') as title, jsonb_extract_path(info, 'author') as author FROM books; ``` This query returns the following values: ```text | id | title | author | |----|--------------------------|------------------| | 1 | "The Catcher in the Rye" | "J.D. Salinger" | | 2 | "To Kill a Mockingbird" | "Harper Lee" | | 3 | "1984" | "George Orwell" | ``` ## Advanced examples Consider a `products` table that stores information about the products in an e-commerce system. The table schema and data are outlined below. **products** ```sql CREATE TABLE products ( id INT, attributes JSONB ); INSERT INTO products (id, attributes) VALUES (1, '{"name": "Laptop", "specs": {"brand": "Dell", "RAM": "16GB", "storage": {"type": "SSD", "capacity": "512GB"}}, "tags": ["pc"]}'), (2, '{"name": "Smartphone", "specs": {"brand": "Google", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "256GB"}}, "tags": ["android", "pixel"]}'), (3, '{"name": "Smartphone", "specs": {"brand": "Apple", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "128GB"}}, "tags": ["ios", "iphone"]}'); ``` ```text | id | attributes | |--------|---------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | {"name": "Laptop", "specs": {"brand": "Dell", "RAM": "16GB", "storage": {"type": "SSD", "capacity": "512GB"}}, "tags": ["pc"]} | | 2 | {"name": "Smartphone", "specs": {"brand": "Google", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "256GB"}}, "tags": ["android", "pixel"]} | | 3 | {"name": "Smartphone", "specs": {"brand": "Apple", "RAM": "8GB", "storage": {"type": "UFS", "capacity": "128GB"}}, "tags": ["ios", "iphone"]} | ``` ### Extract value from nested JSONB object with `jsonb_extract_path` Let's use `jsonb_extract_path` to retrieve information about the storage type and capacity for each product, demonstrating how to extract values from a nested `JSONB` object. ```sql SELECT id, jsonb_extract_path(attributes, 'specs', 'storage', 'type') as storage_type, jsonb_extract_path(attributes, 'specs', 'storage', 'capacity') as storage_capacity FROM products; ``` This query returns the following values: ```text | id | storage_type | storage_capacity | |----|--------------|------------------| | 1 | "SSD" | "512GB" | | 2 | "UFS" | "256GB" | | 3 | "UFS" | "128GB" | ``` ### Extract values from JSON array with `jsonb_extract_path` Now, let's use `jsonb_extract_path` to extract information about the associated tags as well, demonstrating how to extract values from a `JSONB` array. 
```sql
SELECT
    id,
    jsonb_extract_path(attributes, 'specs', 'storage', 'type') as storage_type,
    jsonb_extract_path(attributes, 'specs', 'storage', 'capacity') as storage_capacity,
    jsonb_extract_path(attributes, 'tags', '0') as first_tag,
    jsonb_extract_path(attributes, 'tags', '1') as second_tag
FROM products;
```

This query returns the following values:

```text
| id | storage_type | storage_capacity | first_tag | second_tag |
|----|--------------|------------------|-----------|------------|
| 1  | "SSD"        | "512GB"          | "pc"      | null       |
| 2  | "UFS"        | "256GB"          | "android" | "pixel"    |
| 3  | "UFS"        | "128GB"          | "ios"     | "iphone"   |
```

### Joining data with values extracted using `jsonb_extract_path`

Let's say you have two tables, `employees` and `departments`, and the `employees` table has a `JSONB` column named `details` that contains information about each employee's department. You want to join these tables based on the department information stored in the `JSONB` column. The table schemas and data used in this example are shown below.

**departments**

```sql
CREATE TABLE departments (
    department_id SERIAL PRIMARY KEY,
    department_name VARCHAR(255)
);

INSERT INTO departments (department_name) VALUES
    ('IT'),
    ('HR'),
    ('Marketing');
```

```text
| department_id | department_name |
|---------------|-----------------|
| 1             | IT              |
| 2             | HR              |
| 3             | Marketing       |
```

**employees**

```sql
CREATE TABLE employees (
    employee_id SERIAL PRIMARY KEY,
    employee_name VARCHAR(255),
    details JSONB
);

INSERT INTO employees (employee_name, details) VALUES
    ('John Doe', '{"department": "IT"}'),
    ('Jane Smith', '{"department": "HR"}'),
    ('Bob Johnson', '{"department": "Marketing"}');
```

```text
| employee_id | employee_name | details                     |
|-------------|---------------|-----------------------------|
| 1           | John Doe      | {"department": "IT"}        |
| 2           | Jane Smith    | {"department": "HR"}        |
| 3           | Bob Johnson   | {"department": "Marketing"} |
```

You can use `JOIN` with `jsonb_extract_path` to retrieve the value to join on:

```sql
SELECT employees.employee_name, departments.department_name
FROM employees
JOIN departments
ON TRIM(BOTH '"' FROM jsonb_extract_path(employees.details, 'department')::TEXT) = departments.department_name;
```

This query returns the following values:

```text
| employee_name | department_name |
|---------------|-----------------|
| John Doe      | IT              |
| Jane Smith    | HR              |
| Bob Johnson   | Marketing       |
```

The `jsonb_extract_path` function extracts the value of the `department` key from the `JSONB` column in the `employees` table. The `JOIN` is then performed based on matching department names.

### Handling invalid path inputs to `jsonb_extract_path`

`jsonb_extract_path` handles an invalid path by returning `NULL`, as in the following example:

```sql
SELECT id,
       jsonb_extract_path(attributes, 'speks') as storage_type
FROM products;
```

The query above, which specifies an invalid path (`'speks'` instead of `'specs'`), returns `NULL` as shown:

```text
id | storage_type
----+--------------
 1 |
 2 |
 3 |
```

## Additional considerations

### Performance and indexing

The `jsonb_extract_path` function performs well when extracting data from `JSONB` documents, especially compared to extracting data in application code. It allows performing the extraction directly in the database, avoiding transferring entire `JSONB` documents to the application.

Indexing `JSONB` documents can also significantly improve `jsonb_extract_path` query performance when filtering data based on values extracted from `JSON`.
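For example, an expression index on a frequently filtered path lets Postgres look up matching rows without scanning every document. This is a minimal sketch using the `products` table from above; the index name and query shape are illustrative:

```sql
-- Index the extracted storage type. Queries that filter on the
-- same expression can use this index instead of a full scan.
CREATE INDEX idx_products_storage_type
    ON products ((attributes #>> '{specs,storage,type}'));

-- The filter below matches the indexed expression exactly:
SELECT id
FROM products
WHERE attributes #>> '{specs,storage,type}' = 'SSD';
```

The `#>>` operator extracts the path as text, which keeps the indexed expression and the filter comparable as plain strings.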
### Alternative functions - [jsonb_extract_path_text](https://neon.com/docs/functions/jsonb_extract_path_text) - The regular `jsonb_extract_path` function returns the extracted value as a `JSONB` object or array, preserving its `JSON` structure, whereas the alternative `jsonb_extract_path_text` function returns the extracted value as a plain text string, casting any `JSONB` objects or arrays to their string representations. Use the regular `jsonb_extract_path` function when you need to apply `JSONB`-specific functions or operators to the extracted value, requiring `JSONB` data types. The alternative `jsonb_extract_path_text` function is preferable if you need to work directly with the extracted value as a string, for text processing, concatenation, or comparison. - [json_extract_path](https://neon.com/docs/functions/json_extract_path) - The `jsonb_extract_path` function works with the `JSONB` data type, which offers a binary representation of `JSON` data, whereas `json_extract_path` takes a `JSON` value as an input and returns `JSON` too. The `JSONB` variant is typically more performant at query time, which is even more pronounced with larger `JSON` data payloads and frequent path extractions. ## Resources - [PostgreSQL documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html) - [PostgreSQL documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html) --- # Source: https://neon.com/llms/functions-jsonb_extract_path_text.txt # Postgres jsonb_extract_path_text() Function > The document explains the usage of the `jsonb_extract_path_text()` function in Neon, detailing how it retrieves text from a JSONB object at a specified path. ## Source - [Postgres jsonb_extract_path_text() Function HTML](https://neon.com/docs/functions/jsonb_extract_path_text): The original HTML version of this documentation The `jsonb_extract_path_text` function is designed to simplify extracting text from `JSONB` data in Postgres. This function is similar to `jsonb_extract_path` — it also produces the value at the specified path from a `JSONB` object but casts it to plain text before returning. This makes it more straightforward for text manipulation and comparison operations. ## Function signature ```sql jsonb_extract_path_text(from_json JSONB, VARIADIC path_elems text[]) -> TEXT ``` The function accepts a `JSONB` object and a variadic list of elements that specify the path to the desired value. ## Example usage Let's consider a `users` table with a `JSONB` column named `profile` containing various user details. 
Here's how we can create the table and insert some sample data:

```sql
CREATE TABLE users (
    id INT,
    profile JSONB
);

INSERT INTO users (id, profile) VALUES
    (1, '{"name": "Alice", "contact": {"email": "alice@example.com", "phone": "1234567890"}, "hobbies": ["reading", "cycling", "hiking"]}'),
    (2, '{"name": "Bob", "contact": {"email": "bob@example.com", "phone": "0987654321"}, "hobbies": ["gaming", "cooking"]}');
```

To extract and view the email addresses of all users, we can run the following query:

```sql
SELECT id,
       jsonb_extract_path_text(profile, 'contact', 'email') as email
FROM users;
```

This query returns the following:

```text
| id | email             |
|----|-------------------|
| 1  | alice@example.com |
| 2  | bob@example.com   |
```

## Advanced examples

### Use output of `jsonb_extract_path_text` in a `JOIN` clause

Let's say we have another table, `hobbies`, that includes additional information such as difficulty level and the average cost to practice each hobby.

We can create the `hobbies` table with some sample data with the following statements:

```sql
CREATE TABLE hobbies (
    hobby_id SERIAL PRIMARY KEY,
    hobby_name VARCHAR(255),
    difficulty_level VARCHAR(50),
    average_cost VARCHAR(50)
);

INSERT INTO hobbies (hobby_name, difficulty_level, average_cost) VALUES
    ('Reading', 'Easy', 'Low'),
    ('Cycling', 'Moderate', 'Medium'),
    ('Gaming', 'Variable', 'High'),
    ('Cooking', 'Variable', 'Low');
```

The `users` table we created previously has a `JSONB` column named `profile` that contains information about each user's preferred hobbies. A fun exercise is to find out whether a user has any hobbies that are easy to get started with, so we can recommend they engage with those more often. To fetch this list, we can run the query below.

```sql
SELECT jsonb_extract_path_text(u.profile, 'name') as user_name, h.hobby_name
FROM users u
JOIN hobbies h
ON jsonb_extract_path_text(u.profile, 'hobbies') LIKE '%' || lower(h.hobby_name) || '%'
WHERE h.difficulty_level = 'Easy';
```

We use `jsonb_extract_path_text` to extract the list of hobbies for each user, and then check if the name of an easy hobby is present in the list. This query returns the following:

```text
| user_name | hobby_name |
|-----------|------------|
| Alice     | Reading    |
```

### Extract values from JSON array with `jsonb_extract_path_text`

`jsonb_extract_path_text` can also be used to extract values from `JSONB` arrays. For instance, to extract the first and second hobbies for everyone, we can run the following query:

```sql
SELECT
    jsonb_extract_path_text(profile, 'name') as name,
    jsonb_extract_path_text(profile, 'hobbies', '0') as first_hobby,
    jsonb_extract_path_text(profile, 'hobbies', '1') as second_hobby
FROM users;
```

This query returns the following:

```text
| name  | first_hobby | second_hobby |
|-------|-------------|--------------|
| Alice | reading     | cycling      |
| Bob   | gaming      | cooking      |
```

## Additional considerations

### Performance and indexing

Performance considerations for `jsonb_extract_path_text` are similar to those for `jsonb_extract_path`. It is efficient for extracting data but can be impacted by large `JSONB` objects or complex queries. Indexing the `JSONB` column can improve performance in some cases.

### Alternative functions

- [jsonb_extract_path](https://neon.com/docs/functions/jsonb_extract_path) - This is a similar function that can extract data from a `JSONB` object at the specified path. The difference is that it returns a `JSONB` object, while `jsonb_extract_path_text` always returns text.
The right function to use depends on what you want to use the output data for.

- [json_extract_path_text](https://neon.com/docs/functions/json_extract_path_text) - This is a similar function that can extract data from a `JSON` object (instead of `JSONB`) at the specified path.

## Resources

- [PostgreSQL Documentation: JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html)
- [PostgreSQL Documentation: JSON Types](https://www.postgresql.org/docs/current/datatype-json.html)

---

# Source: https://neon.com/llms/functions-jsonb_object.txt

# Postgres jsonb_object() function

> The document details the usage of the Postgres `jsonb_object()` function within Neon, explaining how to construct JSON objects from key-value pairs in a database context.

## Source

- [Postgres jsonb_object() function HTML](https://neon.com/docs/functions/jsonb_object): The original HTML version of this documentation

The `jsonb_object` function in Postgres is used to create a `JSONB` object from a set of key-value pairs. It is particularly useful when you need to generate `JSONB` data dynamically from existing table data or input parameters.

## Function signature

```sql
jsonb_object(keys TEXT[], values TEXT[]) -> JSONB
-- or --
jsonb_object(keys_values TEXT[]) -> JSONB
```

This function takes two text arrays as input: one for keys and one for values. Both arrays must have the same number of elements, as each key is paired with the corresponding value to construct the `JSONB` object.

Alternatively, you can pass a single text array containing both keys and values. In this case, alternate elements in the array are treated as keys and values, respectively.

## Example usage

Consider a scenario where you run a bookstore and have a table that tracks details for each book. The table with some sample data can be set up as shown:

```sql
-- Test database table for a bookstore inventory
CREATE TABLE book_inventory (
    book_id INT,
    title TEXT,
    author TEXT,
    price NUMERIC,
    genre TEXT
);

-- Inserting some test data into `book_inventory`
INSERT INTO book_inventory VALUES
    (101, 'The Great Gatsby', 'F. Scott Fitzgerald', 18.99, 'Classic'),
    (102, 'Invisible Man', 'Ralph Ellison', 15.99, 'Novel');
```

When querying this dataset, the frontend client might want to present the data in a different way. Say you want the catalog presented as a list of book titles, with the remaining fields combined into a single `metadata` attribute. You can do so as shown here:

```sql
SELECT
    book_id,
    title,
    jsonb_object(
        ARRAY['author', 'genre'],
        ARRAY[author, genre]
    ) AS metadata
FROM book_inventory;
```

This query returns the following result (note that `JSONB` does not preserve the order in which keys were specified):

```text
| book_id | title            | metadata                                              |
|---------|------------------|-------------------------------------------------------|
| 101     | The Great Gatsby | {"genre": "Classic", "author": "F. Scott Fitzgerald"} |
| 102     | Invisible Man    | {"genre": "Novel", "author": "Ralph Ellison"}         |
```

## Advanced examples

### Creating nested JSON objects with `jsonb_object`

You could use `jsonb_object` to create nested `JSONB` objects for representing more complex data. However, since `jsonb_object` only expects text values for each key, we will need to combine it with other `JSONB` functions like `jsonb_build_object`.
For example: ```sql SELECT jsonb_build_object( 'title', title, 'author', jsonb_object(ARRAY['name', 'genre'], ARRAY[author, genre]) ) AS book_info FROM book_inventory; ``` This query returns the following result: ```text | book_info | |--------------------------------------------------------------------------------------------------| | {"title" : "The Great Gatsby", "author" : {"name" : "F. Scott Fitzgerald", "genre" : "Classic"}} | | {"title" : "Invisible Man", "author" : {"name" : "Ralph Ellison", "genre" : "Novel"}} | ``` ## Additional considerations ### Gotchas - Ensure both keys and values arrays have the same number of elements. Mismatched arrays will result in an error. Or, if passing in a single key-value array, ensure that the array has an even number of elements. - Be aware of data type conversions. Since `jsonb_object` expects text arrays, you may need to explicitly cast non-text data types to text. ### Alternative options - [json_object](https://neon.com/docs/functions/json_object) - Same functionality as `jsonb_object`, but returns a `JSON` object instead of `JSONB`. - [to_jsonb](https://www.postgresql.org/docs/current/functions-json.html) - It can be used to create a `JSONB` object from a table row (or a row of a composite type) without needing to specify keys and values explicitly. However, it is less flexible than `jsonb_object`, since all fields in the row are included in the `JSONB` object. - [jsonb_build_object](https://www.postgresql.org/docs/current/functions-json.html) - Similar to `jsonb_object`, but allows for more flexibility in constructing the `JSONB` object, as it can take a variable number of arguments in the form of key-value pairs. - [jsonb_object_agg](https://www.postgresql.org/docs/current/functions-json.html) - It is used to aggregate the key-value pairs from multiple rows into a single `JSONB` object. In contrast, `jsonb_object` outputs a `JSONB` object for each row. ## Resources - [PostgreSQL documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html) --- # Source: https://neon.com/llms/functions-jsonb_populate_record.txt # Postgres jsonb_populate_record() function > The document explains the usage of the Postgres `jsonb_populate_record()` function, detailing how it converts JSONB data into a Postgres record type, specifically for Neon database users. ## Source - [Postgres jsonb_populate_record() function HTML](https://neon.com/docs/functions/jsonb_populate_record): The original HTML version of this documentation The `jsonb_populate_record` function is used to populate a record type with values from a `JSONB` object. It is useful for parsing `JSONB` data received from external sources, particularly when merging it into an existing record. ## Function signature ```sql jsonb_populate_record(base_record ANYELEMENT, json JSONB) -> ANYELEMENT ``` This function takes two arguments: a base record of a row type (which can even be a `NULL` record) and a `JSONB` object. It returns the record updated with the `JSONB` values. ## Example usage Consider a database table that tracks employee information. When you receive employee information as `JSONB` records, you can use `jsonb_populate_record` to ingest the data into the table. Here we create the `employees` table. ```sql CREATE TABLE employees ( id INT, name TEXT, department TEXT, salary NUMERIC ); ``` To illustrate, we start with a `NULL` record and cast the input `JSONB` payload to the `employees` record type.
```sql INSERT INTO employees SELECT * FROM jsonb_populate_record( NULL::employees, '{"id": "123", "name": "John Doe", "department": "Engineering", "salary": "75000"}' ) RETURNING *; ``` This query returns the following result: ```text | id | name | department | salary | |----|----------|-------------|--------| | 123| John Doe | Engineering | 75000 | ``` ## Advanced examples ### Handling partial data with `jsonb_populate_record` For data points where the `JSONB` objects have missing keys, `jsonb_populate_record` can still cast them into valid records. Say we receive records for a bunch of employees who are known to be in Sales, but the `department` field is missing from the `JSONB` payload. We can use `jsonb_populate_record` with a base record that specifies a default value for the missing field, while the other fields are populated from the `JSONB` payload, as in this example: ```sql INSERT INTO employees SELECT * FROM jsonb_populate_record( (1, 'ABC', 'Sales', 0)::employees, '{"id": "124", "name": "Jane Smith", "salary": "68000"}' ) RETURNING *; ``` This query returns the following: ```text | id | name | department | salary | |----|------------|------------|--------| | 124| Jane Smith | Sales | 68000 | ``` ### Using `jsonb_populate_record` with custom types The base record doesn't need to have the type of a table row and can be a [custom Postgres type](https://www.postgresql.org/docs/current/sql-createtype.html) too. For example, here we first define a custom type `address` and use `jsonb_populate_record` to cast a `JSONB` object to it: ```sql CREATE TYPE address AS ( street TEXT, city TEXT, zip TEXT ); SELECT * FROM jsonb_populate_record( NULL::address, '{"street": "123 Main St", "city": "San Francisco", "zip": "94105"}' ); ``` This query returns the following result: ```text | street | city | zip | |------------|---------------|-------| | 123 Main St| San Francisco | 94105 | ``` ## Additional considerations ### Alternative options - [jsonb_to_record](https://neon.com/docs/functions/jsonb_to_record) - It can be used similarly, with a couple of differences. `jsonb_populate_record` can be used with a base record of a pre-defined type, whereas `jsonb_to_record` needs the record type defined inline in the `AS` clause. Further, `jsonb_populate_record` can specify default values for missing fields through the base record, whereas `jsonb_to_record` must assign them NULL values. - `jsonb_populate_recordset` - It can be used similarly to parse `JSONB`, the difference being that it returns a set of records instead of a single record. For example, if you have an array of `JSONB` objects, you can use `jsonb_populate_recordset` to convert each object into a new row. - [json_populate_record](https://neon.com/docs/functions/json_populate_record) - It has the same functionality as `jsonb_populate_record`, but accepts `JSON` input instead of `JSONB`. ## Resources - [Postgres documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html) --- # Source: https://neon.com/llms/functions-jsonb_to_record.txt # Postgres jsonb_to_record() function > The document explains the usage of the `jsonb_to_record()` function in Neon, detailing how it converts JSONB data into a set of columns, facilitating structured data manipulation within PostgreSQL databases.
## Source - [Postgres jsonb_to_record() function HTML](https://neon.com/docs/functions/jsonb_to_record): The original HTML version of this documentation You can use the `jsonb_to_record` function to convert a top-level `JSONB` object into a row, with the type specified by the `AS` clause. This function is useful when you need to parse `JSONB` data received from external sources, such as APIs or file uploads, and store it in a structured format. By using `jsonb_to_record`, you can easily extract values from `JSONB` and map them to the corresponding columns in your database table. ## Function signature ```sql jsonb_to_record(json JSONB) AS (column_name column_type [, ...]) ``` The function's definition includes a column definition list, where you specify the name and data type of each column in the resulting record. ## Example usage Consider a scenario in which you have `JSONB` data representing employee information, and you want to ingest it for easier processing later. The `JSONB` data looks like this: ```json { "id": "123", "name": "John Doe", "department": "Engineering", "salary": "75000" } ``` The table you want to insert data into is defined as follows: ```sql CREATE TABLE employees ( id INT, name TEXT, department TEXT, salary NUMERIC ); ``` Using `jsonb_to_record`, you can insert the input data into the `employees` table as shown: ```sql INSERT INTO employees SELECT * FROM jsonb_to_record('{"id": "123", "name": "John Doe", "department": "Engineering", "salary": "75000"}') AS x(id INT, name TEXT, department TEXT, salary NUMERIC); ``` Note that the string representation of the JSON object didn't need to be explicitly cast to `JSONB`. Postgres automatically casts it to `JSONB` when the function is called. To verify the data was inserted, you can run the following query: ```sql SELECT * FROM employees; ``` This query returns the following result: ```text | id | name | department | salary | |----|----------|--------------|--------| | 123| John Doe | Engineering | 75000 | ``` ## Advanced examples This section provides advanced `jsonb_to_record` examples. ### Handling partial data with `jsonb_to_record` For data points where the `JSONB` objects have missing keys, `jsonb_to_record` can still cast them into records, producing `NULL` values for the unmatched columns. For example: ```sql INSERT INTO employees SELECT * FROM jsonb_to_record('{ "id": "124", "name": "Jane Smith" }') AS x(id INT, name TEXT, department TEXT, salary NUMERIC) RETURNING *; ``` This query returns the following result: ```text | id | name | department | salary | |----|------------|--------------|--------| | 124| Jane Smith | | | ``` ### Handling nested data with `jsonb_to_record` `jsonb_to_record` can also be used to handle nested `JSONB` input data (i.e., keys with values that are `JSONB` objects themselves). You need to first define a [custom Postgres type](https://www.postgresql.org/docs/current/sql-createtype.html). The newly created type can then be used in the column definition list along with the other columns. In the following example, we handle the `address` field by creating an `ADDRESS_TYPE` type first.
```sql CREATE TYPE ADDRESS_TYPE AS ( street TEXT, city TEXT ); SELECT * FROM jsonb_to_record('{ "id": "125", "name": "Emily Clark", "department": "Marketing", "salary": "68000", "address": { "street": "123 Elm St", "city": "Springfield" } }') AS x(id INT, name TEXT, department TEXT, salary NUMERIC, address ADDRESS_TYPE); ``` This query returns the following result: ```text | id | name | department | salary | address | |-----|-------------|------------|--------|----------------------------| | 125 | Emily Clark | Marketing | 68000 | ("123 Elm St",Springfield) | ``` ### Alternative functions - [jsonb_populate_record](https://neon.com/docs/functions/jsonb_populate_record): This function can also be used to create records using values from a `JSONB` object. The difference is that `jsonb_populate_record` requires the record type to be defined beforehand, while `jsonb_to_record` needs the type definition inline. - [jsonb_to_recordset](https://www.postgresql.org/docs/current/functions-json.html): This function can be used similarly to parse `JSONB`, the difference being that it returns a set of records instead of a single record. For example, if you have an array of `JSONB` objects, you can use `jsonb_to_recordset` to convert each object into a new row. - [json_to_record](https://neon.com/docs/functions/json_to_record): This function provides the same functionality as `jsonb_to_record`, but accepts `JSON` input instead of `JSONB`. In cases where the input payload type isn't exactly specified, either of the two functions can be used. For example, take this `json_to_record` query: ```sql SELECT * FROM json_to_record('{"id": "123", "name": "John Doe", "department": "Engineering"}') AS x(id INT, name TEXT, department TEXT); ``` It works just as well as this `JSONB` variant (below) since Postgres casts the literal `JSON` object to `JSON` or `JSONB` depending on the context. ```sql SELECT * FROM jsonb_to_record('{"id": "123", "name": "John Doe", "department": "Engineering"}') AS x(id INT, name TEXT, department TEXT); ``` ## Resources - [PostgreSQL documentation: JSON functions](https://www.postgresql.org/docs/current/functions-json.html) --- # Source: https://neon.com/llms/functions-lower.txt # Postgres lower() function > The document explains the usage of the Postgres `lower()` function in Neon, detailing how it converts text to lowercase within the database environment. ## Source - [Postgres lower() function HTML](https://neon.com/docs/functions/lower): The original HTML version of this documentation The `lower()` function in Postgres is used to convert a string to lowercase. It's commonly used for search functionality where you want case-insensitive matching, or when you need to standardize user input for storage or comparison purposes. For example, `lower()` can be used to normalize email addresses or usernames in a user management system. ## Function signature The `lower()` function has a simple signature: ```sql lower(string text) -> text ``` - `string`: The input string to be converted to lowercase. ## Example usage Consider a table `products` with a `product_name` column that contains product names with inconsistent capitalization. We can use `lower()` to standardize these names for comparison or display purposes.
```sql WITH products AS ( SELECT * FROM ( VALUES ('LAPTOP Pro X'), ('SmartPhone Y'), ('Tablet ULTRA 2') ) AS t(product_name) ) SELECT lower(product_name) AS standardized_name FROM products; ``` This query converts all product names to lowercase, making them consistent regardless of their original capitalization. Note that non-alphabetic characters are left unchanged. ```text standardized_name ------------------- laptop pro x smartphone y tablet ultra 2 (3 rows) ``` ## Advanced examples ### Case-insensitive search You can use `lower()` in a `WHERE` clause to perform case-insensitive searches: ```sql WITH customers AS ( SELECT 'John Doe' AS name, 'JOHN.DOE@EXAMPLE.COM' AS email UNION ALL SELECT 'Jane Smith' AS name, 'jane.smith@example.com' AS email UNION ALL SELECT 'Bob Johnson' AS name, 'Bob.Johnson@Example.com' AS email ) SELECT name, email FROM customers WHERE lower(email) LIKE lower('%John.%'); ``` This query will find the customer regardless of how the email address was capitalized in the database or search term. ```text name | email ----------+---------------------- John Doe | JOHN.DOE@EXAMPLE.COM (1 row) ``` ### Combining with other string functions `lower()` can be combined with other string functions for more complex operations: ```sql WITH user_data AS ( SELECT 'JOHN_DOE_123' AS username UNION ALL SELECT 'JANE_SMITH_456' AS username UNION ALL SELECT 'BOB_JOHNSON_789' AS username ) SELECT lower(split_part(username, '_', 1)) AS first_name, lower(split_part(username, '_', 2)) AS last_name, split_part(username, '_', 3) AS user_id FROM user_data; ``` This query splits the username into parts, converts the name parts to lowercase, and keeps the user ID as-is. ### Using `lower()` to create indexes Postgres supports creating a _functional index_ based on the result of a function applied to a column. To optimize case-insensitive searches, we can create an index using the `lower()` function: ```sql CREATE TABLE users ( id SERIAL PRIMARY KEY, name TEXT NOT NULL ); CREATE INDEX idx_users_name_lower ON users (lower(name)); ``` This index will improve the performance of queries that use `lower(name)` to filter data. ### Normalizing data for uniqueness constraints When you want to enforce uniqueness regardless of case, you can use `lower()` to create a unique index on the column. ```sql CREATE TABLE organizations ( id SERIAL PRIMARY KEY, name TEXT NOT NULL ); CREATE UNIQUE INDEX idx_organizations_name_lower ON organizations (lower(name)); INSERT INTO organizations (name) VALUES ('Acme Corp'); INSERT INTO organizations (name) VALUES ('Bailey Inc.'); ``` Trying to insert a duplicate organization name with different capitalization will raise an error: ```sql INSERT INTO organizations (name) VALUES ('ACME CORP'); -- ERROR: duplicate key value violates unique constraint "idx_organizations_name_lower" -- DETAIL: Key (lower(name))=(acme corp) already exists. ``` ## Additional considerations ### Performance implications While `lower()` is generally fast, using it in `WHERE` clauses or `JOIN` conditions on large tables can impact performance, as it prevents the use of standard indexes directly. In such cases, consider using functional indexes as shown in the earlier example. ### Locale considerations The `lower()` function uses the database's locale setting for its case conversion rules. If your application needs to handle multiple languages, you may need to consider using the `lower()` function with specific collations or implementing custom case-folding logic. 
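To see locale-dependent case conversion in action, here's a minimal sketch using an ICU collation. This assumes your Postgres build includes ICU support and provides the `tr-x-icu` collation, which applies Turkish case-mapping rules:

```sql
-- With the default collation, uppercase I lowercases to i
SELECT lower('INDEX');
-- index

-- With a Turkish ICU collation, uppercase I lowercases to dotless ı
SELECT lower('INDEX' COLLATE "tr-x-icu");
-- ındex
```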
### Alternative functions - `upper()` - Converts a string to uppercase. - `initcap()` - Converts the first letter of each word to uppercase and the rest to lowercase. ## Resources - [PostgreSQL documentation: String functions and operators](https://www.postgresql.org/docs/current/functions-string.html) - [PostgreSQL documentation: Indexes on expressions](https://www.postgresql.org/docs/current/indexes-expressional.html) --- # Source: https://neon.com/llms/functions-math-abs.txt # Postgres abs() function > The document explains the usage of the Postgres `abs()` function within Neon, detailing how it returns the absolute value of a given numeric expression. ## Source - [Postgres abs() function HTML](https://neon.com/docs/functions/math-abs): The original HTML version of this documentation The Postgres `abs()` function is used to compute the absolute value of a number. The absolute value is the non-negative value of a number without regard to its sign. It's useful in multiple scenarios when working with numbers, such as calculating distances, comparing magnitudes regardless of direction, or ensuring non-negative values in financial calculations. ## Function signature The `abs()` function has a simple form: ```sql abs(number) -> number ``` - `number`: The input value for which you want to calculate the absolute value. It can be of any numeric data type - integer, floating-point, or decimal. ## Example usage Consider a table `transactions` with an `amount` column that contains both positive (deposits) and negative (withdrawals) values. We can use `abs()` to order the transactions by their magnitude. ```sql WITH transactions(id, amount) AS ( VALUES (1, 100.50), (2, -75.25), (3, 200.00), (4, -150.75) ) SELECT id, amount FROM transactions ORDER BY abs(amount) DESC; ``` This query retrieves the transaction IDs and amounts, ordering them by the absolute value of the amount, in descending order. ```text id | amount ----+--------- 3 | 200.00 4 | -150.75 1 | 100.50 2 | -75.25 (4 rows) ``` ## Other examples ### Using abs() for distance calculations The `abs()` function is also frequently used for distance calculations, where the direction is not relevant. Suppose we have a table of geographical coordinates and we want to find points within a certain range of a reference point. ```sql WITH locations(name, latitude, longitude) AS ( VALUES ('Point A', 40.7128, -74.0060), ('Point B', 40.7484, -73.9857), ('Point C', 41.6892, -74.0445), ('Reference', 40.7300, -73.9950) ) SELECT name, abs(latitude - 40.7300) AS lat_diff, abs(longitude - (-73.9950)) AS long_diff FROM locations WHERE abs(latitude - 40.7300) <= 0.05 AND abs(longitude - (-73.9950)) <= 0.05; ``` This query finds all points within 0.05 degrees (approximately 5.5 km) of the reference point (40.7300, -73.9950) in both latitude and longitude. ```text name | lat_diff | long_diff -----------+----------+----------- Point A | 0.0172 | 0.0110 Point B | 0.0184 | 0.0093 Reference | 0.0000 | 0.0000 (3 rows) ``` ### Combining abs() with other functions We can combine `abs()` with other functions for more complex calculations. For example, to measure the percentage discrepancy between forecasted and actual sales, we can use `abs()` to calculate the size of the difference and then divide it by the forecasted value.
```sql WITH sales_data(product, forecast, actual) AS ( VALUES ('Product A', 1000, 1100), ('Product B', 500, 450), ('Product C', 750, 725), ('Product D', 300, 400) ) SELECT product, forecast, actual, round(abs(actual - forecast) / forecast::numeric * 100, 2) AS percentage_difference FROM sales_data ORDER BY percentage_difference DESC; ``` This query orders the products by the percentage difference between the forecasted and actual sales. ```text product | forecast | actual | percentage_difference -----------+----------+--------+----------------------- Product D | 300 | 400 | 33.33 Product A | 1000 | 1100 | 10.00 Product B | 500 | 450 | 10.00 Product C | 750 | 725 | 3.33 (4 rows) ``` ## Additional considerations ### Performance implications The `abs()` function is fast, as it's a simple mathematical operation. However, if you frequently filter or join a large dataset based on absolute values, consider creating a functional index using `abs()` to speed up queries. ### Alternative functions and operators - The `@` operator: Postgres provides the `@` operator as an alternative to the `abs()` function. It performs the same operation (calculating the absolute value) and can be used interchangeably with `abs()`. For example, `@ -5` is equivalent to `abs(-5)`. ## Resources - [PostgreSQL documentation: Mathematical Functions and Operators](https://www.postgresql.org/docs/current/functions-math.html) - [PostgreSQL documentation: Numeric Types](https://www.postgresql.org/docs/current/datatype-numeric.html) --- # Source: https://neon.com/llms/functions-math-random.txt # Postgres random() function > The document explains the usage of the Postgres `random()` function within Neon, detailing its syntax and examples for generating random numbers in SQL queries. ## Source - [Postgres random() function HTML](https://neon.com/docs/functions/math-random): The original HTML version of this documentation The Postgres `random()` function generates random floating point values between 0.0 and 1.0. Starting with Postgres 17, it also supports generating random integers or decimals within a specified range using `random(min, max)` syntax. It's particularly useful for creating sample data, running simulations, or introducing randomness into queries for applications like statistical sampling and testing algorithms. ## Function signatures The `random()` function has the following signatures: ```sql random() -> double precision random(min integer, max integer) -> integer -- Added in Postgres 17 random(min bigint, max bigint) -> bigint -- Added in Postgres 17 random(min numeric, max numeric) -> numeric -- Added in Postgres 17 ``` The first form returns a uniformly distributed random value between 0.0 (inclusive) and 1.0 (exclusive). Starting from Postgres 17, the function also accepts range parameters: - For integer types, it returns a random integer between min and max (inclusive) - For numeric types, it returns a random decimal number between min and max (inclusive). The result will have the same number of decimal places as the input parameter with the highest precision.
## Example usage ```sql SELECT random(); -- Generates a random floating point number between 0.0 and 1.0 -- 0.555470146570157 SELECT random(1, 6); -- Generates a random integer between 1 and 6 -- 4 SELECT random(1.5, 3.54); -- Generates a random decimal number between 1.5 and 3.54 with 2 decimal places precision -- 2.66 ``` ### Basic random number generation Let's create a table of simulated sensor readings with random values: ```sql CREATE TABLE sensor_readings ( id SERIAL PRIMARY KEY, sensor_name TEXT, temperature NUMERIC(5,2), humidity NUMERIC(5,2), timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); INSERT INTO sensor_readings (sensor_name, temperature, humidity) SELECT 'Sensor-' || generate_series, 20 + (random() * 15)::NUMERIC(5,2), -- Temperature between 20°C and 35°C 40 + (random() * 40)::NUMERIC(5,2) -- Humidity between 40% and 80% FROM generate_series(1, 5); SELECT * FROM sensor_readings; ``` The `generate_series()` function is used to generate a series of integers from 1 to 5, which is then used to create the sensor names. Then, `random()` is used to generate random temperature and humidity values within specific ranges. ```text id | sensor_name | temperature | humidity | timestamp ----+-------------+-------------+----------+---------------------------- 1 | Sensor-1 | 26.16 | 76.85 | 2024-06-23 10:34:03.627556 2 | Sensor-2 | 31.49 | 44.88 | 2024-06-23 10:34:03.627556 3 | Sensor-3 | 30.62 | 49.94 | 2024-06-23 10:34:03.627556 4 | Sensor-4 | 23.32 | 79.20 | 2024-06-23 10:34:03.627556 5 | Sensor-5 | 34.33 | 50.39 | 2024-06-23 10:34:03.627556 (5 rows) ``` ### Random integer within a range Let's simulate a dice game where each player rolls two dice, and we calculate the total: ```sql CREATE TABLE dice_rolls ( roll_id SERIAL PRIMARY KEY, player_name TEXT, die1 INTEGER, die2 INTEGER, total INTEGER ); INSERT INTO dice_rolls (player_name, die1, die2, total) SELECT 'Player-' || generate_series, random(1, 6), -- Random integer between 1 and 6 random(1, 6), -- Random integer between 1 and 6 0 -- We'll update this next FROM generate_series(1, 5); UPDATE dice_rolls SET total = die1 + die2; SELECT * FROM dice_rolls; ``` This simulates 5 players each rolling two dice, with random values between 1 and 6 for each die. Notice how we can now use the simpler `random(1, 6)` syntax instead of the more complex `1 + floor(random() * 6)::INTEGER` typically used in earlier versions of Postgres. ```text roll_id | player_name | die1 | die2 | total ---------+-------------+------+------+------- 1 | Player-1 | 6 | 1 | 7 2 | Player-2 | 1 | 3 | 4 3 | Player-3 | 5 | 1 | 6 4 | Player-4 | 6 | 2 | 8 5 | Player-5 | 5 | 6 | 11 (5 rows) ``` ## Other examples ### Using random() for sampling Suppose we have a large table of customer data and want to select a random sample for a survey: ```sql CREATE TABLE customers ( id SERIAL PRIMARY KEY, name TEXT, email TEXT ); -- Populate the table with sample data INSERT INTO customers (name, email) SELECT 'Customer-' || generate_series, 'customer' || generate_series || '@example.com' FROM generate_series(1, 1000); -- Select a random 1% sample SELECT * FROM customers WHERE random() < 0.01; ``` This query selects approximately 1% of the customers randomly by filtering for rows where `random()` is less than 0.01. 
```text id | name | email -----+--------------+------------------------- 18 | Customer-18 | customer18@example.com 349 | Customer-349 | customer349@example.com 405 | Customer-405 | customer405@example.com 519 | Customer-519 | customer519@example.com 712 | Customer-712 | customer712@example.com 791 | Customer-791 | customer791@example.com 855 | Customer-855 | customer855@example.com 933 | Customer-933 | customer933@example.com 970 | Customer-970 | customer970@example.com (9 rows) ``` ### Combining random() with other functions You can use `random()` in combination with other functions to generate more complex random data. For example, let's create a table of random events with timestamps within the last 24 hours: ```sql CREATE TABLE random_events ( id SERIAL PRIMARY KEY, event_type TEXT, severity INTEGER, timestamp TIMESTAMP ); INSERT INTO random_events (event_type, severity, timestamp) SELECT (ARRAY['Error', 'Warning', 'Info'])[random(1, 3)], random(1, 5), NOW() - (random() * INTERVAL '24 hours') FROM generate_series(1, 100); SELECT * FROM random_events ORDER BY timestamp DESC LIMIT 4; ``` This creates 100 random events with different types, severities, and timestamps within the last 24 hours. ```text id | event_type | severity | timestamp ----+------------+----------+---------------------------- 10 | Error | 1 | 2024-12-04 09:44:39.651498 47 | Info | 1 | 2024-12-04 09:41:50.372958 88 | Info | 3 | 2024-12-04 09:40:21.689072 74 | Warning | 2 | 2024-12-04 09:05:22.546381 (4 rows) ``` ## Additional considerations ### Seed for reproducibility The Postgres `random()` function uses a seed that is initialized at the start of each database session. If you need reproducible random numbers across sessions, you can set the seed manually using the `setseed()` function: ```sql SELECT setseed(0.3); SELECT random(); ``` This will produce the same sequence of random numbers in any session where you set the same seed. The `setseed()` function takes a value between -1.0 and 1.0 as its argument. ### Performance implications The `random()` function is generally fast, but excessive use in large datasets or complex queries can impact performance. For high-performance requirements, consider generating random values in application code or using materialized views with pre-generated random data. ### Alternative functions - `gen_random_uuid()`: Generates a random UUID, useful when you need unique identifiers. ## Resources - [PostgreSQL documentation: Mathematical Functions and Operators](https://www.postgresql.org/docs/current/functions-math.html) - [PostgreSQL documentation: Random Functions](https://www.postgresql.org/docs/current/functions-math.html#FUNCTIONS-MATH-RANDOM-TABLE) --- # Source: https://neon.com/llms/functions-math-round.txt # Postgres round() function > The document explains the usage and syntax of the Postgres `round()` function in Neon, detailing how it rounds numeric values to a specified number of decimal places. ## Source - [Postgres round() function HTML](https://neon.com/docs/functions/math-round): The original HTML version of this documentation The Postgres `round()` function rounds numeric values to a specified number of decimal places or the nearest integer. It can help maintain consistency in numerical data, simplify complex decimal numbers, and adjust the precision of calculations to meet specific requirements. It's particularly useful in financial calculations, data analysis, and for presenting numerical data in a more readable format.
## Function signature The `round()` function has a simple form: ```sql round(number [, decimal_places]) -> number ``` - `number`: The input value to be rounded. It can be of any numeric data type — integer, floating-point, or decimal. Note that the two-argument form only accepts `numeric` inputs; cast a floating-point value (e.g., `value::numeric`) to round it to a given number of decimal places. - `decimal_places`: An optional integer that specifies the number of decimal places to round to. If omitted, the input number is rounded to the nearest integer. ## Example usage Let's consider a table `product_sales` that tracks sales data for various products. We'll use the `round()` function to adjust the precision of our sales figures. ```sql WITH product_sales(product_id, sales_amount) AS ( VALUES (1, 1234.5678), (2, 2345.6789), (3, 3456.7890), (4, 4567.8901) ) SELECT product_id, sales_amount, round(sales_amount) AS rounded_to_integer, round(sales_amount, 2) AS rounded_to_cents FROM product_sales; ``` This query demonstrates using the `round()` function to round sales amounts to the nearest integer and to two decimal places (cents). ```text product_id | sales_amount | rounded_to_integer | rounded_to_cents ------------+--------------+--------------------+------------------ 1 | 1234.5678 | 1235 | 1234.57 2 | 2345.6789 | 2346 | 2345.68 3 | 3456.7890 | 3457 | 3456.79 4 | 4567.8901 | 4568 | 4567.89 (4 rows) ``` ## Other examples ### Using round() to calculate accurate percentages The `round()` function is often used when calculating and displaying percentages. For example, consider a table with sales data for different products. Let's calculate the percentage of total sales contributed by each product. ```sql WITH product_sales(product_id, sales_amount) AS ( VALUES (1, 1234.56), (2, 2345.67), (3, 3456.78), (4, 4567.89) ) SELECT product_id, sales_amount, round( (sales_amount / SUM(sales_amount) OVER ()) * 100, 2 ) AS percentage_of_total FROM product_sales ORDER BY percentage_of_total DESC; ``` This query calculates each product's contribution to total sales and rounds the percentage to two decimal places. This avoids displaying overly precise percentages that can be misleading. ```text product_id | sales_amount | percentage_of_total ------------+--------------+--------------------- 4 | 4567.89 | 39.36 3 | 3456.78 | 29.79 2 | 2345.67 | 20.21 1 | 1234.56 | 10.64 (4 rows) ``` ### Combining round() with other functions We can combine `round()` with other functions for more complex calculations. For example, let's calculate the average order value and round it to the nearest dollar and the nearest cent: ```sql WITH orders(order_id, total_amount) AS ( VALUES (1, 123.45), (2, 234.56), (3, 345.67), (4, 456.78), (5, 567.89) ) SELECT round(AVG(total_amount)) AS avg_order_value_rounded, round(AVG(total_amount), 2) AS avg_order_value_cents FROM orders; ``` ```text avg_order_value_rounded | avg_order_value_cents -------------------------+----------------------- 346 | 345.67 ``` ## Additional considerations ### Rounding behavior For `numeric` inputs, the Postgres `round()` function breaks ties by rounding away from zero. This means that when the input is exactly halfway between two numbers, it rounds to the neighbor with the larger absolute value. (For `double precision` inputs, ties are typically rounded to the nearest even value instead.) For example: ```sql SELECT round(2.65, 1), round(2.75, 1); ``` This query rounds both 2.65 and 2.75 away from zero to one decimal place: ```text round | round -------+------- 2.7 | 2.8 (1 row) ``` Financial calculations often require banker's rounding (also known as round-to-even) to minimize bias. If you need this behavior, you can implement it using a custom function or by combining `round()` with other functions, as sketched below.
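Here is a minimal sketch of such a function for `numeric` values. The name `round_half_even` is our own, not a Postgres built-in:

```sql
-- A minimal sketch of round-half-to-even ("banker's") rounding for NUMERIC.
CREATE OR REPLACE FUNCTION round_half_even(val NUMERIC, places INT DEFAULT 0)
RETURNS NUMERIC AS $$
DECLARE
    scale NUMERIC := 10::NUMERIC ^ places;
    scaled NUMERIC := val * scale;
    floor_val NUMERIC := floor(scaled);
BEGIN
    IF scaled - floor_val = 0.5 THEN
        -- Exactly halfway: pick the even neighbor
        IF mod(floor_val, 2) = 0 THEN
            RETURN floor_val / scale;
        ELSE
            RETURN (floor_val + 1) / scale;
        END IF;
    END IF;
    -- Not a tie: ordinary rounding gives the same result
    RETURN round(val, places);
END;
$$ LANGUAGE plpgsql IMMUTABLE;

SELECT round_half_even(2.65, 1) AS a, round_half_even(2.75, 1) AS b;
-- a = 2.6, b = 2.8 (both ties resolve to the even digit)
```

For exact ties it picks the even neighbor; for every other input it simply defers to the built-in `round()`.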
### Performance implications The `round()` function is generally fast, but frequent use in large datasets might impact performance. If you need to round values frequently in queries, consider storing pre-rounded values in a separate column, or creating a functional index on the rounding expression. ### Alternative functions - `ceil()` and `floor()`: These functions round up or down to the nearest integer, respectively. - `trunc()`: This function truncates a number to a specified number of decimal places without rounding. ## Resources - [PostgreSQL documentation: Mathematical Functions and Operators](https://www.postgresql.org/docs/current/functions-math.html) - [PostgreSQL documentation: Numeric Types](https://www.postgresql.org/docs/current/datatype-numeric.html) --- # Source: https://neon.com/llms/functions-max.txt # Postgres max() function > The document explains the usage of the Postgres `max()` function within Neon, detailing its syntax and application for retrieving the maximum value from a set of values in a database query. ## Source - [Postgres max() function HTML](https://neon.com/docs/functions/max): The original HTML version of this documentation You can use the Postgres `max()` function to find the maximum value in a set of values. It's particularly useful for data analysis, reporting, and finding extreme values within datasets. You might use `max()` to find the product with the highest price in the catalog, the most recent timestamp in a log table, or the largest transaction amount in a financial system. ## Function signature The `max()` function has this simple form: ```sql max(expression) -> same type as expression ``` - `expression`: Any valid expression that can be evaluated across a set of rows. This can be a column name or a function that returns a value. ## Example usage Consider an `orders` table that tracks orders placed by customers of an online store. It has columns `order_id`, `customer_id`, `product_id`, `order_amount`, and `order_date`. We will use this table for examples throughout this guide.
```sql CREATE TABLE orders ( order_id SERIAL PRIMARY KEY, customer_id INTEGER NOT NULL, product_id INTEGER, order_amount DECIMAL(10, 2) NOT NULL, order_date TIMESTAMP NOT NULL ); INSERT INTO orders (customer_id, product_id, order_amount, order_date) VALUES (1, 101, 150.00, '2023-01-15 10:30:00'), (2, 102, 75.50, '2023-01-16 11:45:00'), (1, 103, 200.00, '2023-02-01 09:15:00'), (3, 104, 50.25, '2023-02-10 14:20:00'), (2, 105, 125.75, '2023-03-05 16:30:00'), (4, NULL, 90.00, '2023-03-10 13:00:00'), (1, 106, 180.50, '2023-04-02 11:10:00'), (3, 107, 60.25, '2023-04-15 10:45:00'), (5, 108, 110.00, '2023-05-01 15:20:00'), (2, 109, 95.75, '2023-05-20 12:30:00'); ``` We can use `max()` to find the largest order amount: ```sql SELECT max(order_amount) AS largest_order FROM orders; ``` This query returns the following output: ```text largest_order --------------- 200.00 (1 row) ``` To find the most recent order date, we compute the maximum value of `order_date`: ```sql SELECT max(order_date) AS latest_order_date FROM orders; ``` This query returns the following output: ```text latest_order_date --------------------- 2023-05-20 12:30:00 (1 row) ``` ## Advanced examples ### Using max() with GROUP BY You can use `max()` with `GROUP BY` to find the maximum values in each group: ```sql SELECT customer_id, max(order_amount) AS largest_order FROM orders GROUP BY customer_id ORDER BY largest_order DESC LIMIT 5; ``` This query finds the largest order amount for each customer and returns the top 5 customers, sorted in order of the largest order amount. ```text customer_id | largest_order -------------+--------------- 1 | 200.00 2 | 125.75 5 | 110.00 4 | 90.00 3 | 60.25 (5 rows) ``` ### Using max() with a FILTER clause The `FILTER` clause allows you to selectively include rows in the `max()` calculation: ```sql SELECT max(order_amount) AS max_overall, max(order_amount) FILTER (WHERE EXTRACT(MONTH FROM order_date) = 4) AS max_in_april FROM orders; ``` This query calculates both the overall maximum order amount and the maximum order amount for orders placed in April. ```text max_overall | max_in_april -------------+-------------- 200.00 | 180.50 (1 row) ``` ### Finding the row with the maximum value for a column To retrieve the entire row containing the maximum value, you can use a subquery: ```sql SELECT * FROM orders WHERE order_amount = (SELECT max(order_amount) FROM orders); ``` This query returns the full details of the order with the maximum `order_amount`. ```text order_id | customer_id | product_id | order_amount | order_date ----------+-------------+------------+--------------+--------------------- 3 | 1 | 103 | 200.00 | 2023-02-01 09:15:00 (1 row) ``` ### Using max() with window functions `max()` can be used as a window function to calculate the running maximum over a set of rows: ```sql SELECT order_id, order_date, max(order_amount) OVER ( ORDER BY order_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS running_max_amount FROM orders ORDER BY order_date; ``` This query calculates the running maximum order amount over time, showing how the largest order amount changes as new orders come in.
```text order_id | order_date | running_max_amount ----------+---------------------+-------------------- 1 | 2023-01-15 10:30:00 | 150.00 2 | 2023-01-16 11:45:00 | 150.00 3 | 2023-02-01 09:15:00 | 200.00 4 | 2023-02-10 14:20:00 | 200.00 5 | 2023-03-05 16:30:00 | 200.00 6 | 2023-03-10 13:00:00 | 200.00 7 | 2023-04-02 11:10:00 | 200.00 8 | 2023-04-15 10:45:00 | 200.00 9 | 2023-05-01 15:20:00 | 200.00 10 | 2023-05-20 12:30:00 | 200.00 (10 rows) ``` ## Additional considerations ### NULL values `max()` ignores NULL values in its calculations. If all values in the set are NULL, `max()` returns NULL. ### Performance implications When used with an index on the column being evaluated, `max()` is typically very efficient. The database can often use an index scan to quickly find the maximum value without needing to examine every row in the table. For large datasets, ensure that the column used in the `max()` function is properly indexed to maintain good performance. ### Alternative functions - `min()`: Returns the minimum value in a set of values. - `greatest()`: Returns the largest value from a list of values/expressions within a single row. ## Resources - [PostgreSQL documentation: Aggregate Functions](https://www.postgresql.org/docs/current/functions-aggregate.html) --- # Source: https://neon.com/llms/functions-now.txt # Postgres now() function > The document details the usage of the Postgres `now()` function, explaining how it retrieves the current date and time in the context of Neon databases. ## Source - [Postgres now() function HTML](https://neon.com/docs/functions/now): The original HTML version of this documentation The Postgres `now()` function returns the current date and time with time zone. It's equivalent to the SQL-standard `current_timestamp`, which is invoked without parentheses. This function is commonly used for timestamping database entries, calculating time differences, or implementing time-based logic in applications. For instance, you might use it to record when a user creates an account, when an order is placed, or to calculate intervals, such as how long ago an event occurred. ## Function signature The `now()` function has a single form: ```sql now() -> timestamp with time zone ``` This form returns the current timestamp with time zone, as of the start of the current transaction. ## Example usage Let's consider a `user_accounts` table that tracks user registration information. We can use `now()` to record the exact time a user creates their account. ```sql CREATE TABLE user_accounts ( user_id SERIAL PRIMARY KEY, username VARCHAR(50) UNIQUE NOT NULL, email VARCHAR(100) UNIQUE NOT NULL, created_at TIMESTAMP WITH TIME ZONE DEFAULT now() ); INSERT INTO user_accounts (username, email) VALUES ('john_doe', 'john@example.com'); ``` This query creates a table to store user account information, with the `created_at` column automatically set to the current timestamp when a new record is inserted.
Let's insert another record and retrieve all user accounts: ```sql INSERT INTO user_accounts (username, email) VALUES ('jane_smith', 'jane@example.com'); SELECT * FROM user_accounts; ``` This query returns the following output: ```text user_id | username | email | created_at ---------+------------+------------------+------------------------------- 1 | john_doe | john@example.com | 2024-06-25 08:40:25.603165+00 2 | jane_smith | jane@example.com | 2024-06-25 08:40:38.220631+00 (2 rows) ``` ## Advanced examples ### Use `now()` to calculate time differences We can use `now()` in combination with stored timestamps to calculate time differences. For example, let's create a table to track project deadlines and calculate how much time is left: ```sql CREATE TABLE projects ( project_id SERIAL PRIMARY KEY, project_name VARCHAR(100) NOT NULL, start_date TIMESTAMP WITH TIME ZONE DEFAULT now(), deadline TIMESTAMP WITH TIME ZONE NOT NULL ); INSERT INTO projects (project_name, deadline) VALUES ('Website Redesign', now() + INTERVAL '30 days'), ('Mobile App Development', now() + INTERVAL '60 days'), ('Database Migration', now() + INTERVAL '15 days'); SELECT project_name, deadline - now() AS time_remaining FROM projects ORDER BY time_remaining; ``` This query calculates and displays the remaining time for each project, ordered from the most to the least urgent. ```text project_name | time_remaining ------------------------+------------------------ Database Migration | 14 days 23:59:59.93332 Website Redesign | 29 days 23:59:59.93332 Mobile App Development | 59 days 23:59:59.93332 (3 rows) ``` ### Use `now()` with triggers We can use `now()` in combination with an update trigger to automatically maintain modification timestamps for records. Here's an example using a table for tracking customer orders. It has columns for both the creation and last update timestamps, with a trigger that updates the `last_updated` column whenever an order is modified: ```sql CREATE TABLE customer_orders ( order_id SERIAL PRIMARY KEY, customer_id INTEGER NOT NULL, order_status VARCHAR(20) NOT NULL, created_at TIMESTAMP WITH TIME ZONE DEFAULT now(), last_updated TIMESTAMP WITH TIME ZONE DEFAULT now() ); CREATE OR REPLACE FUNCTION update_last_updated_column() RETURNS TRIGGER AS $$ BEGIN NEW.last_updated = now(); RETURN NEW; END; $$ LANGUAGE plpgsql; CREATE TRIGGER update_customer_order_timestamp BEFORE UPDATE ON customer_orders FOR EACH ROW EXECUTE FUNCTION update_last_updated_column(); INSERT INTO customer_orders (customer_id, order_status) VALUES (1001, 'Pending'), (1002, 'Processing'); ``` Now, let's update an order and observe the changes: ```sql -- Simulate some delay before update SELECT pg_sleep(2); UPDATE customer_orders SET order_status = 'Shipped' WHERE order_id = 1; SELECT * FROM customer_orders; ``` This query returns the following output, showing the updated status and the new `last_updated` timestamp for the modified order. ```text order_id | customer_id | order_status | created_at | last_updated ----------+-------------+--------------+------------------------------+------------------------------- 2 | 1002 | Processing | 2024-06-25 09:26:43.57742+00 | 2024-06-25 09:26:43.57742+00 1 | 1001 | Shipped | 2024-06-25 09:26:43.57742+00 | 2024-06-25 09:26:50.962194+00 (2 rows) ``` ### Use `now()` in a function for date/time calculations We can wrap `now()` in a user-defined function to perform more complex date/time calculations. For example, here's a function that calculates the current age of a user.
```sql CREATE OR REPLACE FUNCTION calculate_age(birth_date DATE) RETURNS INTEGER AS $$ BEGIN RETURN DATE_PART('year', AGE(now(), birth_date)); END; $$ LANGUAGE plpgsql; SELECT calculate_age('1990-05-15') AS age_1, calculate_age('2000-12-31') AS age_2, calculate_age('1985-03-20') AS age_3; ``` This query calculates the age of three users based on their date of birth: ```text age_1 | age_2 | age_3 -------+-------+------- 34 | 23 | 39 (1 row) ``` ## Additional considerations ### Time zone awareness Like `current_timestamp`, `now()` returns a value in the timezone of the current session. This defaults to the server's timezone unless explicitly set in the session. It's important to keep this in mind when working with timestamps across different timezones. ### Difference between `now()` and the keyword `now` The `now()` function is a built-in function that returns the current timestamp with time zone. In contrast, the keyword `now` (without parentheses) is a special date/time input value that is converted to a timestamp as soon as it is parsed. It is recommended to use `now()` for clarity and consistency. For example, if the default value for a column is set to `now`, it will be evaluated once when the table is created and reused for all subsequent records, whereas `now()` is evaluated each time a new row is inserted, which is typically the desired behavior. ### Alternative functions - `current_timestamp` - Functionally identical to `now()`. - `transaction_timestamp()` - Returns the current timestamp at the start of the current transaction, also equivalent to `now()`. - `statement_timestamp()` - Returns the current timestamp at the start of the current statement. - `clock_timestamp()` - Returns the actual current timestamp with time zone, which can change even during a single SQL statement. ## Resources - [PostgreSQL documentation: Date/Time Functions and Operators](https://www.postgresql.org/docs/current/functions-datetime.html) - [PostgreSQL documentation: Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html) --- # Source: https://neon.com/llms/functions-regexp_match.txt # Postgres regexp_match() function > The document explains the usage of the Postgres `regexp_match()` function in Neon, detailing its syntax and examples for extracting substrings that match a specified regular expression pattern within a string. ## Source - [Postgres regexp_match() function HTML](https://neon.com/docs/functions/regexp_match): The original HTML version of this documentation The Postgres `regexp_match()` function is used to extract substrings that match a regular expression pattern from a given string. It returns an array of matching substrings, including capture groups if specified in the pattern. This function is particularly useful for complex string parsing tasks, such as extracting structured information from semi-structured text data. For example, it can be used to parse log files, extract specific components from URLs, or analyze text data for specific patterns. ## Function signature The `regexp_match()` function has the following form: ```sql regexp_match(string text, pattern text [, flags text]) -> text[] ``` - `string`: The input string to search for matches. - `pattern`: A POSIX regular expression pattern to match against the string. - `flags` (optional): A string of one or more single-letter flags that modify how the regular expression is interpreted.
The function returns an array of text values, where each element corresponds to a substring within the first match of the pattern in the input string. If there are no matches, the function returns NULL. If there are no capture groups in the pattern, the array contains a single element with the full match. ## Example usage Consider a table `log_entries` with a `log_text` column containing log messages. We can use `regexp_match()` to extract specific information from these logs. ```sql WITH log_entries AS ( SELECT '[2024-03-04 10:15:30] INFO: User john_doe logged in from 192.168.1.100' AS log_text UNION ALL SELECT '[2024-03-04 10:20:45] ERROR: Failed login attempt for user jane_smith from 10.0.0.50' AS log_text UNION ALL SELECT '[2024-03-04 10:25:55] INFO: User admin logged out' AS log_text ) SELECT regexp_match(log_text, '\[(.*?)\] (\w+): (.*)$') AS parsed_log FROM log_entries; ``` This query extracts the timestamp, log level, and message from each log entry. The regular expression pattern `\[(.*?)\] (\w+): (.*)$` captures three groups: 1. The timestamp between square brackets 2. The log level (INFO, ERROR, etc.), which is alphabetical and terminated with a colon 3. The rest of the message ```text parsed_log ----------------------------------------------------------------------------------------- {"2024-03-04 10:15:30",INFO,"User john_doe logged in from 192.168.1.100"} {"2024-03-04 10:20:45",ERROR,"Failed login attempt for user jane_smith from 10.0.0.50"} {"2024-03-04 10:25:55",INFO,"User admin logged out"} (3 rows) ``` ## Advanced examples ### Use `regexp_match()` with regex flags The `regexp_match()` function accepts optional flags to modify how the regular expression is interpreted. Here's an example using the 'i' flag for case-insensitive matching: ```sql WITH user_agents AS ( SELECT 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36' AS user_agent UNION ALL SELECT 'Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Mobile/15E148 Safari/604.1' AS user_agent UNION ALL SELECT 'CHROME/91.0.4472.124' AS user_agent ) SELECT regexp_match(user_agent, '(chrome|safari|firefox|msie|opera)\/[\d\.]+', 'i') AS browser FROM user_agents; ``` This query extracts the browser name and version from user agent strings, using case-insensitive matching. ```text browser ---------- {Chrome} {Safari} {CHROME} (3 rows) ``` ### Use `regexp_match()` in a WHERE clause You can use `regexp_match()` in a WHERE clause to filter rows based on a regex pattern: ```sql WITH emails AS ( SELECT 'john.doe@example.com' AS email UNION ALL SELECT 'jane.smith@company.co.uk' AS email UNION ALL SELECT 'support@mydomain.io' AS email ) SELECT * FROM emails WHERE regexp_match(email, '^[^@]+@[^@]+\.(com|org|io)$') IS NOT NULL; ``` This query selects all rows from the `emails` table where the email address ends with `.com`, `.org`, or `.io`. ```text email ---------------------- john.doe@example.com support@mydomain.io (2 rows) ``` ## Additional considerations ### Performance implications Using `regexp_match()` can be computationally expensive, especially on large datasets or with complex patterns. For better performance: 1. Use simpler patterns when possible. 2. Consider using `LIKE` or `SIMILAR TO` for simple pattern matching. 3. If you frequently filter based on regex patterns, consider creating a functional index using the `regexp_match()` expression (see the sketch below).
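As a sketch of the third point, assuming the log entries live in a persistent `log_entries` table (the earlier example used an inline CTE), an expression index can serve queries that filter on the extracted log level:

```sql
-- Hypothetical table; regexp_match() is immutable, so it can be indexed.
-- Index the first capture group (the log level) of the pattern.
CREATE INDEX idx_log_entries_level
    ON log_entries (((regexp_match(log_text, '\] (\w+):'))[1]));

-- Queries repeating the exact same expression can use an index scan:
SELECT count(*)
FROM log_entries
WHERE (regexp_match(log_text, '\] (\w+):'))[1] = 'ERROR';
```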
### NULL handling `regexp_match()` returns NULL if there's no match or if the input string is NULL. This behavior can be useful in `WHERE` clauses but may require careful handling in `SELECT` lists. ### Alternative functions - `regexp_matches()`: Returns a set of all matches, useful for extracting multiple occurrences of the pattern in the input string. - `regexp_replace()`: Replaces substrings matching a regex pattern within a specified string. - `regexp_split_to_array()`: Splits a string using a regex pattern as the delimiter and returns the result as an array. - `substring()`: Extracts substrings based on a regex pattern similar to `regexp_match()`, but only returns the first captured group of the match. ## Resources - [PostgreSQL documentation: Pattern Matching](https://www.postgresql.org/docs/current/functions-matching.html) - [PostgreSQL documentation: Regular Expression Details](https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP) - [Regular Expression Tester](https://regex101.com/): A useful tool for testing and debugging regular expressions --- # Source: https://neon.com/llms/functions-regexp_replace.txt # Postgres regexp_replace() function > The document details the usage of the Postgres `regexp_replace()` function within Neon, explaining its syntax and parameters for replacing substrings that match a regular expression pattern in a given string. ## Source - [Postgres regexp_replace() function HTML](https://neon.com/docs/functions/regexp_replace): The original HTML version of this documentation The Postgres `regexp_replace()` function replaces substrings that match a regular expression pattern with the specified replacement string. This function is particularly useful for complex string manipulations and data cleaning/formatting tasks. Consider scenarios where you'd want to remove or replace specific patterns in text or transform data to meet certain requirements. For instance, you might use it to format phone numbers consistently, remove HTML tags from text, or anonymize sensitive information in logs. ## Function signature The `regexp_replace()` function has the following syntax: ```sql regexp_replace(source text, pattern text, replacement text [, flags text]) -> text ``` - `source`: The input string to perform replacements on. - `pattern`: The regular expression pattern to match. - `replacement`: The string to replace matched substrings with. - `flags` (optional): A string of one or more single-letter flags that modify how the regex is interpreted. By default, it returns the input string with the first occurrence of the pattern replaced by the replacement string; use the `g` flag to replace all occurrences. More recent versions of Postgres (starting with Postgres 16) also support additional parameters to further control the replacement operation: ```sql regexp_replace(source text, pattern text, replacement text [, start int [, N int]] [, flags text]) -> text ``` - `start`: The position in the source string to start searching for matches (default is 1). - `N`: If specified, only the `N`th occurrence of the pattern is replaced. If `N` is 0, or the `g` flag is used, all occurrences are replaced. ## Example usage Consider a `customer_data` table with a `phone_number` column containing phone numbers in different formats. We can use `regexp_replace()` to standardize these numbers to a consistent format.
```sql
WITH customer_data AS (
  SELECT '(555) 123-4567' AS phone_number
  UNION ALL
  SELECT '555.987.6543' AS phone_number
  UNION ALL
  SELECT '555-321-7890' AS phone_number
)
SELECT
  phone_number AS original_number,
  regexp_replace(phone_number, '[^\d]', '', 'g') AS cleaned_number
FROM customer_data;
```

This query removes all non-digit characters from the phone numbers, standardizing them to a simple string of digits.

```text
 original_number | cleaned_number
-----------------+----------------
 (555) 123-4567  | 5551234567
 555.987.6543    | 5559876543
 555-321-7890    | 5553217890
(3 rows)
```

## Advanced examples

### Use `regexp_replace()` with backreferences

You can use backreferences in the replacement string to include parts of the matched pattern in the replacement.

```sql
WITH log_data AS (
  SELECT '2023-05-15 10:30:00 - User john.doe@example.com logged in' AS log_entry
  UNION ALL
  SELECT '2023-05-15 11:45:30 - User jane.smith@example.org logged out' AS log_entry
)
SELECT
  log_entry AS original_log,
  regexp_replace(log_entry, '(.*) - User (\S+@\S+) (.*)$', '\1 - User [REDACTED] \3') AS anonymized_log
FROM log_data;
```

This query anonymizes email addresses in log entries by replacing them with [REDACTED] while preserving the rest of the log structure. The `\S+@\S+` pattern keeps the match from extending past the email address itself.

```text
                         original_log                          |                 anonymized_log
---------------------------------------------------------------+--------------------------------------------------
 2023-05-15 10:30:00 - User john.doe@example.com logged in     | 2023-05-15 10:30:00 - User [REDACTED] logged in
 2023-05-15 11:45:30 - User jane.smith@example.org logged out  | 2023-05-15 11:45:30 - User [REDACTED] logged out
(2 rows)
```

### Modify the behavior of `regexp_replace()` using flags

The `flags` parameter allows you to modify how the function operates. Common flags include:

- `g`: Global replacement (replace all occurrences)
- `i`: Case-insensitive matching
- `n`: Newline-sensitive matching

```sql
WITH product_descriptions AS (
  SELECT 'Red Apple: sweet and crisp' AS description
  UNION ALL
  SELECT 'Green Apple: tart and juicy apple' AS description
  UNION ALL
  SELECT 'Yellow Apple: mild and sweet' AS description
)
SELECT
  description AS original_description,
  regexp_replace(description, 'apple', 'pear', 'gi') AS modified_description
FROM product_descriptions;
```

This query replaces all occurrences of "apple" (case-insensitive) with "pear" in the product descriptions.

```text
       original_description        |      modified_description
-----------------------------------+---------------------------------
 Red Apple: sweet and crisp        | Red pear: sweet and crisp
 Green Apple: tart and juicy apple | Green pear: tart and juicy pear
 Yellow Apple: mild and sweet      | Yellow pear: mild and sweet
(3 rows)
```

### Use `regexp_replace()` for complex pattern matching and replacement

`regexp_replace()` can handle complex patterns for sophisticated text processing tasks. For example, the query below removes all HTML tags from the given markup, producing plain text.

```sql
WITH html_content AS (
  SELECT '<p>This is <b>bold</b> and <i>italic</i> text.</p>' AS content
  UNION ALL
  SELECT '<div>Another example here.</div>' AS content
)
SELECT
  content AS original_html,
  regexp_replace(content, '<[^>]+>', '', 'g') AS plain_text
FROM html_content;
```

This query produces the following output:

```text
                   original_html                    |          plain_text
----------------------------------------------------+-------------------------------
 <p>This is <b>bold</b> and <i>italic</i> text.</p> | This is bold and italic text.
 <div>Another example here.</div>                   | Another example here.
(2 rows)
```
## Additional considerations

### Performance implications

While `regexp_replace()` is powerful, complex regular expressions or operations on large text fields can be computationally expensive. For frequently used operations, consider preprocessing the data or using simpler string functions if possible.

### Alternative functions

- `replace()`: A simpler function for straightforward string replacements without regular expressions.
- `translate()`: Useful for character-by-character replacements.
- `regexp_matches()`: Returns an array of all substrings matching a regular expression pattern, which can be useful in conjunction with other functions for complex transformations.

## Resources

- [PostgreSQL documentation: String functions](https://www.postgresql.org/docs/current/functions-string.html)
- [PostgreSQL documentation: Pattern matching](https://www.postgresql.org/docs/current/functions-matching.html)
- [PostgreSQL documentation: Regular expressions](https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP)

---

# Source: https://neon.com/llms/functions-substring.txt

# Postgres substring() function

> The document details the usage of the Postgres `substring()` function within Neon, explaining its syntax and parameters for extracting substrings from text data.

## Source

- [Postgres substring() function HTML](https://neon.com/docs/functions/substring): The original HTML version of this documentation

The `substring()` function in Postgres is used to extract a portion of a string based on a specified start position and length, or on a regular expression pattern. It's useful for data cleaning and transformation where you might need to extract relevant parts of a string. For example, you might extract the zip code when working with semi-structured data like addresses, or extract event timestamps when working with machine-generated data like logs.

## Function signature

The `substring()` function has two forms:

```sql
substring(string text [from int] [for int]) -> text
```

- `string`: The input string to extract the substring from.
- `from` (optional): The starting position for the substring (1-based index). If omitted, it defaults to 1.
- `for` (optional): The length of the substring to extract. If omitted, the substring extends to the end of the string.

```sql
substring(string text from pattern text) -> text
```

- `string`: The input string to extract the substring from.
- `pattern`: A POSIX regular expression pattern. The substring matching this pattern is returned.

## Example usage

Consider a table `users` with a `user_id` column that contains IDs in the format "user_123". We can use `substring()` to extract just the numeric part of the ID.

```sql
WITH users AS (
  SELECT 'user_123' AS user_id
  UNION ALL
  SELECT 'user_482892' AS user_id
)
SELECT substring(user_id from 6) AS numeric_id
FROM users;
```

This query extracts the substring starting from the 6th character of `user_id` (1-based index) and returns it as `numeric_id`.

```text
 numeric_id
------------
 123
 482892
(2 rows)
```

You can also use a regular expression pattern to find and extract a substring.
```sql
WITH addresses AS (
  SELECT '123 Main St, Anytown, CA 12345, (555) 123-4567' AS address
  UNION ALL
  SELECT '456 Oak Ave, Somewhere, NY 54321, (555) 987-6543' AS address
)
SELECT substring(address from '\d{5}') AS zip_code
FROM addresses;
```

This query extracts the 5-digit zip code from the `address` column using the regular expression pattern `\d{5}`, which matches exactly 5 consecutive digits.

```text
 zip_code
----------
 12345
 54321
(2 rows)
```

## Advanced examples

### Extract a substring of a specific length

You can specify both the starting position and the length of the substring to extract.

```sql
WITH logs AS (
  SELECT '2023-05-15T10:30:00.000Z - User 123 logged in' AS log_entry
  UNION ALL
  SELECT '2023-05-15T11:45:30.000Z - User 456 logged out' AS log_entry
)
SELECT substring(log_entry from 1 for 23) AS timestamp
FROM logs;
```

This query extracts the timestamp portion from the `log_entry` column. It assumes that the timestamp always appears at the beginning of the log entry and has a fixed length of 23 characters.

```text
        timestamp
-------------------------
 2023-05-15T10:30:00.000
 2023-05-15T11:45:30.000
(2 rows)
```

### Extract a substring matching a regex pattern with capture groups

The `substring()` function extracts the first part of the string that matches the regular expression pattern. However, if the pattern contains capture groups (specified using parentheses), it returns the substring matched by the first parenthesized subexpression.

```sql
WITH orders AS (
  SELECT 'Order #1234 - $150.00' AS order_info
  UNION ALL
  SELECT 'Order #5678 - $75.50' AS order_info
  UNION ALL
  SELECT 'Order #9012 - $200.00' AS order_info
)
SELECT
  substring(order_info from 'Order #(\d+)') AS order_number,
  substring(order_info from '\$(\d+\.\d+)') AS order_amount
FROM orders;
```

This query extracts the order number and order amount from the `order_info` column using regular expressions with capture groups.

- The pattern `Order #(\d+)` matches the string "Order #" followed by one or more digits. The parentheses around `\d+` create a capture group that extracts just the order number.
- The pattern `\$(\d+\.\d+)` matches a dollar sign followed by a decimal number. The parentheses around `\d+\.\d+` create a capture group that extracts just the order amount.

```text
 order_number | order_amount
--------------+--------------
 1234         | 150.00
 5678         | 75.50
 9012         | 200.00
(3 rows)
```

### Use `substring()` in a `WHERE` clause

You can use `substring()` in a `WHERE` clause to filter rows based on a substring condition.

```sql
WITH users AS (
  SELECT 'john.doe@example.com' AS email
  UNION ALL
  SELECT 'jane.smith@example.org' AS email
  UNION ALL
  SELECT 'admin@gmail.com' AS email
)
SELECT *
FROM users
WHERE substring(email from '.*@(.*)\.') = 'example';
```

This query selects all rows from the `users` table where the email address has the domain name `example`. The regular expression pattern `.*@(.*)\.` extracts the domain part of the email address.

```text
         email
------------------------
 john.doe@example.com
 jane.smith@example.org
(2 rows)
```

## Additional considerations

### Performance implications

When working with large datasets, using `substring()` in a `WHERE` clause may impact query performance since it requires scanning the entire string column to extract substrings and compare them. If you frequently filter based on substrings, consider creating a _functional index_ on the relevant column using the substring expression to improve query performance.
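For example, a functional index matching the domain-extraction filter above might look like the following sketch. It assumes a persistent `users` table with a text `email` column, since the examples on this page use ephemeral CTEs:

```sql
-- Hypothetical persistent table; the examples above use CTEs instead
CREATE TABLE users (email text);

-- Expression index on the extracted domain, so a filter like
--   WHERE substring(email from '.*@(.*)\.') = 'example'
-- can use an index scan rather than evaluating the regex for every row
CREATE INDEX users_email_domain_idx ON users ((substring(email from '.*@(.*)\.')));
```

Note that Postgres considers this index only when the query's filter expression matches the indexed expression exactly.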
### Alternative functions

- `left` - Extracts the specified number of characters from the start of a string.
- `right` - Extracts the specified number of characters from the end of a string.
- `split_part` - Splits a string on the specified delimiter and returns the nth substring.
- `regexp_match` - Extracts the first substring matching a regular expression pattern. Unlike `substring()`, it returns an array of all the captured substrings when the regex pattern contains parenthesized groups.

## Resources

- [PostgreSQL documentation: String functions](https://www.postgresql.org/docs/current/functions-string.html)
- [PostgreSQL documentation: Pattern matching](https://www.postgresql.org/docs/current/functions-matching.html)

---

# Source: https://neon.com/llms/functions-sum.txt

# Postgres sum() function

> The document details the usage of the Postgres `sum()` function within Neon, explaining its syntax and application for aggregating numerical data in a database.

## Source

- [Postgres sum() function HTML](https://neon.com/docs/functions/sum): The original HTML version of this documentation

The Postgres `sum()` function calculates the total of a set of numeric values. It's used in data analysis and reporting to compute totals across rows in a table or grouped data.

This function is particularly useful in financial applications for calculating total revenue or expenses, in inventory management for summing up quantities, or in analytics for aggregating metrics across various dimensions.

## Function signature

The `sum()` function has this simple form:

```sql
sum([DISTINCT] expression) -> numeric type
```

- `expression`: Any numeric expression or column name.
- `DISTINCT`: Optional keyword that causes `sum()` to consider only unique values in the calculation.

The output of the `sum()` function has the same data type as the input if it's a floating-point (real / double-precision) type. To avoid overflow, the output for smallint/integer inputs is bigint, and for bigint/numeric inputs it is numeric.

## Example usage

Consider a `sales` table that tracks product sales, with columns `product_id`, `quantity`, and `price`. We can use `sum()` to calculate the total revenue across all sales.

```sql
WITH sales(product_id, quantity, price) AS (
  VALUES
    (1, 10, 100.0),
    (2, 5, 50.0),
    (1, 5, 100.0),
    (3, 3, 75.0),
    (2, 2, 50.0)
)
SELECT sum(quantity * price) AS total_revenue
FROM sales;
```

This query calculates the total revenue by multiplying the quantity and price for each sale.

```text
 total_revenue
---------------
        2075.0
(1 row)
```

## Advanced examples

### Sum with grouping

You can use `sum()` with `GROUP BY` to calculate subtotals for different categories:

```sql
WITH employee_sales AS (
  SELECT 'Alice' AS employee, 'Electronics' AS department, 5000 AS sales
  UNION ALL
  SELECT 'Bob' AS employee, 'Electronics' AS department, 6000 AS sales
  UNION ALL
  SELECT 'Charlie' AS employee, 'Clothing' AS department, 4500 AS sales
  UNION ALL
  SELECT 'David' AS employee, 'Clothing' AS department, 5500 AS sales
)
SELECT department, sum(sales) AS total_sales
FROM employee_sales
GROUP BY department;
```

This query calculates the total sales for each department.
```text
 department  | total_sales
-------------+-------------
 Clothing    | 10000
 Electronics | 11000
(2 rows)
```

### Sum with FILTER clause

You can use the `FILTER` clause to conditionally include values in the sum:

```sql
WITH orders AS (
  SELECT 1 AS order_id, 'completed' AS status, 100 AS total
  UNION ALL
  SELECT 2 AS order_id, 'pending' AS status, 150 AS total
  UNION ALL
  SELECT 3 AS order_id, 'completed' AS status, 200 AS total
  UNION ALL
  SELECT 4 AS order_id, 'cancelled' AS status, 75 AS total
)
SELECT
  sum(total) AS all_orders_total,
  sum(total) FILTER (WHERE status = 'completed') AS completed_orders_total
FROM orders;
```

This query calculates the sum of all order totals and the sum of only completed order totals.

```text
 all_orders_total | completed_orders_total
------------------+------------------------
              525 |                    300
(1 row)
```

### Sum over a window

You can use `sum()` as a window function to calculate running totals:

```sql
WITH monthly_sales AS (
  SELECT '2023-01-01'::date AS month, 10000 AS sales
  UNION ALL
  SELECT '2023-02-01'::date, 12000
  UNION ALL
  SELECT '2023-03-01'::date, 15000
  UNION ALL
  SELECT '2023-04-01'::date, 11000
)
SELECT
  month,
  sales,
  sum(sales) OVER (ORDER BY month) AS running_total
FROM monthly_sales;
```

This query calculates a running total of sales over time.

```text
   month    | sales | running_total
------------+-------+---------------
 2023-01-01 | 10000 |         10000
 2023-02-01 | 12000 |         22000
 2023-03-01 | 15000 |         37000
 2023-04-01 | 11000 |         48000
(4 rows)
```

## Additional considerations

### Null values

The `sum()` function ignores NULL values in its calculations. If all values are NULL, `sum()` returns NULL. Additionally, if there are no rows to sum over, `sum()` returns NULL instead of 0, which might be unexpected.

### Overflow handling

When summing very large numbers, be aware of potential overflow issues. Consider using larger data types (e.g., `bigint` instead of `integer`) or the `numeric` type for precise calculations with large numbers.

### Alternative functions

- `avg()`: Calculates the average of a set of values.
- `count()`: Counts the number of rows or non-null values.
- `max()` and `min()`: Find the maximum and minimum in a set of values.

## Resources

- [PostgreSQL documentation: Aggregate Functions](https://www.postgresql.org/docs/current/functions-aggregate.html)
- [PostgreSQL documentation: Window Functions](https://www.postgresql.org/docs/current/tutorial-window.html)

---

# Source: https://neon.com/llms/functions-trim.txt

# Postgres trim() function

> The document explains the usage of the Postgres `trim()` function within Neon, detailing how to remove specified characters from the beginning and end of a string in a database query.

## Source

- [Postgres trim() function HTML](https://neon.com/docs/functions/trim): The original HTML version of this documentation

The Postgres `trim()` function removes the specified characters from the beginning and/or end of a string. This function is commonly used in data preprocessing tasks, such as cleaning user input before storing it in a database or standardizing data for comparison or analysis. For example, you might use it to remove extra spaces from product names or to standardize phone numbers by removing surrounding parentheses.

## Function signature

The `trim()` function has two forms:

```sql
trim([leading | trailing | both] [characters] from string) -> text
```

- `leading | trailing | both` (optional): Specifies which part of the string to trim. If omitted, it defaults to `both`.
- `characters` (optional): The set of characters to remove. If omitted, it defaults to spaces. - `string`: The input string to trim. ```sql trim(string text [, characters text]) -> text ``` - `string`: The input string to trim. - `characters` (optional): The characters to remove from both ends. If omitted, it defaults to spaces. ## Example usage Consider a table `products` with a `product_name` column that contains product names with inconsistent spacing. We can use `trim()` to standardize these names. ```sql WITH products(product_name) AS ( VALUES (' Laptop '), ('Smartphone '), (' Tablet'), (' Wireless Earbuds ') ) SELECT trim(product_name) AS cleaned_name FROM products; ``` This query removes leading and trailing spaces from the `product_name` column. ```text cleaned_name ------------------ Laptop Smartphone Tablet Wireless Earbuds (4 rows) ``` You can also use `trim()` to remove specific characters from both ends of a string. ```sql WITH order_ids(id) AS ( VALUES ('###ORDER-123###'), ('###ORDER-456###'), ('###ORDER-789###') ) SELECT trim(id, '#') AS cleaned_id FROM order_ids; ``` This query removes the '#' characters from both ends of the `id` column. ```text cleaned_id ------------ ORDER-123 ORDER-456 ORDER-789 (3 rows) ``` ## Advanced examples ### Trim only leading or trailing characters You can specify whether to trim characters from the beginning, end, or both sides of a string. ```sql WITH user_inputs(input) AS ( VALUES ('***Secret Password***'), ('***Admin Access***'), ('***Guest User***') ) SELECT trim(leading '*' from input) AS leading_trimmed, trim(trailing '*' from input) AS trailing_trimmed, trim(both '*' from input) AS both_trimmed FROM user_inputs; ``` The query above demonstrates trimming asterisks from the beginning, end, and both sides of the `input` column, as shown in the following table. ```text leading_trimmed | trailing_trimmed | both_trimmed --------------------+--------------------+----------------- Secret Password*** | ***Secret Password | Secret Password Admin Access*** | ***Admin Access | Admin Access Guest User*** | ***Guest User | Guest User (3 rows) ``` ### Use trim() in a WHERE clause You can use `trim()` in a `WHERE` clause to filter rows based on matching a trimmed value. ```sql WITH product_codes(code) AS ( VALUES (' ABC-123 '), ('DEF-456'), (' ABC-789 '), (' JKL-101 '), ('MNO-202 ') ) SELECT code AS original_code, trim(code) AS trimmed_code FROM product_codes WHERE trim(code) LIKE 'ABC%'; ``` The query above filters for rows where the trimmed `code` column starts with 'ABC', as shown in the following table: ```text original_code | trimmed_code ---------------+-------------- ABC-123 | ABC-123 ABC-789 | ABC-789 (2 rows) ``` ### Combine trim() with other string functions You can combine `trim()` with other string functions for more complex string manipulations. ```sql WITH user_emails(email) AS ( VALUES (' john.doe@example.com '), (' jane.smith@example.org '), (' admin@gmail.com ') ) SELECT trim(email) AS trimmed_email, upper(split_part(trim(email), '@', 1)) AS username FROM user_emails; ``` The query above trims spaces from the email addresses and then extracts and uppercases the username part (before the '@' symbol). 
```text trimmed_email | username ------------------------+------------ john.doe@example.com | JOHN.DOE jane.smith@example.org | JANE.SMITH admin@gmail.com | ADMIN (3 rows) ``` ## Additional considerations ### Performance implications While `trim()` is generally efficient, using it extensively on large datasets, especially in `WHERE` clauses, may impact query performance. If you frequently filter or join based on trimmed values, consider creating a functional index on the trimmed column. ### Handling NULL values The `trim()` function returns NULL if the input string is NULL. Be aware of this when working with potentially NULL columns to avoid unexpected results. ### Alternative functions - `ltrim()` - Removes specified characters from the beginning (left side) of a string. - `rtrim()` - Removes specified characters from the end (right side) of a string. - `btrim()` - Removes specified characters from both the beginning and end of a string. - `regexp_replace()` - Can be used for more complex trimming operations using regular expressions. ## Resources - [PostgreSQL documentation: String functions and operators](https://www.postgresql.org/docs/current/functions-string.html) - [PostgreSQL documentation: Pattern matching](https://www.postgresql.org/docs/current/functions-matching.html) --- # Source: https://neon.com/llms/functions-window-lag.txt # Postgres lag() window function > The document explains the usage of the Postgres `lag()` window function within Neon, detailing its syntax and application for accessing data from preceding rows in a result set. ## Source - [Postgres lag() window function HTML](https://neon.com/docs/functions/window-lag): The original HTML version of this documentation The `lag()` function in Postgres is a window function that allows you to access values from previous rows in a result set without the need for a self-join. It's useful for comparing values between the current row and a previous row, for example, when calculating running differences, plotting trends, or doing time series analysis. ## Function signature The `lag()` function has the following forms: ```sql lag(value any [, offset integer [, default any ]]) over (...) ``` - `value`: The value to return from the previous row. This can be a column, expression, or subquery. - `offset` (optional): The number of rows back from the current row to retrieve the value from. If omitted, it defaults to 1. Must be a non-negative integer. - `default` (optional): The value to return when the offset goes beyond the scope of the window. If omitted, it defaults to null. - `over (...)`: The `OVER` clause defines the window frame for the function. It can be an empty `OVER ()`, or it can include a `PARTITION BY` and/or `ORDER BY` clause. ## Example usage Consider a table `sales` that contains daily sales data for a company. We can use `lag()` to compare each day's sales to the previous day's sales. ```sql WITH sales AS ( SELECT date '2023-01-01' AS sale_date, 1000 AS amount UNION ALL SELECT date '2023-01-02' AS sale_date, 1500 AS amount UNION ALL SELECT date '2023-01-03' AS sale_date, 1200 AS amount UNION ALL SELECT date '2023-01-04' AS sale_date, 1800 AS amount ) SELECT sale_date, amount, lag(amount) OVER (ORDER BY sale_date) AS prev_amount, amount - lag(amount) OVER (ORDER BY sale_date) AS diff FROM sales; ``` This query calculates the previous day's sales amount (`prev_amount`) and the difference between the current day's sales and the previous day's sales (`diff`). 
The `OVER` clause specifies that the window frame should be ordered by `sale_date`. ```text sale_date | amount | prev_amount | diff ------------+--------+-------------+------- 2023-01-01 | 1000 | | 2023-01-02 | 1500 | 1000 | 500 2023-01-03 | 1200 | 1500 | -300 2023-01-04 | 1800 | 1200 | 600 (4 rows) ``` You can also use `lag()` to access values from rows further back by specifying an offset. For example, to compare each day's sales to the sales from the same day of the previous week: ```sql WITH sales AS ( SELECT sale_date, floor(random() * 1000 + 1)::int AS amount FROM generate_series(date '2023-01-01', date '2023-01-31', interval '1 day') AS sale_date ) SELECT sale_date, amount, lag(amount, 7) OVER (ORDER BY sale_date) AS prev_week_amount, amount - lag(amount, 7) OVER (ORDER BY sale_date) AS diff FROM sales ORDER BY sale_date DESC LIMIT 5; ``` This query generates random sales data for each day in January 2023 and compares each day's sales to the sales from the same day of the previous week. The `lag()` function with an offset of 7 retrieves the sales amount from 7 days ago. ```text sale_date | amount | prev_week_amount | diff ------------------------+--------+------------------+------ 2023-01-31 00:00:00+00 | 245 | 64 | 181 2023-01-30 00:00:00+00 | 736 | 789 | -53 2023-01-29 00:00:00+00 | 208 | 763 | -555 2023-01-28 00:00:00+00 | 710 | 899 | -189 2023-01-27 00:00:00+00 | 1 | 229 | -228 (5 rows) ``` ## Advanced examples ### Using `lag()` with a default value When the offset in `lag()` goes beyond the start of the window frame, it returns null by default. You can specify a default value to use instead, so the resulting column does not contain nulls. ```sql WITH inventory AS ( SELECT date '2023-01-01' AS snapshot_date, 100 AS quantity UNION ALL SELECT date '2023-01-02' AS snapshot_date, 80 AS quantity UNION ALL SELECT date '2023-01-03' AS snapshot_date, 120 AS quantity UNION ALL SELECT date '2023-01-04' AS snapshot_date, 90 AS quantity ) SELECT snapshot_date, quantity, lag(quantity, 1, quantity) OVER (ORDER BY snapshot_date) AS prev_quantity, quantity - lag(quantity, 1, quantity) OVER (ORDER BY snapshot_date) AS change FROM inventory; ``` This query calculates the change in inventory quantity compared to the previous day. For the first row, where there is no previous quantity, it uses the current quantity as the default value, resulting in a change of 0. ```text snapshot_date | quantity | prev_quantity | change ---------------+----------+---------------+-------- 2023-01-01 | 100 | 100 | 0 2023-01-02 | 80 | 100 | -20 2023-01-03 | 120 | 80 | 40 2023-01-04 | 90 | 120 | -30 (4 rows) ``` ### Using `lag()` with partitioning You can use `lag()` with partitioning to perform calculations within groups of rows. 
```sql WITH orders AS ( SELECT 1 AS order_id, date '2023-01-01' AS order_date, 100 AS amount, 1 AS customer_id UNION ALL SELECT 2 AS order_id, date '2023-01-02' AS order_date, 150 AS amount, 1 AS customer_id UNION ALL SELECT 3 AS order_id, date '2023-01-03' AS order_date, 200 AS amount, 2 AS customer_id UNION ALL SELECT 4 AS order_id, date '2023-01-04' AS order_date, 120 AS amount, 1 AS customer_id UNION ALL SELECT 5 AS order_id, date '2023-01-05' AS order_date, 180 AS amount, 2 AS customer_id ) SELECT order_id, order_date, amount, customer_id, lag(order_date) OVER (PARTITION BY customer_id ORDER BY order_date) AS prev_order_date, order_date - lag(order_date) OVER (PARTITION BY customer_id ORDER BY order_date) AS days_since_last_order FROM orders; ``` This query calculates the number of days since each customer's previous order. The `OVER` clause partitions the data by `customer_id` and orders it by `order_date` within each partition. ```text order_id | order_date | amount | customer_id | prev_order_date | days_since_last_order ----------+------------+--------+-------------+-----------------+----------------------- 1 | 2023-01-01 | 100 | 1 | | 2 | 2023-01-02 | 150 | 1 | 2023-01-01 | 1 4 | 2023-01-04 | 120 | 1 | 2023-01-02 | 2 3 | 2023-01-03 | 200 | 2 | | 5 | 2023-01-05 | 180 | 2 | 2023-01-03 | 2 (5 rows) ``` ## Additional considerations ### Correctness The `lag()` function relates each row in the result set to a previous row in the same window frame. If the window frame is not explicitly defined, the default frame is the entire result set. Make sure to specify the correct `ORDER BY` and `PARTITION BY` clauses to ensure the desired behavior. ### Performance implications Window functions like `lag()` perform calculations across a set of rows defined by the `OVER` clause. This can be computationally expensive for large datasets or complex window definitions. To optimize performance, make sure to: - Include an `ORDER BY` clause in the `OVER` clause to avoid sorting the entire dataset. - Use partitioning (`PARTITION BY`) to divide the data into smaller chunks when possible. - Create appropriate indexes on the columns used in the `OVER` clause. ### Alternative functions - [lead](https://neon.com/docs/functions/window-lead) - Access values from subsequent rows in a result set. Similar to `lag()` but looks ahead in the partition instead of behind. - `first_value()` - Get the first value within a window frame. - `last_value()` - Get the last value within a window frame. ## Resources - [PostgreSQL documentation: Window functions](https://www.postgresql.org/docs/current/tutorial-window.html) - [PostgreSQL documentation: Lag function](https://www.postgresql.org/docs/current/functions-window.html#FUNCTIONS-WINDOW-TABLE) --- # Source: https://neon.com/llms/functions-window-lead.txt # Postgres lead() window function > The document explains the usage of the Postgres `lead()` window function in Neon, detailing how to retrieve subsequent row values within a result set for advanced data analysis. ## Source - [Postgres lead() window function HTML](https://neon.com/docs/functions/window-lead): The original HTML version of this documentation The `lead()` function in Postgres is a window function that allows you to access values from subsequent rows in a result set without the need for a self-join. It's useful for comparing values between the current row and a later row, for example, when calculating the time until the next event, determining the next event in a sequence, or analyzing trends in time series data. 
## Function signature The `lead()` function has the following forms: ```sql lead(value any [, offset integer [, default any ]]) over (...) ``` - `value`: The value to return from the subsequent row. This can be a column, expression, or subquery. - `offset` (optional): The number of rows ahead of the current row to retrieve the value from. If omitted, it defaults to 1. Must be a non-negative integer. - `default` (optional): The value to return when the offset goes beyond the scope of the window. If omitted, it defaults to null. - `over (...)`: The `OVER` clause defines the window frame for the function. It can be an empty `OVER ()`, or it can include a `PARTITION BY` and/or `ORDER BY` clause. ## Example usage Consider a table `shipments` that contains information about product shipments. We can use `lead()` to determine the next scheduled shipment date for each product. ```sql WITH shipments AS ( SELECT 1 AS product_id, date '2023-01-01' AS ship_date UNION ALL SELECT 1 AS product_id, date '2023-01-15' AS ship_date UNION ALL SELECT 2 AS product_id, date '2023-01-05' AS ship_date UNION ALL SELECT 1 AS product_id, date '2023-02-01' AS ship_date UNION ALL SELECT 2 AS product_id, date '2023-01-20' AS ship_date ) SELECT product_id, ship_date, lead(ship_date) OVER (PARTITION BY product_id ORDER BY ship_date) AS next_ship_date, lead(ship_date) OVER (PARTITION BY product_id ORDER BY ship_date) - ship_date AS days_until_next_shipment FROM shipments; ``` This query calculates the next shipment date (`next_ship_date`) and the number of days until the next shipment (`days_until_next_shipment`) for each product. The `OVER` clause partitions the data by `product_id` and orders it by `ship_date` within each partition. ```text product_id | ship_date | next_ship_date | days_until_next_shipment ------------+------------+----------------+-------------------------- 1 | 2023-01-01 | 2023-01-15 | 14 1 | 2023-01-15 | 2023-02-01 | 17 1 | 2023-02-01 | | 2 | 2023-01-05 | 2023-01-20 | 15 2 | 2023-01-20 | | (5 rows) ``` You can also use `lead()` to access values from rows further ahead by specifying an offset. For example, to compute the net return on investment for a stock ticker over each 2-year period: ```sql WITH stock_prices AS ( SELECT 'AAPL' AS ticker, date '2018-01-01' AS price_date, 41.54 AS price UNION ALL SELECT 'AAPL' AS ticker, date '2019-01-01' AS price_date, 39.48 AS price UNION ALL SELECT 'AAPL' AS ticker, date '2020-01-01' AS price_date, 74.60 AS price UNION ALL SELECT 'AAPL' AS ticker, date '2021-01-01' AS price_date, 131.96 AS price UNION ALL SELECT 'AAPL' AS ticker, date '2022-01-01' AS price_date, 182.01 AS price UNION ALL SELECT 'AAPL' AS ticker, date '2023-01-01' AS price_date, 129.93 AS price ) SELECT ticker, price_date, price, lead(price, 2) OVER (PARTITION BY ticker ORDER BY price_date) AS price_2_years_later, round(100.0 * (lead(price, 2) OVER (PARTITION BY ticker ORDER BY price_date) - price) / price, 2) AS two_year_return_pct FROM stock_prices; ``` This query calculates the price of each stock ticker 2 years later (`price_2_years_later`) and the percentage return on investment (`two_year_return_pct`) for each ticker. The `OVER` clause partitions the data by `ticker` and orders it by `price_date` within each partition. 
```text ticker | price_date | price | price_2_years_later | two_year_return_pct --------+------------+--------+---------------------+--------------------- AAPL | 2018-01-01 | 41.54 | 74.60 | 79.59 AAPL | 2019-01-01 | 39.48 | 131.96 | 234.25 AAPL | 2020-01-01 | 74.60 | 182.01 | 143.98 AAPL | 2021-01-01 | 131.96 | 129.93 | -1.54 AAPL | 2022-01-01 | 182.01 | | AAPL | 2023-01-01 | 129.93 | | (6 rows) ``` ## Advanced examples ### Using `lead()` with a default value When the offset in `lead()` goes beyond the end of the window frame, it returns null by default. You can specify a default value to use instead, so the resulting column does not contain nulls. ```sql WITH tasks AS ( SELECT 1 AS project_id, 1 AS task_id, date '2023-01-01' AS start_date, date '2023-01-05' AS end_date UNION ALL SELECT 1 AS project_id, 2 AS task_id, date '2023-01-07' AS start_date, date '2023-01-10' AS end_date UNION ALL SELECT 1 AS project_id, 3 AS task_id, date '2023-01-10' AS start_date, date '2023-01-15' AS end_date UNION ALL SELECT 2 AS project_id, 1 AS task_id, date '2023-01-01' AS start_date, date '2023-01-10' AS end_date UNION ALL SELECT 2 AS project_id, 2 AS task_id, date '2023-01-11' AS start_date, date '2023-01-20' AS end_date ) SELECT project_id, task_id, start_date, end_date, lead(start_date, 1, end_date) OVER (PARTITION BY project_id ORDER BY start_date) AS next_start_date FROM tasks; ``` This query determines the start date of the next task in each project. For the last task in each project, where there is no next start date, it uses the current task's end date as the default value. ```text project_id | task_id | start_date | end_date | next_start_date ------------+---------+------------+------------+----------------- 1 | 1 | 2023-01-01 | 2023-01-05 | 2023-01-07 1 | 2 | 2023-01-07 | 2023-01-10 | 2023-01-10 1 | 3 | 2023-01-10 | 2023-01-15 | 2023-01-15 2 | 1 | 2023-01-01 | 2023-01-10 | 2023-01-11 2 | 2 | 2023-01-11 | 2023-01-20 | 2023-01-20 (5 rows) ``` ### Using `lead()` with multiple partitions You can use `lead()` with multiple partitions to perform calculations within different groups of rows simultaneously. ```sql WITH readings AS ( SELECT 1 AS device_id, date '2023-01-01' AS reading_date, 25.5 AS temperature UNION ALL SELECT 1 AS device_id, date '2023-01-02' AS reading_date, 26.0 AS temperature UNION ALL SELECT 2 AS device_id, date '2023-01-01' AS reading_date, 22.1 AS temperature UNION ALL SELECT 1 AS device_id, date '2023-01-03' AS reading_date, 25.8 AS temperature UNION ALL SELECT 2 AS device_id, date '2023-01-02' AS reading_date, 21.9 AS temperature ) SELECT device_id, reading_date, temperature, lead(temperature) OVER (PARTITION BY device_id ORDER BY reading_date) AS next_temperature, lead(temperature) OVER (PARTITION BY device_id ORDER BY reading_date) - temperature AS temperature_change FROM readings; ``` This query calculates the next temperature reading (`next_temperature`) and the change in temperature (`temperature_change`) for each device. The `OVER` clause partitions the data by `device_id` and orders it by `reading_date` within each partition, allowing the analysis to be performed separately for each device. 
```text device_id | reading_date | temperature | next_temperature | temperature_change -----------+--------------+-------------+------------------+-------------------- 1 | 2023-01-01 | 25.5 | 26.0 | 0.5 1 | 2023-01-02 | 26.0 | 25.8 | -0.2 1 | 2023-01-03 | 25.8 | | 2 | 2023-01-01 | 22.1 | 21.9 | -0.2 2 | 2023-01-02 | 21.9 | | (5 rows) ``` ## Additional considerations ### Correctness The `lead()` function relates each row in the result set to a subsequent row in the same window frame. If the window frame is not explicitly defined, the default frame is the entire partition or result set. Make sure to specify the correct `ORDER BY` and `PARTITION BY` clauses to ensure the desired behavior. ### Performance implications Window functions like `lead()` perform calculations across a set of rows defined by the `OVER` clause. This can be computationally expensive, especially for large datasets or complex window definitions. To optimize performance, make sure to: - Include an `ORDER BY` clause in the `OVER` clause to avoid sorting the entire dataset. - Use partitioning (`PARTITION BY`) to divide the data into smaller chunks when possible. - Create appropriate indexes on the columns used in the `OVER` clause. ### Alternative functions - [lag](https://neon.com/docs/functions/window-lag) - Access values from previous rows in a result set. Similar to `lead()` but looks behind in the partition instead of ahead. - `first_value()` - Get the first value within a window frame. - `last_value()` - Get the last value within a window frame. ## Resources - [PostgreSQL documentation: Window functions](https://www.postgresql.org/docs/current/tutorial-window.html) - [PostgreSQL documentation: Lead function](https://www.postgresql.org/docs/current/functions-window.html#FUNCTIONS-WINDOW-TABLE) --- # Source: https://neon.com/llms/functions-window-rank.txt # Postgres rank() window function > The document explains the usage of the Postgres `rank()` window function within Neon, detailing how it assigns ranks to rows in a result set based on specified criteria. ## Source - [Postgres rank() window function HTML](https://neon.com/docs/functions/window-rank): The original HTML version of this documentation The `rank()` window function computes a ranking for each row within a partition of the result set. The rank is determined by the order of rows specified in the `ORDER BY` clause of the `OVER` clause. Rows with equal values for the ranking criteria receive the same rank, with the next rank(s) skipped. This function is useful in scenarios such as finding the top N rows per group, calculating percentiles, or generating leaderboards. ## Function signature The `rank()` function has the following form: ```sql rank() OVER ([PARTITION BY partition_expression] ORDER BY order_expression) ``` The `OVER` clause defines the window frame for the function. - The `ORDER BY` clause specifies the order in which ranks are assigned to rows. - The `PARTITION BY` clause is optional - if specified, it divides the result set into partitions and ranks are assigned within each partition. Otherwise, ranks are computed for each row over the entire result set. ## Example usage Consider an `employees` table with columns for employee ID, name, department, and salary. We can use `rank()` to rank employees within each department by their salary. 
```sql WITH sample_data AS ( SELECT * FROM ( VALUES ('Alice', 'Sales', 50000), ('Bob', 'Marketing', 55000), ('Charlie', 'Sales', 52000), ('David', 'IT', 60000), ('Eve', 'Marketing', 55000), ('Frank', 'IT', 62000) ) AS t(employee_name, department, salary) ) SELECT employee_name, department, salary, RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_salary_rank FROM sample_data ORDER BY department, dept_salary_rank; ``` This query ranks employees within each department based on their salary in descending order. Employees with the same salary within a department receive the same rank. ```text employee_name | department | salary | dept_salary_rank ---------------+------------+--------+------------------ Frank | IT | 62000 | 1 David | IT | 60000 | 2 Bob | Marketing | 55000 | 1 Eve | Marketing | 55000 | 1 Charlie | Sales | 52000 | 1 Alice | Sales | 50000 | 2 (6 rows) ``` ## Advanced examples ### Top N per group You can use `rank()` in a subquery to find the top N rows per group. ```sql WITH products AS ( SELECT * FROM ( VALUES (1, 'A', 100), (2, 'A', 80), (3, 'B', 200), (4, 'B', 180), (5, 'B', 150), (6, 'C', 120) ) AS t(product_id, category, price) ) SELECT * FROM ( SELECT product_id, category, price, rank() OVER (PARTITION BY category ORDER BY price DESC) AS rank FROM products ) ranked WHERE rank <= 2; ``` This query finds the top 2 most expensive products in each category. The subquery ranks products within each category by price, and the outer query filters for rows with a rank less than or equal to 2. ```text product_id | category | price | rank ------------+----------+-------+------ 1 | A | 100 | 1 2 | A | 80 | 2 3 | B | 200 | 1 4 | B | 180 | 2 6 | C | 120 | 1 (5 rows) ``` ### Percentile calculation You can calculate percentiles using the `rank()` function with some arithmetic. ```sql WITH scores AS ( SELECT * FROM ( VALUES ('Student 1', 85), ('Student 2', 92), ('Student 3', 78), ('Student 4', 90), ('Student 5', 88) ) AS t(student, score) ) SELECT student, score, rank() OVER (ORDER BY score) AS rank, round(100.0 * rank() OVER (ORDER BY score) / (SELECT count(*) FROM scores), 2) AS percentile FROM scores; ``` This query calculates the percentile rank for each student based on their score. The percentile is calculated by dividing the rank of each row by the total number of rows and multiplying by 100. ```text student | score | rank | percentile -----------+-------+------+------------ Student 3 | 78 | 1 | 20.00 Student 1 | 85 | 2 | 40.00 Student 5 | 88 | 3 | 60.00 Student 4 | 90 | 4 | 80.00 Student 2 | 92 | 5 | 100.00 (5 rows) ``` ## Alternative functions ### dense_rank The `dense_rank()` function is similar to `rank()`, but it does not skip ranks when there are ties. If multiple rows have the same rank, the next rank will be the next consecutive integer. ```sql WITH scores AS ( SELECT * FROM ( VALUES ('Player 1', 100), ('Player 2', 95), ('Player 3', 95), ('Player 4', 90) ) AS t(player, score) ) SELECT player, score, rank() OVER (ORDER BY score DESC) AS rank, dense_rank() OVER (ORDER BY score DESC) AS dense_rank FROM scores; ``` This query demonstrates the difference between `rank()` and `dense_rank()`. While `rank()` skips rank 3 due to the tie at rank 2, `dense_rank()` assigns consecutive ranks. 
```text player | score | rank | dense_rank ----------+-------+------+------------ Player 1 | 100 | 1 | 1 Player 2 | 95 | 2 | 2 Player 3 | 95 | 2 | 2 Player 4 | 90 | 4 | 3 (4 rows) ``` ### row_number The `row_number()` function assigns a unique, sequential integer to each row within the partition of a result set. Unlike `rank()` and `dense_rank()`, it does not handle ties. ```sql WITH sales AS ( SELECT date '2023-01-01' AS sale_date, 1000 AS amount UNION ALL SELECT date '2023-01-01', 1500 UNION ALL SELECT date '2023-01-02', 1200 UNION ALL SELECT date '2023-01-02', 1200 ) SELECT sale_date, amount, row_number() OVER (PARTITION BY sale_date ORDER BY amount DESC) AS row_num FROM sales; ``` This query assigns a unique row number to each sale within a date, ordered by the sale amount descending. Even though there are ties for the date `2023-01-02`, each row receives a distinct row number. ```text sale_date | amount | row_num ------------+--------+--------- 2023-01-01 | 1500 | 1 2023-01-01 | 1000 | 2 2023-01-02 | 1200 | 1 2023-01-02 | 1200 | 2 (4 rows) ``` ## Additional considerations ### Handling ties The `rank()` and `dense_rank()` functions handle ties differently. `rank()` assigns the same rank to tied rows and skips the next rank(s), while `dense_rank()` assigns the same rank to tied rows but does not skip ranks. Choose the appropriate function based on your requirements. ### Performance implications Like other window functions, `rank()` performs calculations across a set of rows defined by the `OVER` clause. This can be computationally expensive, especially for large datasets or complex window definitions. To optimize performance: - Include an `ORDER BY` clause in the `OVER` clause to avoid sorting the entire dataset. - Use partitioning (`PARTITION BY`) to divide the data into smaller chunks when possible. - Create appropriate indexes on the columns used in the `OVER` clause. ## Resources - [PostgreSQL documentation: Window functions](https://www.postgresql.org/docs/current/functions-window.html) - [PostgreSQL documentation: Tutorial on window functions](https://www.postgresql.org/docs/current/tutorial-window.html) --- # Source: https://neon.com/llms/get-started-azure-get-started.txt # Get started with Neon on Azure > The document "Get started with Neon on Azure" guides users through the process of deploying and managing Neon databases on the Azure cloud platform, detailing setup, configuration, and integration steps specific to Azure's environment. ## Source - [Get started with Neon on Azure HTML](https://neon.com/docs/get-started/azure-get-started): The original HTML version of this documentation ## Find Neon on Azure and subscribe 1. Use the search in the [Azure portal](https://portal.azure.com/) to find the **Neon Serverless Postgres** offering. 2. Alternatively, go to the [Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home) and search for **Neon Serverless Postgres**. 3. Subscribe to the service. After subscribing, you will be directed to the [Create a Neon Resource](https://neon.com/docs/get-started/azure-get-started#create-a-neon-resource) page. ## Create a Neon Resource 1. On the **Create a Neon Serverless Postgres Resource** page, set the following values in the **Create Neon Resource** section. 
| Property | Description |
| --- | --- |
| **Subscription** | From the drop-down, select your Azure subscription where you have Owner or Contributor access. |
| **Resource group** | Specify whether you want to create a new Azure resource group or use an existing one. A resource group is like a container or a folder used to organize and manage resources in Azure. For more information, see [Azure Resource Group overview](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview). |
| **Resource Name** | Enter a name for the Neon resource. This name is used only in Azure. |
| **Region** | Select a region to deploy your resource. Neon Serverless Postgres lets you choose a region when creating a project. |
| **Neon Organization name** | The name you assign to your [Neon Organization](https://neon.com/docs/reference/glossary#organization), such as a team name or company name. The name you specify appears as your [Organization](https://neon.com/docs/reference/glossary#organization) name in the Neon Console. |
| **Pricing Plan** | Select a plan. For information about Neon's plans, please refer to the [Neon Pricing](https://neon.com/pricing) page. Neon offers a Free plan and paid plans you can choose from. |

**Note**: The Neon **Launch Plan** is currently not available in the Azure Marketplace.

2. After specifying the details above, select **Next: Review + Create** to navigate to the final step for resource creation. When you get to the **Review + Create** page, review your selections and the Neon and Azure Marketplace terms and conditions.
3. Select **Overview** in the **Resource** menu to see information on the deployed resource.
4. Select the **Single-Sign-On** URL to redirect to the newly created Neon Organization.

---

# Source: https://neon.com/llms/get-started-connect-neon.txt

# Connecting Neon to your stack

> The document "Connecting Neon to your stack" offers detailed instructions for integrating Neon with various development environments and tools, facilitating seamless connectivity and interaction with Neon's database services.

## Source

- [Connecting Neon to your stack HTML](https://neon.com/docs/get-started/connect-neon): The original HTML version of this documentation

Using Neon as the serverless database in your tech stack means configuring connections. Whether it's a direct connection string from your language or framework, setting environment variables for your deployment platform, connecting to ORMs like Prisma, or configuring deployment settings for CI/CD workflows, it starts with the connection.

## Connecting to your application

This section provides connection string samples for various frameworks and languages, helping you integrate Neon into your tech stack.
Tab: psql

```bash
# psql example connection string
psql "postgresql://username:password@hostname:5432/database?sslmode=require&channel_binding=require"
```

Tab: .env

```ini
# .env example
PGHOST=hostname
PGDATABASE=database
PGUSER=username
PGPASSWORD=password
PGPORT=5432
```

Tab: Next.js

```javascript
// Next.js example
import postgres from 'postgres';

let { PGHOST, PGDATABASE, PGUSER, PGPASSWORD } = process.env;

const conn = postgres({
  host: PGHOST,
  database: PGDATABASE,
  username: PGUSER,
  password: PGPASSWORD,
  port: 5432,
  ssl: 'require',
});

function selectAll() {
  return conn`SELECT * FROM hello_world`;
}
```

Tab: Drizzle

```javascript
// Drizzle example with the Neon serverless driver
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';

const sql = neon(process.env.DATABASE_URL);
const db = drizzle(sql);

const result = await db.select().from(...);
```

Tab: Prisma

```javascript
// Prisma example with the Neon serverless driver
import { neon } from '@neondatabase/serverless';
import { PrismaNeonHTTP } from '@prisma/adapter-neon';
import { PrismaClient } from '@prisma/client';

const sql = neon(process.env.DATABASE_URL);
const adapter = new PrismaNeonHTTP(sql);
const prisma = new PrismaClient({ adapter });
```

Tab: Python

```python
# Python example with psycopg2
import os
import psycopg2

# Load the environment variable
database_url = os.getenv('DATABASE_URL')

# Connect to the PostgreSQL database
conn = psycopg2.connect(database_url)

with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())

# Close the connection
conn.close()
```

Tab: .NET

```csharp
# .NET example

## Connection string
"Host=ep-cool-darkness-123456.us-east-2.aws.neon.tech;Database=dbname;Username=alex;Password=AbC123dEf"

## with SSL
"Host=ep-cool-darkness-123456.us-east-2.aws.neon.tech;Database=dbname;Username=alex;Password=AbC123dEf;SSL Mode=Require;Trust Server Certificate=true"

## Entity Framework (appsettings.json)
{
  ...
  "ConnectionStrings": {
    "DefaultConnection": "Host=ep-cool-darkness-123456.us-east-2.aws.neon.tech;Database=dbname;Username=alex;Password=AbC123dEf;SSL Mode=Require;Trust Server Certificate=true"
  },
  ...
}
```

Tab: Ruby

```ruby
# Ruby example
require 'pg'
require 'dotenv'

# Load environment variables from .env file
Dotenv.load

# Connect to the PostgreSQL database using the environment variable
conn = PG.connect(ENV['DATABASE_URL'])

# Execute a query
conn.exec("SELECT version()") do |result|
  result.each do |row|
    puts "Result = #{row['version']}"
  end
end

# Close the connection
conn.close
```

Tab: Rust

```rust
// Rust example
use postgres::Client;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::error;
use std::env;
use dotenv::dotenv;

fn main() -> Result<(), Box<dyn error::Error>> {
    // Load environment variables from .env file
    dotenv().ok();

    // Get the connection string from the environment variable
    let conn_str = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let mut client = Client::connect(&conn_str, connector)?;

    for row in client.query("select version()", &[])?
    {
        let ret: String = row.get(0);
        println!("Result = {}", ret);
    }

    Ok(())
}
```

Tab: Go

```go
// Go example
package main

import (
    "database/sql"
    "fmt"
    "log"
    "os"

    _ "github.com/lib/pq"
    "github.com/joho/godotenv"
)

func main() {
    err := godotenv.Load()
    if err != nil {
        log.Fatalf("Error loading .env file: %v", err)
    }

    connStr := os.Getenv("DATABASE_URL")
    if connStr == "" {
        panic("DATABASE_URL environment variable is not set")
    }

    db, err := sql.Open("postgres", connStr)
    if err != nil {
        panic(err)
    }
    defer db.Close()

    var version string
    if err := db.QueryRow("select version()").Scan(&version); err != nil {
        panic(err)
    }
    fmt.Printf("version=%s\n", version)
}
```

## Obtaining connection details

When connecting to Neon from an application or client, you connect to a database in your Neon project. In Neon, a database belongs to a branch, which may be the default branch of your project (`production`) or a child branch.

You can obtain the database connection details you require by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a compute, a database, and a role. A connection string is constructed for you.

Neon supports pooled and direct connections to the database. Use a pooled connection string if your application uses a high number of concurrent connections. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling#connection-pooling).

A Neon connection string includes the role, password, hostname, and database name.

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
             ^    ^         ^                        ^                               ^
       role -|    |         |- hostname              |- pooler option                |- database
                  |- password
```

**Note**: The hostname includes the ID of the compute, which has an `ep-` prefix: `ep-cool-darkness-a1b2c3d4`. For more information about Neon connection strings, see [Connection string](https://neon.com/docs/reference/glossary#connection-string).

## Using connection details

You can use the details from the connection string or the connection string itself to configure a connection. For example, you might place the connection details in an `.env` file, assign the connection string to a variable, or pass the connection string on the command-line.

### `.env` file

```text
PGUSER=alex
PGHOST=ep-cool-darkness-a1b2c3d4.us-east-2.aws.neon.tech
PGDATABASE=dbname
PGPASSWORD=AbC123dEf
PGPORT=5432
```

### Variable

```text
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

### Command-line

```bash
psql "postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

**Note**: Neon requires that all connections use SSL/TLS encryption, but you can increase the level of protection by appending an `sslmode` parameter setting to your connection string. For instructions, see [Connect to Neon securely](https://neon.com/docs/connect/connect-securely).

## FAQs

### Where do I obtain a password?

It's included in your Neon connection string, which you can find by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.

### What port does Neon use?

Neon uses the default Postgres port, `5432`.
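### What does a pooled connection string look like?

For illustration, here are direct and pooled versions of the example connection string used on this page; the only difference is the `-pooler` suffix on the endpoint hostname:

```text
# Direct connection
postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require

# Pooled connection (note the -pooler suffix)
postgresql://alex:AbC123dEf@ep-cool-darkness-a1b2c3d4-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```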
## Network protocol support

Neon projects provisioned on AWS support both [IPv4](https://en.wikipedia.org/wiki/Internet_Protocol_version_4) and [IPv6](https://en.wikipedia.org/wiki/IPv6) addresses. Neon projects provisioned on Azure currently only support IPv4.

Additionally, Neon provides a serverless driver that supports both WebSocket and HTTP connections. For further information, refer to our [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) documentation.

## Connection notes

- Some older client libraries and drivers, including older `psql` executables, are built without [Server Name Indication (SNI)](https://neon.com/docs/reference/glossary#sni) support and require a workaround. For more information, see [Connection errors](https://neon.com/docs/connect/connection-errors).
- Some Java-based tools that use the pgJDBC driver for connecting to Postgres, such as DBeaver, DataGrip, and CLion, do not support including a role name and password in a database connection string or URL field. When you find that a connection string is not accepted, try entering the database name, role, and password values in the appropriate fields in the tool's connection UI.

---

# Source: https://neon.com/llms/get-started-dev-experience.txt

# Developer experience with Neon

> The document outlines the developer experience with Neon, detailing the setup process, integration capabilities, and tools available for efficient database management and development within the Neon environment.

## Source

- [Developer experience with Neon HTML](https://neon.com/docs/get-started/dev-experience): The original HTML version of this documentation

Discover how Neon's features can streamline your development process, reduce risks, and enhance productivity, helping you to ship faster with confidence.

## Developer velocity with database branching workflows

**Branch your data like code for local and preview development workflows.**

Neon's branching feature lets you branch your data like you branch code. Neon branches are full database copies, including both schema and data — but we also support [schema-only branches](https://neon.com/docs/guides/branching-schema-only#schema-only-branching-example) for those working with sensitive data. You can instantly create database branches for integration with your development workflows.

You can build your database branching workflows using the [Neon CLI](https://neon.com/docs/reference/neon-cli), [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), or [GitHub Actions](https://neon.com/docs/guides/branching-github-actions). For example, here's how to create a development branch from `production` with a simple CLI command:

```bash
neon branches create --name feature/user-auth
```

Neon's copy-on-write technique makes branching instantaneous and cost-efficient. Whether your database is 1 GB or 1 TiB, [it only takes seconds to create a branch](https://neon.com/blog/how-to-copy-large-postgres-databases-in-seconds), and Neon's branches are full database copies by default — with schema-only as an option.

Also, with Neon, you can easily keep your development branches up-to-date by resetting your schema and data to the latest from `production` with a simple command.

```bash
neon branches reset feature/user-auth --parent
```

No more time-consuming restore operations when you need a fresh database copy.

You can use branching with deployment platforms such as Vercel to create a database branch for each preview deployment.
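As a minimal sketch of that pattern, a preview-deployment hook could create and remove a branch per pull request; the branch name and PR number here are hypothetical, and the commands are the same `neon branches` commands shown above:

```bash
# Create an isolated database branch for the preview deployment of PR #42 (hypothetical)
neon branches create --name preview/pr-42 --parent production

# Delete the branch when the preview deployment is torn down
neon branches delete preview/pr-42
```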
If you'd rather not build your own branching workflow, you can use the [Neon-managed Vercel integration](https://neon.com/docs/guides/neon-managed-vercel-integration) to set one up in just a few clicks.

To learn more, read [Database Branching Workflows](https://neon.com/branching) and the [Database branching workflow guide for developers](https://neon.com/blog/database-branching-workflows-a-guide-for-developers).

**Tip** Compare database branches with Schema Diff: Neon's Schema Diff tool lets you compare the schemas for two selected branches in a side-by-side view. For more, see [Schema Diff](https://neon.com/docs/guides/schema-diff).

## Instant restore

**Instant restore with time travel**

We've all heard about multi-hour outages and data losses due to errant queries or problematic migrations. Neon's [Instant restore](https://neon.com/docs/guides/branch-restore) feature allows you to instantly restore your data to a point in time before the issue occurred.

With Neon, you can perform a restore operation in a few clicks, letting you get back online in the time it takes to choose a restore point, which can be a date and time or a [Log Sequence Number (LSN)](https://neon.com/docs/reference/glossary#lsn).

To help you find the correct restore point, Neon provides a [Time Travel Assist](https://neon.com/docs/guides/time-travel-assist) feature that lets you connect to any selected time or LSN within your database history and run queries. Time Travel Assist is designed to work in tandem with Neon's restore capability to facilitate precise and informed restore operations.

## Low-latency connections

**Connect from Edge and serverless environments.**

The [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver), which currently has over [300K weekly downloads](https://www.npmjs.com/package/@neondatabase/serverless), is a low-latency Postgres driver designed for JavaScript and TypeScript applications. It enables you to query data from edge and serverless environments like **Vercel Edge Functions** or **Cloudflare Workers** over HTTP or WebSockets instead of TCP. This capability is particularly useful for achieving reduced query latencies, with the potential to achieve [sub-10ms Postgres query times](https://neon.com/blog/sub-10ms-postgres-queries-for-vercel-edge-functions) when querying from Edge or serverless functions.

But don't take our word for it. Try it for yourself with Vercel's [Functions + Database Latency app](https://db-latency.vercel.app/), which charts query latencies for Neon's serverless driver.

## Postgres extension support

**No database is more extensible than Postgres.**

Postgres extensions are add-ons that enhance the functionality of Postgres, letting you tailor your Postgres database to your specific requirements. They offer features ranging from advanced indexing and data types to geospatial capabilities and analytics, allowing you to significantly expand the native capabilities of Postgres.

Some of the more popular Postgres extensions include:

- **PostGIS**: Adds support for geographic objects, turning PostgreSQL into a spatial database.
- **pg_stat_statements**: Tracks execution statistics of all SQL queries for performance tuning.
- **pg_partman**: Simplifies partition management, making it easier to maintain time-based or serial-based table partitions.
- **pg_trgm**: Provides fast similarity search using trigrams, ideal for full-text search.
- **hstore**: Implements key-value pairs for semi-structured data storage.
## Build your AI applications with Postgres

**Why pay for a specialized vector database service when you can just use Postgres?**

Neon supports the [pgvector](https://neon.com/docs/extensions/pgvector) Postgres extension for storing and retrieving vector embeddings within your Postgres database. This feature is essential for building next-generation AI applications, enabling operations like fast and accurate similarity search, information retrieval, and recommendation systems directly in Postgres. Why pay for or add the complexity of a specialized vector database service when you have leading-edge capabilities in Postgres?

Neon's own **Ask Neon AI** chat, built in collaboration with [InKeep](https://inkeep.com/), uses Neon with [pgvector](https://neon.com/docs/extensions/pgvector). For more, see [Powering next gen AI apps with Postgres](https://neon.com/docs/ai/ai-intro).

## Database DevOps with Neon's CLI, API, and GitHub Actions

**Neon is built for DevOps. Use our CLI, API, or GitHub Actions to build your CI/CD pipelines.**

- **Neon CLI**

  With the [Neon CLI](https://neon.com/docs/reference/neon-cli), you can integrate Neon with development tools and CI/CD pipelines to enhance your development workflows, reducing the friction associated with database-related operations like creating projects, databases, and branches. Once you have your connection string, you can manage your entire Neon database from the command line. This makes it possible to quickly set up deployment pipelines using GitHub Actions, GitLab CI/CD, or Vercel Preview Environments. These operations and pipelines can also be treated as code and live alongside your applications as they evolve and mature.

  ```bash
  neon branches create --name feature/user-auth
  ```

- **Neon API**

  The [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) is a REST API that enables you to manage your Neon projects programmatically. It provides resource-oriented URLs, accepts request bodies, returns JSON responses, and uses standard HTTP response codes. This API allows for a wide range of operations, enabling automated management of various aspects of Neon, including projects, branches, computes, databases, and roles. Like the Neon CLI, you can use the Neon API for seamless integration of Neon's capabilities into automated workflows, CI/CD pipelines, and developer tools. Give it a try using our [interactive Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).
  ```bash
  curl --request POST \
       --url https://console.neon.tech/api/v2/projects/ancient-rice-43775340/branches \
       --header 'accept: application/json' \
       --header "authorization: Bearer $NEON_API_KEY" \
       --header 'content-type: application/json' \
       --data '
  {
    "branch": {
      "name": "dev/alex"
    },
    "endpoints": [
      {
        "type": "read_write"
      }
    ]
  }
  '
  ```

- **GitHub Actions**

  Neon provides GitHub Actions for working with database branches, which you can add to your CI workflows. To learn more, see [Automate branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions).

  ```yaml
  name: Create Neon Branch with GitHub Actions Demo
  run-name: Create a Neon Branch 🚀
  jobs:
    Create-Neon-Branch:
      runs-on: ubuntu-latest
      steps:
        - uses: neondatabase/create-branch-action@v5
          id: create-branch
          with:
            project_id: rapid-haze-373089
            # optional (defaults to your project's default branch)
            parent: dev
            # optional (defaults to neondb)
            database: my-database
            branch_name: from_action_reusable
            username: db_user_for_url
            api_key: ${{ secrets.NEON_API_KEY }}
        - run: echo db_url ${{ steps.create-branch.outputs.db_url }}
        - run: echo host ${{ steps.create-branch.outputs.host }}
        - run: echo branch_id ${{ steps.create-branch.outputs.branch_id }}
  ```

---

# Source: https://neon.com/llms/get-started-frameworks.txt

# Neon framework guides

> The Neon framework guides document outlines the steps for integrating various frameworks with Neon, enabling users to efficiently connect and manage their applications within the Neon ecosystem.

## Source

- [Neon framework guides HTML](https://neon.com/docs/get-started/frameworks): The original HTML version of this documentation

- [Node.js](https://neon.com/docs/guides/node): Connect a Node.js application to Neon
- [Next.js](https://neon.com/docs/guides/nextjs): Connect a Next.js application to Neon
- [NestJS](https://neon.com/docs/guides/nestjs): Connect a NestJS application to Neon
- [Astro](https://neon.com/docs/guides/astro): Connect an Astro site or app to Neon
- [Django](https://neon.com/docs/guides/django): Connect a Django application to Neon
- [Entity Framework](https://neon.com/docs/guides/dotnet-entity-framework): Connect a Dotnet Entity Framework application to Neon
- [Express](https://neon.com/docs/guides/express): Connect an Express application to Neon
- [Hono](https://neon.com/docs/guides/hono): Connect a Hono application to Neon
- [Laravel](https://neon.com/docs/guides/laravel): Connect a Laravel application to Neon
- [Micronaut Kotlin](https://neon.com/docs/guides/micronaut-kotlin): Connect a Micronaut Kotlin application to Neon
- [Nuxt](https://neon.com/docs/guides/nuxt): Connect a Nuxt application to Neon
- [OAuth](https://neon.com/docs/guides/oauth-integration): Integrate with Neon using OAuth
- [Phoenix](https://neon.com/docs/guides/phoenix): Connect a Phoenix site or app to Neon
- [Quarkus](https://neon.com/docs/guides/quarkus-jdbc): Connect Quarkus (JDBC) to Neon
- [Quarkus](https://neon.com/docs/guides/quarkus-reactive): Connect Quarkus (Reactive) to Neon
- [React](https://neon.com/docs/guides/react): Connect a React application to Neon
- [RedwoodSDK](https://neon.com/docs/guides/redwoodsdk): Connect a RedwoodSDK application to Neon
- [Reflex](https://neon.com/docs/guides/reflex): Build Python Apps with Reflex and Neon
- [Remix](https://neon.com/docs/guides/remix): Connect a Remix application to Neon
- [Ruby on Rails](https://neon.com/docs/guides/ruby-on-rails): Connect a Ruby on Rails application to Neon
- [Symfony](https://neon.com/docs/guides/symfony): Connect from Symfony with Doctrine to Neon
- [SolidStart](https://neon.com/docs/guides/solid-start): Connect a SolidStart site or app to Neon
- [SQLAlchemy](https://neon.com/docs/guides/sqlalchemy): Connect a SQLAlchemy application to Neon
- [Sveltekit](https://neon.com/docs/guides/sveltekit): Connect a Sveltekit application to Neon
- [Vue](https://neon.com/docs/guides/vue): Connect a Vue.js application to Neon

---

# Source: https://neon.com/llms/get-started-languages.txt

# Neon language guides

> The Neon language guides document outlines the steps for integrating Neon with various programming languages, detailing configuration and connection procedures specific to each language.

## Source

- [Neon language guides HTML](https://neon.com/docs/get-started/languages): The original HTML version of this documentation

- [.NET](https://neon.com/docs/guides/dotnet-npgsql): Connect a .NET (C#) application to Neon
- [Elixir](https://neon.com/docs/guides/elixir): Connect an Elixir application to Neon
- [Go](https://neon.com/docs/guides/go): Connect a Go application to Neon
- [Java](https://neon.com/docs/guides/java): Connect a Java application to Neon
- [JavaScript](https://neon.com/docs/guides/javascript): Connect a JavaScript application to Neon
- [Python](https://neon.com/docs/guides/python): Connect a Python application to Neon
- [Rust](https://neon.com/docs/guides/rust): Connect a Rust application to Neon

---

# Source: https://neon.com/llms/get-started-orms.txt

# Neon ORM guides

> The Neon ORM guides document outlines the steps for integrating various Object-Relational Mappers (ORMs) with Neon databases, detailing configuration and connection processes specific to Neon's environment.

## Source

- [Neon ORM guides HTML](https://neon.com/docs/get-started/orms): The original HTML version of this documentation

- [Django](https://neon.com/docs/guides/django): Connect a Django application to Neon
- [Drizzle](https://neon.com/docs/guides/drizzle): Learn how to use Drizzle ORM with your Neon Postgres database (Drizzle docs)
- [Elixir Ecto](https://neon.com/docs/guides/elixir-ecto): Connect from Elixir with Ecto to Neon
- [Laravel](https://neon.com/docs/guides/laravel): Connect a Laravel application to Neon
- [Prisma](https://neon.com/docs/guides/prisma): Learn how to connect from Prisma ORM to your Neon Postgres database
- [Rails](https://neon.com/docs/guides/ruby-on-rails): Connect a Rails application to Neon
- [SQLAlchemy](https://neon.com/docs/guides/sqlalchemy): Connect a SQLAlchemy application to Neon
- [TypeORM](https://neon.com/docs/guides/typeorm): Connect a TypeORM application to Neon

---

# Source: https://neon.com/llms/get-started-production-checklist.txt

# Getting ready for production

> The "Getting ready for production" document outlines a checklist for Neon users to prepare their databases for production environments, covering essential steps such as configuring backups, monitoring, and security settings.

## Source

- [Getting ready for production HTML](https://neon.com/docs/get-started/production-checklist): The original HTML version of this documentation

## Production checklist

- [ ] [1. Set a compute size that can handle production traffic](https://neon.com/docs/get-started/production-checklist#set-a-compute-size-that-can-handle-production-traffic) Make sure your default branch can handle production traffic. A higher minimum compute can help you avoid performance bottlenecks.
- [ ] [2. Enable autoscaling to handle usage spikes](https://neon.com/docs/get-started/production-checklist#enable-autoscaling-to-handle-usage-spikes) Set your compute to automatically scale up, allowing your app to handle traffic surges and stay performant without manual scaling.
- [ ] [3. Disable scale to zero](https://neon.com/docs/get-started/production-checklist#disable-scale-to-zero) Scale to zero turns off your compute after a period of inactivity. Ideal for development or other environments with bursty usage.
- [ ] [4. Use a pooled connection](https://neon.com/docs/get-started/production-checklist#use-a-pooled-connection) Increase your database's ability to handle concurrent connections by using connection pooling.
- [ ] [5. Increase your project's restore window to 7 days](https://neon.com/docs/get-started/production-checklist#increase-your-projects-restore-window-to-7-days) Protect your production data from accidental loss. Keep at least a 7-day restore window for quick data recovery and analysis.
- [ ] [6. Restrict database access to trusted IPs](https://neon.com/docs/get-started/production-checklist#restrict-database-access-to-trusted-ips) Secure your database by limiting connections to trusted IP addresses.
- [ ] [7. Set up metrics export](https://neon.com/docs/get-started/production-checklist#set-up-metrics-export) Export Neon metrics to Datadog or any OTEL-compatible platform like Grafana Cloud or New Relic to centralize database monitoring with your existing observability stack.
- [ ] [8. Install pg_stat_statements](https://neon.com/docs/get-started/production-checklist#install-pgstatstatements) Enable query performance monitoring to track execution times and frequency.
- [ ] [9. Ensure your app reconnects after your database restarts](https://neon.com/docs/get-started/production-checklist#ensure-your-app-reconnects-after-your-database-restarts) Verify your application handles compute restarts gracefully.
- [ ] [10. Upgrade to get priority support](https://neon.com/docs/get-started/production-checklist#upgrade-to-a-neon-business-scale-for-priority-support) Get support for your production database with a Scale plan.
- [ ] [11. Advanced: Set up cross-region replication](https://neon.com/docs/get-started/production-checklist#advanced-set-up-cross-region-replication) For added resilience, replicate your data to a Neon project in another region. This helps prepare for regional outages, making it possible to failover to a copy of your database in a different region, if necessary.

## Set a compute size that can handle production traffic

Before your application goes to production, make sure your database has enough vCPU and memory to handle expected production load. See [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).

**Recommendation**

We recommend that you **fit your data in memory** and use Neon **autoscaling**:

- Start with a compute size that can hold all your data in memory. Or try to fit at least your most frequently accessed data (your [working set](https://neon.com/docs/reference/glossary#working-set)).
- Once you determine the [right size](https://neon.com/docs/manage/computes#how-to-size-your-compute) for your compute, use that as the **minimum compute size** for [Autoscaling](https://neon.com/docs/get-started/production-checklist#set-maximum-compute-to-highest-cu).

**About compute size**

A Compute Unit (CU) in Neon measures the processing power or "size" of a Neon compute. One CU includes 1 vCPU and 4 GB of RAM. Neon computes can range from **0.25** CUs to **56** CUs, depending on your [Neon plan](https://neon.com/docs/introduction/plans).
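If you prefer to manage compute size programmatically rather than from the console, the Neon API exposes compute size as autoscaling limits on the endpoint object. A sketch with `curl`, assuming the `autoscaling_limit_min_cu` and `autoscaling_limit_max_cu` fields and using placeholder project and endpoint IDs (confirm the exact fields against the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api)):

```bash
# Set minimum and maximum compute size (in CUs) for a compute endpoint
# ($PROJECT_ID and $ENDPOINT_ID are placeholders)
curl --request PATCH \
     --url "https://console.neon.tech/api/v2/projects/$PROJECT_ID/endpoints/$ENDPOINT_ID" \
     --header "authorization: Bearer $NEON_API_KEY" \
     --header "content-type: application/json" \
     --data '{"endpoint": {"autoscaling_limit_min_cu": 2, "autoscaling_limit_max_cu": 8}}'
```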
## Enable autoscaling to handle usage spikes

Use Neon's [autoscaling](https://neon.com/docs/guides/autoscaling-algorithm) feature to dynamically adjust your compute resources based on your current workload. This means you don't need to scale manually during traffic surges.

**Recommendation**

- **Minimum compute size**: Autoscaling works best if your data can be fully cached in memory.
- **Maximum compute size**: Set this to as high a limit as your plan allows. You only pay for what you use.

To get started with Autoscaling, read:

- [Enable Autoscaling in Neon](https://neon.com/docs/guides/autoscaling-guide)
- [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute), including the [Autoscaling considerations](https://neon.com/docs/manage/computes#autoscaling-considerations) section.

## Disable scale to zero

Scale to zero turns off your compute after a period of inactivity. Ideal for development or other environments with bursty usage.

**Recommendation**

Disable scale to zero for production workloads. This ensures your compute is always active, preventing delays and session context resets caused by cold starts.

**Session and latency considerations**

By default, your compute scales to zero after 5 minutes. Restarts are nearly instant, but there may still be some latency (around 500 milliseconds depending on the region). Restarts will reset your session context, affecting in-memory statistics and temporary tables. While typical production loads might never go idle long enough to scale to zero, disabling this feature prevents any possible issues from cold starts or session loss.

Disabling scale to zero is available on paid plans only. See [Configuring Scale to Zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide) for more detail.

## Use a pooled connection

Connection pooling with [PgBouncer](https://www.pgbouncer.org/) allows your database to handle up to 10,000 concurrent connections, reducing connection overhead and improving performance.

**Recommendation**

For production environments, enable connection pooling. This increases the number of simultaneous connections your database can handle and optimizes resource usage.

**Connection details**

Use a pooled connection string by adding `-pooler` to your endpoint ID, or simply copy the pooled connection string from the **Connect** widget in your **Project Dashboard**. Use this string as your database connection in your application's environment variables. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

Example connection string:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

## Increase your project's restore window to 7 days

Neon retains a history of changes for all branches, enabling instant restore and time travel queries. This history acts as a backup strategy, allowing recovery of lost data and viewing past database states.

**Recommendation**

Set your restore window to 7 days to ensure data integrity and quick recovery.

**Restore window details**

By default, Neon's restore window is set to **1 day**. Extending it to 7 days helps protect you against data loss, letting you recover from human or application errors that may go unnoticed for days.
It can also help you comply with any industry regulations that need longer retention periods. While a longer restore window can increase storage costs, it provides extra security and recoverability for production data. For more information, see [Instant restore](https://neon.com/docs/introduction/branch-restore).

## Restrict database access to trusted IPs

Neon's IP Allow feature ensures that only trusted IP addresses can connect to your database, preventing unauthorized access and enhancing security.

**Recommendation**

Combine an allowlist with protected branches for enhanced security. This setup ensures that only trusted IPs can access critical data, reducing the risk of unauthorized access and safeguarding data integrity.

**Configuration details**

- **IP Allow**: Restricts access to specific, trusted IP addresses, preventing unauthorized connections.
- **Protected branch**: Safeguards critical data from accidental deletions or modifications by designating branches as protected.

You can configure **IP Allow** and protected branches in your Neon project's settings. For more information, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow) and [Protected branches guide](https://neon.com/docs/guides/protected-branches).

## Set up metrics export

Export Neon metrics to your preferred observability platform and centralize your database monitoring with your existing stack.

**Recommendation**

Set up integration with your observability platform to monitor and set alerts for key metrics:

- Connection counts (active and idle database connections)
- Database size (total size of all databases)
- Replication delay (lag in bytes and seconds)
- Compute metrics (CPU and memory usage)

**Available integrations:**

- **[Grafana Cloud](https://neon.com/docs/guides/grafana-cloud)**: Native OTLP integration with automatic routing to Mimir, Loki, and Tempo
- **[Datadog](https://neon.com/docs/guides/datadog)**: Direct integration with Datadog's observability platform
- **[OpenTelemetry](https://neon.com/docs/guides/opentelemetry)**: Export to any OTLP-compatible platform, including self-hosted Grafana and Tempo

Choose the platform that best fits your existing monitoring infrastructure.

## Ensure your app reconnects after your database restarts

Verify your application handles compute restarts gracefully. Neon occasionally restarts computes for updates and maintenance.

**Recommendation**

Most database drivers and connection pools handle reconnection automatically, but it's important to test this behavior. You can use the Neon API to trigger a restart and watch your application reconnect:

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/your_project_id/endpoints/your_endpoint_id/restart \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY"
```

See [Restart compute endpoint](https://api-docs.neon.tech/reference/restartprojectendpoint) for details.

For more information:

- [Build connection timeout handling into your application](https://neon.com/docs/connect/connection-latency#build-connection-timeout-handling-into-your-application)
- [Maintenance & updates overview](https://neon.com/docs/manage/maintenance-updates-overview)
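One simple way to observe the recovery window from a shell: trigger the restart as shown above, then poll with `psql` until connections succeed again (this sketch assumes `DATABASE_URL` points at your database):

```bash
# Poll until the restarted compute accepts connections again
until psql "$DATABASE_URL" -c "SELECT 1;" >/dev/null 2>&1; do
  echo "waiting for compute to accept connections..."
  sleep 1
done
echo "reconnected"
```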
## Install pg_stat_statements

Enable query performance monitoring to track execution times and frequency.

**Recommendation**

Install the `pg_stat_statements` extension to monitor query performance and identify potential bottlenecks.

**Usage**

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

Gathering these statistics adds little overhead and lets you quickly access metrics like:

- [Most frequently executed queries](https://neon.com/docs/postgresql/query-performance#most-frequently-executed-queries)
- [Longest running queries](https://neon.com/docs/postgresql/query-performance#long-running-queries)
- [Queries that return the most rows](https://neon.com/docs/postgresql/query-performance#queries-that-return-the-most-rows)

You can also use the **Monitoring Dashboard** in the Neon Console to view live graphs for system and database metrics like CPU, RAM, and connections. For more information, see [Query performance](https://neon.com/docs/postgresql/query-performance) and [Monitoring](https://neon.com/docs/introduction/monitoring).

## Upgrade to a Neon Scale plan for priority support

Scale plan customers can open support tickets with the Neon Support team.

**Recommendation**

Upgrade to a [Scale plan](https://neon.com/docs/introduction/plans) to get both [priority support](https://neon.com/docs/introduction/support#prioritized-support-tickets) and access to SLAs. For more information, see the [Support documentation](https://neon.com/docs/introduction/support).

## Advanced: Set up cross-region replication

Cross-region replication can provide an added layer of resilience for production environments. It allows you to replicate data from one Neon project to another in a different region — helping you prepare for unlikely regional outages or implement failover strategies.

**Recommendation**

Set up cross-region replication if your app requires high availability across regions or if you're building a disaster recovery plan.

**How it works**

Neon uses [logical replication](https://neon.com/docs/guides/logical-replication-neon-to-neon) to replicate data between Neon projects. You can replicate from a source project in one region to a destination project in another region, creating a near real-time copy of your data.

**Steps to get started**

- Set up a publication on your source database
- Create matching tables and a subscription on your destination database
- Test the replication and monitor for consistency

For full details, see [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon).

---

# Source: https://neon.com/llms/get-started-production-readiness.txt

# Production readiness with Neon

> The document "Production readiness with Neon" outlines the necessary steps and considerations for preparing Neon databases for production environments, including configuration, monitoring, and scaling practices specific to Neon's infrastructure.

## Source

- [Production readiness with Neon HTML](https://neon.com/docs/get-started/production-readiness): The original HTML version of this documentation

Learn how autoscaling, scale to zero, Neon's storage architecture, change data capture, read replicas, and support for thousands of connections can improve performance, reliability, and efficiency for your production environments.

## Autoscaling

**Automatically scale to meet demand.**

Neon's autoscaling feature automatically and transparently scales up compute resources on demand in response to your application workload and scales down during periods of inactivity.

What does this mean for you?

- **You are always ready for an increased load**. Enable autoscaling and stop worrying about occasional traffic spikes.
- **You can stop paying for compute resources that you only use sometimes**. You no longer have to run a maximum potential load configuration at all times.
- **No more manual scaling disruptions**. With autoscaling, you can focus more on your application and less on managing infrastructure.

To learn more, see our [Autoscaling](https://neon.com/docs/introduction/autoscaling) guide.

## Scale to zero

**Stop paying for idle databases.**

Neon's _Scale to zero_ feature automatically transitions a Neon compute (where Postgres runs) to an idle state when it is not being used, effectively scaling it to zero to minimize compute usage and costs.

**Why do you need a database that scales to zero?**

Combined with Neon's branching capability, scale to zero allows you to instantly spin up databases for development, experimentation, or testing without the typical costs associated with "always-running" databases with relatively little usage. This approach is ideal for various scenarios:

- **Non-production databases**: Development, staging, and testing environments benefit as developers can work on multiple instances without cost concerns since these databases only use resources when active.
- **Internal apps**: These apps often experience downtime during off-hours or holidays. Scale to zero ensures that supporting databases pause during inactivity, cutting costs without affecting usage during active periods.
- **Small projects**: Implementing scale to zero for these projects' databases enhances cost efficiency without significantly impacting user experience.

Learn more about [why you want a database that scales to zero](https://neon.com/blog/why-you-want-a-database-that-scales-to-zero).

## A storage architecture built for the cloud

**Efficient, performant, reliable storage**

Neon's storage was built for high availability and durability. Every transaction is stored in multiple copies across availability zones and cloud object storage. Efficiency and performance are achieved through a multi-tier architecture designed to balance latency, throughput, and cost considerations.

Neon storage is architected to integrate storage, backups, and archiving into one system to reduce operational headaches and administrative overhead associated with checkpoints, data backups, and restore.

Neon uses cloud-based object storage solutions, such as Amazon S3, to relocate less frequently accessed data to the most cost-efficient storage option. For your most frequently accessed data, which requires rapid access and high throughput, Neon uses locally attached SSDs to ensure high performance and low latency. The entire Neon storage framework is developed in Rust for maximum performance and usability.

Read about [how we scale an open source, multi-tenant storage engine for Postgres written in Rust](https://neon.com/blog/how-we-scale-an-open-source-multi-tenant-storage-engine-for-postgres-written-rust), or [take a deep dive into the Neon storage engine](https://neon.com/blog/get-page-at-lsn) with Neon co-founder, Heikki Linnakangas.

## Change Data Capture (CDC) with Logical Replication

**Stream your data to external data platforms and services.**

Neon's Logical Replication feature enables replicating data from your Neon database to external destinations, allowing for Change Data Capture (CDC) and real-time analytics. Stream your data to data warehouses, analytical database services, messaging platforms, event-streaming platforms, external Postgres databases, and more.
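Under the hood this uses standard Postgres logical replication primitives: a publication on the Neon source and a subscription on the destination. A minimal sketch via `psql`; the publication and subscription names are illustrative, the connection-string placeholder is left for you to fill in, and the destination must already have matching tables:

```bash
# On the Neon source database: publish the tables you want to stream
psql "$NEON_SOURCE_URL" -c "CREATE PUBLICATION my_publication FOR ALL TABLES;"

# On the destination database: subscribe to the publication
psql "$DESTINATION_URL" -c "CREATE SUBSCRIPTION my_subscription \
  CONNECTION '<neon-source-connection-string>' \
  PUBLICATION my_publication;"
```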
To learn more, see [Get started with logical replication](https://neon.com/docs/guides/logical-replication-guide).

You can also replicate data to Neon from other Postgres platforms and instances. See our [logical replication migration guides](https://neon.com/docs/guides/logical-replication-guide#replicate-data-to-neon) to get started.

## Scale with read replicas

**Add read replicas to achieve instant scale.**

Neon supports read replicas that let you instantly scale your application by offloading read-only workloads from your primary read-write compute.

Create a read replica with the Neon CLI:

```bash
neon branches create --name my_read_replica_branch --type read_only
```

To learn more, see [Read replicas](https://neon.com/docs/introduction/read-replicas).

## Support for thousands of connections

**Add support for thousands of concurrent connections with a pooled connection string.**

Neon's [connection pooling](https://neon.com/docs/connect/connection-pooling) feature supports up to 10,000 concurrent connections. Connection pooling works by caching and reusing database connections, which helps to significantly optimize resource usage and enhance performance. It reduces the overhead associated with establishing new connections and closing old ones, allowing applications to handle a higher volume of requests more efficiently. Neon uses [PgBouncer](https://www.pgbouncer.org/) to support connection pooling.

Enabling connection pooling is easy. Just grab a pooled connection string from the console:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

## More Neon features

For an overview of all the features that Neon supports, including security features, visit [Detailed Plan Comparison](https://neon.com/pricing#plans) on the [Neon Pricing](https://neon.com/pricing) page.

---

# Source: https://neon.com/llms/get-started-query-with-neon-sql-editor.txt

# Query with Neon's SQL Editor

> The document "Query with Neon's SQL Editor" guides users on executing SQL queries using Neon's integrated SQL Editor, detailing steps for accessing the editor, running queries, and managing query results within the Neon platform.

## Source

- [Query with Neon's SQL Editor HTML](https://neon.com/docs/get-started/query-with-neon-sql-editor): The original HTML version of this documentation

The Neon SQL Editor allows you to run queries on your Neon databases directly from the Neon Console. In addition, the editor keeps a query history, permits saving queries, and provides [**Explain**](https://www.postgresql.org/docs/current/sql-explain.html) and [**Analyze**](https://www.postgresql.org/docs/current/using-explain.html#USING-EXPLAIN-ANALYZE) features.

To use the SQL Editor:

1. Navigate to the [Neon Console](https://console.neon.tech/).
2. Select your project.
3. Select **SQL Editor**.
4. Select a branch and database.
5. Enter a query into the editor and click **Run** to view the results.

You can use the following query to try the SQL Editor. The query creates a table, adds data, and retrieves the data from the table.

```sql
CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL);
INSERT INTO playing_with_neon(name, value)
SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i);
SELECT * FROM playing_with_neon;
```

Running multiple query statements at once returns a separate result set for each statement.
The result sets are displayed in separate tabs, numbered in order of execution. To clear the editor, click **New Query**.

**Tip**: When querying objects such as tables and columns with upper case letters in their name, remember to enclose the identifier name in quotes. For example: `SELECT * FROM "Company"`. Postgres changes identifier names to lower case unless they are quoted. The same applies when creating objects in Postgres. For example, `CREATE TABLE DEPARTMENT(id INT)` creates a table named `department` in Postgres. For more information about how quoted and unquoted identifiers are treated by Postgres, see [Identifiers and Key Words](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS), in the _PostgreSQL documentation_.

## Save your queries

The SQL Editor allows you to save your queries.

To save a query:

1. Enter the query into the editor.
2. Click **Save** to open the **SAVE QUERY** dialog.
3. Enter a name for the query and click **Save**.

The query is added to the **Saved** list in the left pane of the SQL Editor. You can rerun a query by selecting it from the **Saved** list. You can rename or delete a saved query by selecting **Rename** or **Delete** from the more options menu associated with the saved query.

## View the query history

The SQL Editor maintains a query history for the project. To view your query history, select **History** in the left pane of the SQL Editor. You can click an item in the **History** list to view the query that was run.

**Note**: Queries saved to **History** are limited to 9 KB in length. While you can execute longer queries from the SQL Editor, any query exceeding 9 KB will be truncated when saved. A `-- QUERY TRUNCATED` comment is added at the beginning of these queries to indicate truncation. Additionally, if you input a query longer than 9 KB in the Neon SQL Editor, a warning similar to the following will appear: `This query will still run, but the last 1234 characters will be truncated from query history`.

## Explain and Analyze

The Neon SQL Editor provides **Explain** and **Analyze** features.

- The **Explain** feature runs the specified query with the Postgres [EXPLAIN](https://www.postgresql.org/docs/current/sql-explain.html) command, which returns the execution plan for the query. The **Explain** feature only returns a plan with estimates. It does not execute the query.
- The **Analyze** feature runs the specified query with [EXPLAIN ANALYZE](https://www.postgresql.org/docs/current/using-explain.html#USING-EXPLAIN-ANALYZE). The `ANALYZE` parameter causes the query to be executed and returns actual row counts and run times for plan nodes along with the `EXPLAIN` estimates.

Understanding the information provided by the **Explain** and **Analyze** features requires familiarity with the Postgres [EXPLAIN](https://www.postgresql.org/docs/current/sql-explain.html) command and its `ANALYZE` parameter. Refer to the [EXPLAIN](https://www.postgresql.org/docs/current/sql-explain.html) documentation and the [Using EXPLAIN](https://www.postgresql.org/docs/current/using-explain.html) topic in the _PostgreSQL documentation_.
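These buttons map to plain SQL, so you can run the same commands from any client. For example, against the `playing_with_neon` sample table created above (assuming `DATABASE_URL` is set in your shell):

```bash
# EXPLAIN returns the plan with estimates only; EXPLAIN ANALYZE also executes
# the query and reports actual row counts and timings
psql "$DATABASE_URL" -c "EXPLAIN ANALYZE SELECT * FROM playing_with_neon WHERE value > 0.5;"
```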
## Time Travel

You can toggle Time Travel in the SQL Editor to switch from querying your current data to querying against a selected point within your [restore window](https://neon.com/docs/manage/projects#configure-restore-window).

For more details about using Time Travel queries, see:

- [Time Travel](https://neon.com/docs/guides/time-travel-assist)
- [Time Travel tutorial](https://neon.com/docs/guides/time-travel-tutorial)

## Export data to CSV, JSON, and XLSX

The Neon SQL Editor supports exporting your data to `JSON`, `CSV`, and `XLSX`. You can access the download button from the bottom right corner of the **SQL Editor** page. The download button only appears when there is a result set to download.

## Expand results section of the SQL Editor window

You can expand the results section of the SQL Editor window by selecting the expand window button from the bottom right corner of the **SQL Editor** page. There must be query results to display, otherwise the expanded results section will appear blank.

## Meta-commands

The Neon SQL Editor supports using Postgres meta-commands, which act like shortcuts for interacting with your database. If you are already familiar with using meta-commands from the `psql` command-line interface, you can use many of those same commands in the SQL Editor.

### Benefits of meta-commands

Meta-commands can significantly speed up your workflow by providing quick access to database schemas and other critical information without needing to write full SQL queries. They are especially useful for database management tasks, making it easier to handle administrative duties directly from the Neon Console.

### Available meta-commands

Here are some of the meta-commands that you can use within the Neon SQL Editor:

- `\dt` — List all tables in the current database.
- `\d [table_name]` — Describe a table's structure.
- `\l` — List all databases.
- `\?` — A cheat sheet of available meta-commands.
- `\h [NAME]` — Get help for any Postgres command. For example, try `\h SELECT`.

Note that not all meta-commands are supported in the SQL Editor. To get a list of supported commands, use `\?`.
Details: Example of supported commands

```bash
Informational (options: S = show system objects, + = additional detail)
\d[S+]                 list tables, views, and sequences
\d[S+] NAME            describe table, view, sequence, or index
\da[S] [PATTERN]       list aggregates
\dA[+] [PATTERN]       list access methods
\dAc[+] [AMPTRN [TYPEPTRN]]  list operator classes
\dAf[+] [AMPTRN [TYPEPTRN]]  list operator families
\dAo[+] [AMPTRN [OPFPTRN]]   list operators of operator families
\dAp[+] [AMPTRN [OPFPTRN]]   list support functions of operator families
\db[+] [PATTERN]       list tablespaces
\dc[S+] [PATTERN]      list conversions
\dconfig[+] [PATTERN]  list configuration parameters
\dC[+] [PATTERN]       list casts
\dd[S] [PATTERN]       show object descriptions not displayed elsewhere
\dD[S+] [PATTERN]      list domains
\ddp [PATTERN]         list default privileges
\dE[S+] [PATTERN]      list foreign tables
\des[+] [PATTERN]      list foreign servers
\det[+] [PATTERN]      list foreign tables
\deu[+] [PATTERN]      list user mappings
\dew[+] [PATTERN]      list foreign-data wrappers
\df[anptw][S+] [FUNCPTRN [TYPEPTRN ...]]  list [only agg/normal/procedure/trigger/window] functions
\dF[+] [PATTERN]       list text search configurations
\dFd[+] [PATTERN]      list text search dictionaries
\dFp[+] [PATTERN]      list text search parsers
\dFt[+] [PATTERN]      list text search templates
\dg[S+] [PATTERN]      list roles
\di[S+] [PATTERN]      list indexes
\dl[+]                 list large objects, same as \lo_list
\dL[S+] [PATTERN]      list procedural languages
\dm[S+] [PATTERN]      list materialized views
\dn[S+] [PATTERN]      list schemas
\do[S+] [OPPTRN [TYPEPTRN [TYPEPTRN]]]  list operators
\dO[S+] [PATTERN]      list collations
\dp[S] [PATTERN]       list table, view, and sequence access privileges
\dP[itn+] [PATTERN]    list [only index/table] partitioned relations [n=nested]
\drds [ROLEPTRN [DBPTRN]]  list per-database role settings
\drg[S] [PATTERN]      list role grants
\dRp[+] [PATTERN]      list replication publications
\dRs[+] [PATTERN]      list replication subscriptions
\ds[S+] [PATTERN]      list sequences
\dt[S+] [PATTERN]      list tables
\dT[S+] [PATTERN]      list data types
\du[S+] [PATTERN]      list roles
\dv[S+] [PATTERN]      list views
\dx[+] [PATTERN]       list extensions
\dX [PATTERN]          list extended statistics
\dy[+] [PATTERN]       list event triggers
\l[+] [PATTERN]        list databases
\lo_list[+]            list large objects
\sf[+] FUNCNAME        show a function's definition
\sv[+] VIEWNAME        show a view's definition
\z[S] [PATTERN]        same as \dp
```

For more information about meta-commands, see [PostgreSQL Meta-Commands](https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMANDS).

### How to use meta-commands

To use a meta-command in the SQL Editor:

1. Enter the meta-command in the editor, just like you would a SQL query.
2. Press **Run**.

The result of the meta-command will be displayed in the output pane, similar to how SQL query results are shown. For example, running `\d playing_with_neon` displays the schema of the `playing_with_neon` table we created above.

## AI features

The Neon SQL Editor offers three AI-driven features:

- **SQL generation**: Easily convert natural language requests to SQL. Press the ✨ button or **Cmd/Ctrl+Shift+M**, type your request, and the AI assistant will generate the corresponding SQL for you. It's schema-aware, meaning you can reference any table names, functions, or other objects in your schema.
- **Fix with AI**: If your query returns an error, simply click **Fix with AI** next to the error message. The AI assistant will analyze the error, suggest a fix, and update the SQL Editor so you can run the query again.
- **AI-generated query names**: Descriptive names are automatically assigned to your queries in the Neon SQL Editor's **History**. This feature helps you quickly identify and reuse previously executed queries.

**Important**: To enhance your experience with the Neon SQL Editor's AI features, we share your database schema with the AI agent. No actual data is shared. We currently use AWS Bedrock as our LLM provider, ensuring all requests remain within AWS's secure infrastructure where other Neon resources are also managed.

_There is a maximum limit of 5 AI requests every 60 seconds._

---

# Source: https://neon.com/llms/get-started-signing-up.txt

# Learn the basics

> This document guides users through the process of signing up for Neon, detailing the steps required to create an account and access Neon's database services.

## Source

- [Learn the basics HTML](https://neon.com/docs/get-started/signing-up): The original HTML version of this documentation

What you will learn:

- How to view and modify data in the console
- Create an isolated database copy per developer
- Reset your branch to production when ready to start new work

Related topics:

- [About branching](https://neon.com/docs/introduction/branching)
- [Branching workflows](https://neon.com/docs/get-started/workflow-primer)
- [Connect Neon to your stack](https://neon.com/docs/get-started/connect-neon)

This tutorial walks you through your first steps using Neon as your Postgres database. You'll explore the Neon object hierarchy and learn how database branching can simplify your development workflow.

## About branching

Each [branch](https://neon.com/docs/introduction/branching) is a fully-isolated copy of its parent. We suggest creating a long-term branch for each developer on your team to maintain consistent connection strings. You can reset your development branch to production whenever needed.

After signing up, you'll start with two branches:

- A `production` branch (the default branch) intended for your production workload, configured with a larger compute size (1-4 CU)
- A `development` branch (created as a child of production) that you can use for local development, configured with a smaller compute size (0.25-1 CU)

You can change these sizes at any time, but these are meant to align with typical usage, where production will need more compute than your less active development branches.

## Sign up
If you're already signed up or coming to Neon from **Azure**, you can skip ahead to [Step 2](https://neon.com/docs/get-started/signing-up#step-2-onboarding-in-the-neon-console). If you haven't signed up yet, you can sign up for free here: [https://console.neon.tech/signup](https://console.neon.tech/signup) Sign up with your email, GitHub, Google, or other partner account. For information about what's included with the Free and paid plans, see [Neon plans](https://neon.com/docs/introduction/plans).
## Onboarding in the Neon Console

After you sign up, you are guided through some onboarding steps that ask you to create a **Project**. The steps should be self-explanatory, but it's important to understand a few key points:

- **In Neon, everything starts with the _Project_**

  It is the top-level container that holds your branches, databases, and roles. Typically, you should create a project for each repository in your application. This allows you to manage your database branches just like you manage your code branches: a branch for production, staging, development, new features, previews, and so forth.

- **We create two branches for you**

  - `production` is the default (primary) branch and hosts your database, role, and a compute that you can connect your application to
  - `development` is created as a child branch of production for your development work

At this point, if you want to just get started connecting Neon to your toolchain, go to [Day 2 - Connecting Neon to your tools](https://neon.com/docs/get-started/connect-neon). Or if you want a more detailed walkthrough of some of our key console and branching features, let's keep going.

## Add sample data

Let's get familiar with the **SQL Editor**, where you can run queries against your databases directly from the Neon Console, as well as access more advanced features like [Time Travel](https://neon.com/docs/guides/time-travel-assist) and [Explain and Analyze](https://neon.com/docs/get-started/query-with-neon-sql-editor#explain-and-analyze).

From the Neon Console, use the sidebar navigation to open the **SQL Editor** page. Notice that your default branch `production` is already selected, along with the database created during onboarding, `neondb`.

The first time you open the SQL Editor for a new project, the editor includes placeholder SQL commands to create and populate a new sample table called `playing_with_neon`. For this tutorial, go ahead and create this sample table: click **Run**.

Every query you run in the SQL Editor is automatically saved with an AI-generated description, making it easy to find and reference your work later. For example, the sample table creation above will be saved with a description like "create and populate sample table in Neon". You can view your query history anytime by clicking the **History** button in the SQL Editor.

Or if you want to add the table from the command line and you already have `psql` installed:

```sql
CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL);
INSERT INTO playing_with_neon(name, value)
SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i);
```

Your default branch `production` now has a table with some data.

## Try the AI Assistant

Now that you have some sample data, let's explore how the AI Assistant can help you write SQL queries using natural language prompts.

From the SQL Editor, click the **AI Assistant** button in the top-right corner and try a few prompts:

- _Add three more rows to the playing_with_neon table with tech company names_
- _Show me the highest value in the table_
- _Calculate the average value grouped by the first letter of the name_

Each query you run is automatically saved with an AI-generated description, making it easy to find and reuse queries later.
For example, when you ask the AI Assistant to add company data, you should see a response like:

```sql
-- Text to SQL original prompt:
-- Add three more rows to the playing_with_neon table with tech company names
INSERT INTO public.playing_with_neon (name, value)
VALUES ('Google', 1000.5), ('Apple', 1200.75), ('Microsoft', 950.25);
```

With the description: "Add tech companies to playing_with_neon table"

Learn more about AI features in the [SQL Editor documentation](https://neon.com/docs/get-started/query-with-neon-sql-editor#ai-features).

## View and modify data in the console

Now that you have some data to play with, let's take a look at it on the **Tables** page in the Neon Console. The **Tables** page, powered by [Drizzle Studio](https://orm.drizzle.team/drizzle-studio/overview), provides a visual interface for exploring and modifying data directly from the console. The integration with Drizzle Studio provides the ability to add, update, and delete records, filter data, add or remove columns, drop or truncate tables, and export data in `.json` and `.csv` formats.

For a detailed guide on how to interact with your data using the **Tables** page, visit [Managing your data with interactive tables](https://neon.com/docs/guides/tables).

## Working with your development branch

Your project comes with a `development` branch that's an isolated copy of your `production` branch. Let's learn how to use the Neon CLI to manage branches and make some schema changes in your development environment.

1. **Install CLI with Brew or NPM**

   Depending on your system, you can install the Neon CLI using either Homebrew (for macOS) or NPM (for other platforms).

   - For macOS using Homebrew:

     ```bash
     brew install neonctl
     ```

   - Using NPM (applicable for all platforms that support Node.js):

     ```bash
     npm install -g neonctl
     ```

2. **Authenticate with Neon**

   The `neon auth` command launches a browser window where you can authorize the Neon CLI to access your Neon account.

   ```bash
   neon auth
   ```

3. **View your branches**

   ```bash
   neon branches list
   ```

   This command shows your existing branches, including the `production` and `development` branches.

## Make some sample schema changes

First, let's make sure our development branch is in sync with production. This ensures we're starting from the same baseline:

```bash
neon branches reset development --parent
```

Now that our development branch matches production, we can make some changes. The `playing_with_neon` table from production is now available in your `development` branch, and we'll modify its schema and add new data to demonstrate how branches can diverge.

You can use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) for this, but let's demonstrate how to connect and modify your database from the terminal using `psql`. If you don't have `psql` installed already, follow these steps to get set up:

Tab: Mac

```bash
brew install libpq
echo 'export PATH="/opt/homebrew/opt/libpq/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```

Tab: Linux

```bash
sudo apt update
sudo apt install postgresql-client
```

Tab: Windows

Download and install PostgreSQL from: https://www.postgresql.org/download/windows/ Ensure psql is included in the installation.

With `psql` available, let's work from the terminal to connect to your `development` branch's database and make changes.
1. **Connect to your database**

   Get the connection string to your branch and connect to it directly via `psql`:

   ```bash
   neon connection-string development --database-name neondb --psql
   ```

   This command establishes the psql terminal connection to the `neondb` database on your development branch.

2. **Modify the schema**

   Add a new column `description` and index it:

   ```sql
   ALTER TABLE playing_with_neon ADD COLUMN description TEXT;
   CREATE INDEX idx_playing_with_neon_description ON playing_with_neon (description);
   ```

3. **Insert new data**

   Add new data that will be exclusive to the dev branch.

   ```sql
   INSERT INTO playing_with_neon (name, description)
   VALUES ('Your dev branch', 'Exploring schema changes in the dev branch');
   ```

4. **Verify the schema changes**

   Query the table to verify your schema changes:

   ```sql
   SELECT * FROM playing_with_neon;
   ```

   Your response should include the new description column and a new row where name = `Your dev branch` and description = `Exploring schema changes in the dev branch`:

   ```sql
    id |      name       |    value    |                description
   ----+-----------------+-------------+--------------------------------------------
     1 | c4ca4238a0      |   0.5315024 |
     2 | c81e728d9d      |  0.17189825 |
     3 | eccbc87e4b      |  0.21428405 |
     4 | a87ff679a2      |   0.9721639 |
     5 | e4da3b7fbb      |   0.8649301 |
     6 | 1679091c5a      |  0.48413596 |
     7 | 8f14e45fce      |  0.82630277 |
     8 | c9f0f895fb      |  0.99945337 |
     9 | 45c48cce2e      | 0.054623786 |
    10 | d3d9446802      |  0.36634886 |
    11 | Your dev branch |             | Exploring schema changes in the dev branch
   (11 rows)
   ```

## Check your changes with Schema Diff

After making the schema changes to your development branch, you can use the [Schema Diff](https://neon.com/docs/guides/schema-diff) feature to compare your branch against its parent branch. Schema Diff is a GitHub-style code-comparison tool used to visualize differences between branches' databases.

For this tutorial, Schema Diff helps with validating isolation: it confirms that schema changes made in your isolated development branch remain separate from the production branch.

From the **Branches** page in the Neon Console:

1. Open the detailed view for your `development` branch and click **Open schema diff**.
2. Verify the right branches are selected and click **Compare**.

You can see the schema changes we added to our development branch highlighted in green.

### Schema Migrations

A more typical scenario for Schema Diff is when preparing for schema migrations. While Neon does not provide built-in schema migration tools, you can use ORMs like [Drizzle](https://drizzle.team/) or [Prisma](https://www.prisma.io/) to handle schema migrations efficiently.

Read more about using Neon in your development workflow in [Connect Neon to your stack](https://neon.com/docs/get-started/connect-neon).

## Reset your development branch to production

After experimenting with changes in your development branch, let's now reset the branch to `production`, its parent branch. [Branch reset](https://neon.com/docs/guides/reset-from-parent) functions much like a `git reset --hard parent` in traditional Git workflows.

Resetting your development branches to your production branch ensures that all changes are discarded, and your branch reflects the latest stable state of `production`. This is key to maintaining a clean slate for new development tasks and is one of the core advantages of Neon's branching capabilities.

You can reset to parent from the **Branches** page of the Neon Console, but here we'll use the Neon CLI.
Use the following command to reset your `development` branch to the state of the `production` branch:

Example:

```bash
neon branches reset development --parent
```

If you go back to your **Schema Diff** and compare branches again, you'll see they are now identical.

### When to reset your branch

Depending on your development workflow, you can use branch reset:

- **After a feature is completed and merged**

  Once your changes are merged into `production`, reset the development branch to start on the next feature.

- **When you need to abandon changes**

  If a project direction changes or if experimental changes are no longer needed, resetting the branch quickly reverts to a known good state.

- **As part of your CI/CD automation**

  With the Neon CLI, you can include branch reset as an enforced part of your CI/CD automation, automatically resetting a branch when a feature is closed or started.

Make sure that your development team is always working from the latest schema and data by including branch reset in your workflow. To read more about using branching in your workflows, see [Day 3 - Branching workflows](https://neon.com/docs/get-started/workflow-primer).

**Tip** Working with sensitive data?: Neon also supports schema-only branching. [Learn more](https://neon.com/docs/guides/branching-schema-only).

---

# Source: https://neon.com/llms/get-started-why-neon.txt

# Why Neon?

> The document "Why Neon?" outlines the key features and advantages of using Neon, a cloud-native Postgres database service, emphasizing its scalability, cost-efficiency, and developer-friendly architecture tailored for modern applications.

## Source

- [Why Neon? HTML](https://neon.com/docs/get-started/why-neon): The original HTML version of this documentation

Looking back at Neon's debut blog post, [SELECT 'Hello, World'](https://neon.com/blog/hello-world), the fundamental reasons for **Why Neon** remain the same:

- **To build the best Postgres experience in the cloud**

  This is still our core mission today. It was clear to us then, as it is now, that database workloads are shifting to the cloud — and no one wants to manage a database themselves.

- **In an ever-changing technology stack, we believe Postgres is here to stay**

  Just like the Linux operating system or Git version control, we believe Postgres is the default choice for a relational database system. That's why all of the major platforms like AWS, Azure, Google Cloud, Digital Ocean, and many newcomers to this space offer Postgres as a service.

- **An idea that a modern Postgres cloud service can be designed differently**

  We call this approach _separation of storage and compute_, which lets us architect the service around performance, reliability, manageability, and cost-efficiency.

- **The belief that our architecture can provide a better Developer Experience (DevX)**

  Features such as autoscaling, branching, time travel, instant provisioning, and instant restore improve the developer experience by allowing quick environment setup, efficient developer workflows, and immediate database availability.
These are Neon's reasons, but given the many _database-as-a-service_ options available today, let's take a look at the reasons why **you** should choose Neon:

## Neon is Postgres

**Postgres is the world's most popular open-source database.**

From its beginning as a [DARPA-sponsored project at Berkeley](https://www.postgresql.org/docs/current/history.html), Postgres has fostered an ever-growing community and is a preferred database among developers because of its performance, reliability, extensibility, and support for features like ACID transactions, advanced SQL, and NoSQL/JSON.

Neon supports all of the latest Postgres versions and numerous [Postgres extensions](https://neon.com/docs/extensions/pg-extensions). **If your application runs on Postgres, it runs on Neon**. If it doesn't run on Postgres, [sign up](https://console.neon.tech/signup) for a Free plan account, join our [Discord server](https://discord.gg/92vNTzKDGp), and start the journey with us.

## Neon is serverless

**A serverless architecture built for performance, reliability, manageability, and cost efficiency**

Neon's [architecture](https://neon.com/docs/introduction/architecture-overview) separates compute from storage, which enables serverless features like instant provisioning, [autoscaling](https://neon.com/docs/get-started/production-readiness#autoscaling), [scale to zero](https://neon.com/docs/get-started/production-readiness#scale-to-zero), and more.

Separating compute from storage refers to an architecture where the database computation processes (queries, transactions, etc.) are handled by one set of resources (compute), while the data itself is stored on a separate set of resources (storage). This design contrasts with traditional architectures where compute and storage are tightly coupled on the same server. In Neon, Postgres runs on a compute, and data (except for what's cached in local compute memory) resides on Neon's storage layer.

Separation of compute and storage allows these resources to be scaled independently. You can adjust for processing power or storage capacity as needed without affecting the other. This approach is also cost-efficient. The ability to scale resources independently means you can benefit from the lower cost of storage compared to compute or avoid paying for additional storage when you only require extra processing power. Decoupling compute and storage also improves availability and durability, as data remains accessible and safe, even if a compute fails.

[Read more about the benefits of Neon's serverless architecture](https://neon.com/docs/introduction/serverless) and how it supports database-per-user architectures, variable workloads, database branching workflows, and [AI agents](https://neon.com/use-cases/ai-agents).

**Tip** Did you know?: Neon's autoscaling feature instantly scales your compute and memory resources. **No manual intervention or restarts are required.**

## Neon is fully managed

**Leave the database administrative, maintenance, and scaling burdens to us.**

Being a fully managed service means that Neon provides high availability without requiring users to handle administrative, maintenance, or scaling burdens associated with managing a database system. This approach allows developers to focus more on developing applications and less on the operational aspects of database management. Neon takes care of the complexities of scaling, backups, maintenance, and ensuring availability, enabling developers to manage their data without worrying about the underlying infrastructure.
## Neon is open source **Neon is developed under an Apache 2.0 license.** Neon offers separation of storage and compute for Postgres, providing a modern, cloud-native approach to database architecture. We believe we have an opportunity to define the standard for cloud Postgres. We carefully designed our storage, focusing on cloud independence, performance, manageability, DevX, and cost. We chose the most permissive open-source license, Apache 2.0, and invited the world to participate. You can already build and run your own self-hosted instance of Neon. Check out our [neon GitHub repository](https://github.com/neondatabase) and the [#self-hosted](https://discord.com/channels/1176467419317940276/1184894814769127464) channel on our Discord server. ## Neon doesn't lock you in **As a true Postgres platform, there's no lock-in with Neon.** Building on Neon is building on Postgres. If you are already running Postgres, getting started is easy. [Import your data](https://neon.com/docs/import/import-intro) and [connect](https://neon.com/docs/connect/connect-intro). Migrating from other databases like MySQL or MongoDB is just as easy. If you need to move data, you won't have to tear apart your application to remove proprietary application layers. Neon is pro-ecosystem and pro-integration. We encourage you to build with the frameworks, platforms, and services that best fit your requirements. Neon works to enable that. Check out our ever-expanding collection of [framework](https://neon.com/docs/get-started/frameworks), [language](https://neon.com/docs/get-started/languages), and [integration](https://neon.com/docs/guides/integrations) guides. ## Who should use Neon? **You. And we're ready to help you get started.** Neon is designed for a wide range of users, from individual developers to enterprises, seeking modern, serverless Postgres capabilities. It caters to those who need a fully managed, scalable, and cost-effective database solution. Key users include: - **Individual developers** looking for a fast and easy way to set up a Postgres database without the hassle of installation or configuration. Neon's Free plan makes it easy to get started. [Free plan](https://neon.com/docs/introduction/plans) users get access to all regions and features like connection pooling and branching. When you are ready to scale, you can easily upgrade your account to a paid plan for more computing power, storage, and advanced features. **Tip** Neon's Free plan is here to stay: Neon's Free plan is a fundamental part of our commitment to users. Our architecture, which separates storage and compute, enables a sustainable Free plan. You can build your personal project or PoC with confidence, knowing that our Free plan is here to stay. [Learn more about our Free plan from Neon's CEO](https://twitter.com/nikitabase/status/1758639571414446415). - **Teams and organizations** that aim to enhance their development workflows with the ability to create database branches for testing new features or updates, mirroring the branching process used in code version control. - **Enterprises** requiring scalable, high-performance database solutions with advanced features like autoscaling, scale to zero, instant restore, and logical replication. Enterprises can benefit from custom pricing, higher resource allowances, and enterprise-level support to meet their specific requirements. - **AI agents** that need to rapidly provision Postgres databases, execute SQL queries, and efficiently manage Neon infrastructure. 
With one-second provision times, scale-to-zero compute, and agent-friendly client interfaces, Neon enables AI agents to manage database fleets at scale while keeping costs low. AI agents are on track to surpass humans in the number of databases created on the Neon platform. [Learn more about this use case](https://neon.com/use-cases/ai-agents).

In summary, Neon is built for anyone who requires a Postgres database and wants to benefit from the scalability, ease of use, cost savings, and advanced DevX capabilities provided by Neon's serverless architecture.

## Neon makes it easy to get started with Postgres

**Set up your Postgres database in seconds.**

1. [Sign up or log in](https://console.neon.tech/signup) with an email address, Google, or GitHub account.
2. Provide a project name and database name, and select a region.
3. Click **Create Project**.

Neon's architecture allows us to spin up a Postgres database almost instantly and provide you with a database URL, which you can plug into your application or database client.

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```
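For example, you can connect from a terminal by passing that URL to `psql`:

```bash
psql "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```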
Additionally, after signing up, we land you on your project dashboard, where you'll find connection snippets for various frameworks, languages, and platforms. If you are not quite ready to hook up an application, you can explore Neon from the console. Create the `playing_with_neon` table using the Neon [SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), run some queries, or create a database branch.

Initially, you'll be signed up for Neon's [Free plan](https://neon.com/docs/introduction/plans), but you can easily upgrade to one of our paid plans when you're ready.

---

# Source: https://neon.com/llms/get-started-workflow-primer.txt

# Database branching workflow primer

> The "Database Branching Workflow Primer" document outlines the process for creating and managing database branches in Neon, enabling users to efficiently handle development and testing environments.

## Source

- [Database branching workflow primer HTML](https://neon.com/docs/get-started/workflow-primer): The original HTML version of this documentation

With Neon, you can work with your data just like you work with your code. The key is Neon's database [branching](https://neon.com/docs/guides/branching-intro) feature, which lets you instantly create branches of your data that you can include in your workflow — as many branches as you need. Neon branches are:

- **Isolated**: changes made to a branch don't affect its parent.
- **Fast to create**: creating a branch takes ~1 second, regardless of the size of your database.
- **Cost-effective**: you're only billed for unique data across all branches, and they scale to zero when not in use (you can configure this behavior for every branch).
- **Ready to use**: branches will have the parent branch's schema and all its data (you can also include data up to a certain point in time).

If you're working with sensitive data, Neon also supports a [schema-only branching](https://neon.com/docs/guides/branching-schema-only) option.

Every Neon branch has a unique Postgres connection string, so they're completely isolated from one another.

```bash
# Branch 1
postgresql://database_name_owner:AbC123dEf@ep-shiny-cell-a5y2zuu0.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require

# Branch 2
postgresql://database_name_owner:AbC123dEf@ep-hidden-hall-a5x58cuv.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

You can create all of your branches from the default branch, or set up a dedicated branch that you use as a base. The first approach is simpler, while the second provides greater data isolation.

## Create branch methods

You can use either the Neon CLI or GitHub Actions to incorporate branching into your workflow.

### Neon CLI

Using the [Neon CLI](https://neon.com/docs/reference/neon-cli), you can create branches without leaving your editor or automate branch creation in your CI/CD pipeline. Here are the key commands you'll use:

```bash
# Create a branch
neon branches create [options]

# Get a connection string
neon connection-string [branch] [options]

# Delete a branch
neon branches delete [options]
```

For more information, see:

- [Branching with the Neon CLI](https://neon.com/docs/guides/branching-neon-cli): Learn about branching with the Neon CLI
- [Neon CLI Reference](https://neon.com/docs/reference/neon-cli): Reference for all commands in the Neon CLI

### GitHub Actions

If you're using GitHub Actions for your CI workflows, Neon provides GitHub Actions for [creating](https://neon.com/docs/guides/branching-github-actions#create-branch-action), [deleting](https://neon.com/docs/guides/branching-github-actions#delete-branch-action), and [resetting](https://neon.com/docs/guides/branching-github-actions#reset-from-parent-action) branches — and there's also a [schema diff action](https://neon.com/docs/guides/branching-github-actions#schema-diff-action).

Tab: Create branch

Here is an example of what a create branch action might look like:

```yaml
name: Create Neon Branch with GitHub Actions Demo
run-name: Create a Neon Branch 🚀
jobs:
  Create-Neon-Branch:
    runs-on: ubuntu-latest
    steps:
      - uses: neondatabase/create-branch-action@v5
        id: create-branch
        with:
          project_id: rapid-haze-373089
          parent_id: br-long-forest-224191
          branch_name: from_action_reusable
          api_key: ${{ secrets.NEON_API_KEY }}
      - run: echo project_id ${{ steps.create-branch.outputs.project_id }}
      - run: echo branch_id ${{ steps.create-branch.outputs.branch_id }}
```

Tab: Delete branch

Here is an example of what a delete branch action might look like:

```yaml
name: Delete Neon Branch with GitHub Actions
run-name: Delete a Neon Branch 🚀
on:
  push:
    branches:
      - 'production'
jobs:
  delete-neon-branch:
    runs-on: ubuntu-latest
    steps:
      - uses: neondatabase/delete-branch-action@v3
        with:
          project_id: rapid-haze-373089
          branch: br-long-forest-224191
          api_key: ${{ secrets.NEON_API_KEY }}
```

You can find these GitHub Actions here:

- [Create branch Action](https://github.com/neondatabase/create-branch-action): Create Neon Branch GitHub Action
- [Delete Branch Action](https://github.com/neondatabase/delete-branch-action): Delete Neon Branch GitHub Action
- [Reset Branch Action](https://github.com/neondatabase/reset-branch-action): Reset Neon Branch GitHub Action
- [Schema Diff Action](https://github.com/neondatabase/schema-diff-action): Neon Schema Diff GitHub Action

For more detailed documentation, see [Automate branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions).

## A branch for every environment

Here's how you can integrate Neon branching into your workflow:

### Development

You can create a Neon branch for every developer on your team.
This ensures that every developer has an isolated environment that includes schemas and data. These branches are meant to be long-lived, so each developer can tailor their branch based on their needs. With Neon's [branch reset capability](https://neon.com/docs/manage/branches#reset-a-branch-from-parent), developers can refresh their branch with the latest schemas and data anytime they need.

**Tip**: To easily identify branches dedicated to development, we recommend prefixing branch names with `dev/<developer-name>`, or `dev/<feature-name>` if multiple developers collaborate on the same development branch. Examples:

```bash
dev/alice
dev/new-onboarding
```

### Preview environments

Whenever you create a pull request, you can create a Neon branch for your preview deployment. This allows you to test your code changes and SQL migrations against production-like data.

**Tip**: We recommend following this naming convention to identify these branches easily:

```bash
preview/pr-<pr-number>-<branch-name>
```

Example:

```bash
preview/pr-123-feat/new-login-screen
```

You can also automate branch creation for every preview. These example applications show how to create Neon branches with GitHub Actions for every preview environment.

- [Preview branches with Fly.io](https://github.com/neondatabase/preview-branches-with-fly): Sample project showing you how to create a branch for every Fly.io preview deployment
- [Preview branches with Vercel](https://github.com/neondatabase/preview-branches-with-vercel): Sample project showing you how to create a branch for every Vercel preview deployment

### Testing

When running automated tests that require a database, each test run can have its own branch with its own compute resources. You can create a branch at the start of a test run and delete it at the end, as shown in the sketch after this section.

**Tip**: We recommend following this naming convention to identify these branches easily:

```bash
test/<git-branch-name>-<test-name>-<commit-SHA>-<date>
```

The time of the test execution can be an epoch UNIX timestamp (e.g., 1704305739). For example:

```bash
test/feat/new-login-loginPageFunctionality-1a2b3c4d-20240211T1530
```

You can create test branches from the same date and time or Log Sequence Number (LSN) for tests requiring static or deterministic data.
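Tying the CLI commands together, a single test run's branch lifecycle might look like the following sketch (the project ID is illustrative; adjust the naming to your convention):

```bash
# Create a short-lived branch for this test run
BRANCH_NAME="test/feat/new-login-loginPageFunctionality-1a2b3c4d-$(date -u +%Y%m%dT%H%M)"
neon branches create --project-id rapid-haze-373089 --name "$BRANCH_NAME"

# Run the test suite against the branch's isolated connection string
DATABASE_URL=$(neon connection-string "$BRANCH_NAME" --project-id rapid-haze-373089)
DATABASE_URL="$DATABASE_URL" npm test

# Delete the branch once the run completes
neon branches delete "$BRANCH_NAME" --project-id rapid-haze-373089
```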
---

# Source: https://neon.com/llms/guides-askyourdatabase.txt

# Chat with Neon Postgres with AskYourDatabase

> The document explains how to use AskYourDatabase to interact with Neon Postgres databases through natural language queries, enabling users to efficiently retrieve and manage data without traditional SQL commands.

## Source

- [Chat with Neon Postgres with AskYourDatabase HTML](https://neon.com/docs/guides/askyourdatabase): The original HTML version of this documentation

AskYourDatabase is the ChatGPT for SQL databases, enabling you to interact with your SQL databases using natural language. You can use it for data management, business intelligence, schema design & migration, data visualization, and more. To learn more, see [AskYourDatabase](https://www.askyourdatabase.com/). This guide shows how to connect from AskYourDatabase to Neon Postgres.

## Prerequisites

- AskYourDatabase Desktop app. See [Download AskYourDatabase](https://www.askyourdatabase.com/download).
- A Neon project. See [Create a Neon project](https://neon.com/docs/manage/projects#create-a-project).

## Connect to Neon from AskYourDatabase

1. Get the Neon URL by navigating to the Neon Console and copying the connection string. The URL will look something like this:

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

2. Go to AskYourDatabase and click **Connect to your database**.
3. Select PostgreSQL as your database type, and paste your connection string.
4. If the connection is successful, a new chat session opens.

## Chat with your data

Within the chat session, you can start asking your database questions. For example, suppose you have a `user` table with a column named `dbType` that indicates the type of database. If you want to know what the four most popular databases are and visualize the distribution in a pie chart, you can quickly and easily do so with a natural language question.

## What's more

AskYourDatabase also supports a customer-facing chatbot that can connect to a Neon Postgres database. You can embed the chatbot in your existing website, enabling your customers to explore analytics data by asking questions in natural language. To learn more, see [Create and Integrate Chatbot](https://www.askyourdatabase.com/docs/chatbot), in the AskYourDatabase documentation.

---

# Source: https://neon.com/llms/guides-astro.txt

# Connect Astro to Postgres on Neon

> This document guides users on connecting Astro applications to a Postgres database hosted on Neon, detailing the necessary steps and configurations for seamless integration.

## Source

- [Connect Astro to Postgres on Neon HTML](https://neon.com/docs/guides/astro): The original HTML version of this documentation

Astro builds fast content sites, powerful web applications, dynamic server APIs, and everything in-between. This guide describes how to create a Neon Postgres database and access it from an Astro site or application.

To create a Neon project and access it from an Astro site or application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details, including your password; they are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create an Astro project and add dependencies

1. Create an Astro project if you do not have one. For instructions, see [Getting Started](https://docs.astro.build/en/getting-started/), in the Astro documentation.
2. Add project dependencies using one of the following commands:

Tab: node-postgres

```shell
npm install pg
```

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection string to it. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://<user>:<password>@<endpoint>.neon.tech:<port>/<dbname>?sslmode=require&channel_binding=require"
```

## Configure the Postgres client

There are multiple ways to make server-side requests with Astro. Below are two of those options: **astro files** and **Server Endpoints (API Routes)**.
### astro files

In your `.astro` files, use the following code snippet to connect to your Neon database:

Tab: node-postgres

```astro
---
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: import.meta.env.DATABASE_URL,
  ssl: true,
});
const client = await pool.connect();

let data = null;
try {
  const response = await client.query('SELECT version()');
  data = response.rows[0].version;
} finally {
  client.release();
}
---

{data}
```

Tab: postgres.js

```astro
---
import postgres from 'postgres';

const sql = postgres(import.meta.env.DATABASE_URL, { ssl: 'require' });

const response = await sql`SELECT version()`;
const data = response[0].version;
---

{data}
```

Tab: Neon serverless driver

```astro
---
import { neon } from '@neondatabase/serverless';

const sql = neon(import.meta.env.DATABASE_URL);

const response = await sql`SELECT version()`;
const data = response[0].version;
---

{data}
```

#### Run the app

When you run `npm run dev` you can expect to see the following when you visit [localhost:4321](http://localhost:4321):

```shell
PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
```

### Server Endpoints (API Routes)

In the server endpoints (API Routes) of your Astro application, use the following code snippet to connect to your Neon database:

Tab: node-postgres

```javascript
// File: src/pages/api/index.ts
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: import.meta.env.DATABASE_URL,
  ssl: true,
});

export async function GET() {
  const client = await pool.connect();
  let data = {};
  try {
    const { rows } = await client.query('SELECT version()');
    data = rows[0];
  } finally {
    client.release();
  }
  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```

Tab: postgres.js

```javascript
// File: src/pages/api/index.ts
import postgres from 'postgres';

export async function GET() {
  const sql = postgres(import.meta.env.DATABASE_URL, { ssl: 'require' });
  const response = await sql`SELECT version()`;
  return new Response(JSON.stringify(response[0]), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```

Tab: Neon serverless driver

```javascript
// File: src/pages/api/index.ts
import { neon } from '@neondatabase/serverless';

export async function GET() {
  const sql = neon(import.meta.env.DATABASE_URL);
  const response = await sql`SELECT version()`;
  return new Response(JSON.stringify(response[0]), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```

#### Run the app

When you run `npm run dev` you can expect to see the following when you visit the [localhost:4321/api](http://localhost:4321/api) route:

```shell
{ version: 'PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit' }
```

## Source code

You can find the source code for the applications described in this guide on GitHub.

- [Get started with Astro and Neon](https://github.com/neondatabase/examples/tree/main/with-astro)
- [Get started with Astro API Routes and Neon](https://github.com/neondatabase/examples/tree/main/with-astro-api-routes)
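One more note on the serverless driver: values interpolated into its tagged template are sent to Postgres as bound parameters rather than concatenated into the SQL string. A hypothetical endpoint (assuming a `todos` table exists in your database; this file is not part of the example repos) that reads a query-string parameter safely might look like this:

```typescript
// File: src/pages/api/todo.ts — illustrative sketch
import { neon } from '@neondatabase/serverless';
import type { APIRoute } from 'astro';

export const GET: APIRoute = async ({ url }) => {
  const sql = neon(import.meta.env.DATABASE_URL);
  const id = url.searchParams.get('id');
  // `${id}` becomes a bound query parameter, so user input never
  // ends up inside the SQL text itself
  const rows = await sql`SELECT * FROM todos WHERE id = ${id}`;
  return new Response(JSON.stringify(rows), {
    headers: { 'Content-Type': 'application/json' },
  });
};
```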
---

# Source: https://neon.com/llms/guides-auth-auth0.txt

# Authenticate Neon Postgres application users with Auth0

> This document guides Neon users on integrating Auth0 for authenticating Postgres application users, detailing the setup process and configuration steps necessary for seamless authentication.

## Source

- [Authenticate Neon Postgres application users with Auth0 HTML](https://neon.com/docs/guides/auth-auth0): The original HTML version of this documentation

User authentication is an essential part of most web applications. Modern apps often require features like social login, multi-factor authentication, and secure user data management that complies with privacy regulations.

[Auth0](https://auth0.com/) is an authentication and authorization platform that provides these features out of the box. It offers SDKs for popular web frameworks, making it straightforward to integrate with your application backed by a Neon Postgres database.

In this guide, we'll walk through setting up a simple Next.js application using Neon Postgres as the database, and add user authentication using [Auth0](https://auth0.com/). We will cover how to:

- Set up a Next.js project with Auth0 for authentication
- Create a Neon Postgres database and connect it to your application
- Define a database schema using Drizzle ORM and generate migrations
- Store and retrieve user data associated with Auth0 user IDs

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- An [Auth0](https://auth0.com/) account for user authentication. Auth0 provides a free plan to get started.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally.

## Initialize your Next.js project

We will create a simple web app that lets you add a favorite quote to the home page, and edit it afterward. Run the following command in your terminal to create a new `Next.js` project:

```bash
npx create-next-app guide-neon-next-auth0 --typescript --eslint --tailwind --use-npm --no-src-dir --app --import-alias "@/*"
```

Now, navigate to the project directory and install the required dependencies:

```bash
npm install @neondatabase/serverless drizzle-orm
npm install -D drizzle-kit dotenv
npm install @auth0/nextjs-auth0
```

We use the `@neondatabase/serverless` package as the Postgres client, and `drizzle-orm`, a lightweight typescript ORM, to interact with the database. `@auth0/nextjs-auth0` is the Auth0 SDK for Next.js applications. We also use `dotenv` to manage environment variables and the `drizzle-kit` CLI tool for generating database migrations.

Also, add a `.env.local` file to the root of your project, which we'll use to store Neon/Auth0 connection parameters:

```bash
touch .env.local
```

**Note**: At the time of writing, the `@auth0/nextjs-auth0` package caused import errors related to one of its dependencies (`oauth4webapi`). To stop Next.js from raising the error, add the following to your `next.config.mjs` file:

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: { esmExternals: 'loose' },
};

export default nextConfig;
```

Now, we can start building the application.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select an existing project or click the **New Project** button to create a new one.
3. Choose the desired region and Postgres version for your project, then click **Create Project**.
### Retrieve your Neon database connection string

You can find your connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Add this connection string to the `.env.local` file in your Next.js project.

```bash
# .env.local
DATABASE_URL=NEON_DB_CONNECTION_STRING
```

## Configuring Auth0 for authentication

### Create an Auth0 application

1. Log in to your Auth0 account and navigate to the [Dashboard](https://manage.auth0.com/dashboard/). From the left sidebar, select `Applications > Create Application` to create a new app.
2. In the dialog that appears, provide a name for your application, select `Regular Web Applications` as the application type, and click `Create`.

### Configure Auth0 application settings

1. In the `Settings` tab of your Auth0 application, scroll down to the `Application URIs` section.
2. Set the `Allowed Callback URLs` to `http://localhost:3000/api/auth/callback` for local development.
3. Set the `Allowed Logout URLs` to `http://localhost:3000`.
4. Click `Save Changes` at the bottom of the page.

### Retrieve your Auth0 domain and client credentials

From the `Settings` tab of your Auth0 application, copy the `Domain`, `Client ID`, and `Client Secret` values. Add these to the `.env.local` file in your Next.js project:

```bash
# .env.local
AUTH0_SECRET='random-32-byte-value'
AUTH0_BASE_URL='http://localhost:3000'
AUTH0_ISSUER_BASE_URL='https://YOUR_AUTH0_DOMAIN'
AUTH0_CLIENT_ID='YOUR_AUTH0_CLIENT_ID'
AUTH0_CLIENT_SECRET='YOUR_AUTH0_CLIENT_SECRET'
```

Replace `YOUR_AUTH0_DOMAIN`, `YOUR_AUTH0_CLIENT_ID` and `YOUR_AUTH0_CLIENT_SECRET` with the actual values from your Auth0 application settings. Run the following command in your terminal to generate a random 32-byte value for the `AUTH0_SECRET` variable:

```bash
node -e "console.log(crypto.randomBytes(32).toString('hex'))"
```

## Implementing the application

### Define your database connection and schema

Create a `db` folder inside the `app/` directory. This is where we'll define the database schema and connection code. Now, add the file `app/db/index.ts` with the following content:

```typescript
/// app/db/index.ts
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';
import { UserMessages } from './schema';

if (!process.env.DATABASE_URL) {
  throw new Error('DATABASE_URL must be a Neon postgres connection string');
}

const sql = neon(process.env.DATABASE_URL);
export const db = drizzle(sql, {
  schema: { UserMessages },
});
```

This exports a `db` instance that we can use to execute queries against the Neon database.

Next, create a `schema.ts` file inside the `app/db` directory to define the database schema:

```typescript
/// app/db/schema.ts
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';

export const UserMessages = pgTable('user_messages', {
  user_id: text('user_id').primaryKey().notNull(),
  createTs: timestamp('create_ts').defaultNow().notNull(),
  message: text('message').notNull(),
});
```

This schema defines a table `user_messages` to store a message for each user, with the `user_id` provided by Auth0 as the primary key.

### Generate and run migrations

We'll use the `drizzle-kit` CLI tool to generate migrations for the schema we defined. To configure how it connects to the database, add a `drizzle.config.ts` file at the project root.
```typescript
/// drizzle.config.ts
import type { Config } from 'drizzle-kit';
import * as dotenv from 'dotenv';

dotenv.config({ path: '.env.local' });

if (!process.env.DATABASE_URL) throw new Error('DATABASE_URL not found in environment');

export default {
  schema: './app/db/schema.ts',
  out: './drizzle',
  dialect: 'postgresql',
  dbCredentials: {
    url: process.env.DATABASE_URL,
  },
  strict: true,
} satisfies Config;
```

Now, generate the migration files by running the following command:

```bash
npx drizzle-kit generate
```

This will create a `drizzle` folder at the project root with the migration files. To apply the migration to the database, run:

```bash
npx drizzle-kit push
```

The `user_messages` table will now be visible in the Neon console.

### Configure Auth0 authentication

We create a dynamic route to handle the Auth0 authentication flow. Create a new file `app/api/auth/[auth0]/route.ts` with the following content:

```typescript
/// app/api/auth/[auth0]/route.ts
import { handleAuth, handleLogin } from '@auth0/nextjs-auth0';

// App Router route handlers use named HTTP-method exports
export const GET = handleAuth({
  login: handleLogin(),
});
```

This sets up the necessary Auth0 authentication routes for the application at the `/api/auth/*` endpoints - `login`, `logout`, `callback` (to redirect to after a successful login), and `me` (to fetch the user profile).

Next, we will wrap the application with the `UserProvider` component from `@auth0/nextjs-auth0`, so all pages have access to the current user context. Replace the contents of the `app/layout.tsx` file with the following:

```tsx
/// app/layout.tsx
import type { Metadata } from 'next';
import { Inter } from 'next/font/google';
import './globals.css';
import { getSession } from '@auth0/nextjs-auth0';
import { UserProvider } from '@auth0/nextjs-auth0/client';

const inter = Inter({ subsets: ['latin'] });

export const metadata: Metadata = {
  title: 'Neon-Next-Auth0 guide',
  description: 'Generated by create next app',
};

async function UserInfoBar() {
  const session = await getSession();
  if (!session) {
    return null;
  }
  const { user } = session;
  return (
    // Representative markup; the original styling was lost in extraction
    <div className="bg-gray-100 px-4 py-2 text-right">
      Welcome, {user.name}!{' '}
      <a href="/api/auth/logout" className="text-blue-600 underline">
        Logout
      </a>
    </div>
  );
}

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={inter.className}>
        <UserProvider>
          <UserInfoBar />
          {children}
        </UserProvider>
      </body>
    </html>
  );
}
```

### Add interactivity to the application

Our application has a single page that lets the logged-in user store their favorite quote and displays it. We implement `Next.js` server actions to handle the form submission and database interaction. Create a new file at `app/actions.ts` with the following content:

```typescript
/// app/actions.ts
'use server';

import { getSession } from '@auth0/nextjs-auth0/edge';
import { UserMessages } from './db/schema';
import { db } from './db';
import { redirect } from 'next/navigation';
import { eq } from 'drizzle-orm';

export async function createUserMessage(formData: FormData) {
  const session = await getSession();
  if (!session) throw new Error('User not authenticated');

  const message = formData.get('message') as string;
  await db.insert(UserMessages).values({
    user_id: session.user.sub,
    message,
  });
  redirect('/');
}

export async function deleteUserMessage() {
  const session = await getSession();
  if (!session) throw new Error('User not authenticated');

  await db.delete(UserMessages).where(eq(UserMessages.user_id, session.user.sub));
  redirect('/');
}
```

The `createUserMessage` function inserts a new message into the `user_messages` table, while `deleteUserMessage` removes the message associated with the current user.

Next, we implement a minimal UI to interact with these functions. Replace the contents of the `app/page.tsx` file with the following:

```tsx
/// app/page.tsx
import { createUserMessage, deleteUserMessage } from './actions';
import { db } from './db';
import { getSession } from '@auth0/nextjs-auth0/edge';

async function getUserMessage() {
  const session = await getSession();
  if (!session) return null;
  return db.query.UserMessages.findFirst({
    where: (messages, { eq }) => eq(messages.user_id, session.user.sub),
  });
}

function LoginBox() {
  return (
    // Representative markup; the original styling was lost in extraction
    <main className="flex min-h-screen flex-col items-center justify-center gap-4">
      <a href="/api/auth/login" className="text-blue-600 underline">
        Log in
      </a>
    </main>
  );
}

export default async function Home() {
  const session = await getSession();
  const existingMessage = await getUserMessage();

  if (!session) {
    return <LoginBox />;
  }

  const ui = existingMessage ? (
    <div className="flex flex-col items-center gap-4">
      <div className="text-xl">{existingMessage.message}</div>
      <form action={deleteUserMessage}>
        <button type="submit">Delete quote</button>
      </form>
    </div>
  ) : (
    <form action={createUserMessage} className="flex flex-col items-center gap-4">
      <input type="text" name="message" placeholder="Enter a quote" required className="rounded border p-2" />
      <button type="submit">Save quote</button>
    </form>
  );

  return (
    <main className="flex min-h-screen flex-col items-center justify-center gap-6 p-8">
      <h1 className="text-2xl font-semibold">
        {existingMessage ? 'Your quote is wonderful...' : 'Save an inspiring quote for yourself...'}
      </h1>
      {ui}
    </main>
  );
}
```

This implements a form with a single text field that lets the user enter a quote and submit it; the quote is stored in the database, associated with their `Auth0` user ID. If a quote is already stored, it displays the quote and provides a button to delete it. The `getSession` function from `@auth0/nextjs-auth0/edge` provides the current user's session information, which we use to interact with the database on their behalf. If the user is not authenticated, the page displays a login button instead.

## Running the application

To start the application, run the following command:

```bash
npm run dev
```

This will start the Next.js development server. Open your browser and navigate to `http://localhost:3000` to see the application in action. When running for the first time, you'll be prompted to log in with Auth0. By default, Auth0 provides email and Google account as login options. Once authenticated, you'll be able to visit the home page, add a quote, and see it displayed.

## Conclusion

In this guide, we walked through setting up a simple Next.js application with user authentication using Auth0 and a Neon Postgres database. We defined a database schema using Drizzle ORM, generated migrations, and interacted with the database to store and retrieve user data.

Next, we can add more routes and features to the application. The `UserProvider` component from `@auth0/nextjs-auth0` provides the user context to each page, allowing you to conditionally render content based on the user's authentication state.
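For example, a hypothetical client component (the file name and markup are ours, not the guide's) could read that context with the SDK's `useUser` hook:

```tsx
// app/components/Greeting.tsx — illustrative sketch only
'use client';

import { useUser } from '@auth0/nextjs-auth0/client';

export default function Greeting() {
  // useUser reads the context supplied by UserProvider
  const { user, error, isLoading } = useUser();
  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>{error.message}</p>;
  return <p>{user ? `Hi, ${user.name}!` : 'You are not signed in.'}</p>;
}
```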
To view and manage the users who authenticated with your application, you can navigate to the [Auth0 Dashboard](https://manage.auth0.com/) and click on **User Management > Users** in the sidebar. Here, you can see the list of users who have logged in and perform any necessary actions for those users.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Authentication flow with Auth0](https://github.com/neondatabase/guide-neon-next-auth0): Authenticate users of your Neon application with Auth0

## Resources

For more information on the tools used in this guide, refer to the following documentation:

- [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver)
- [Next.js Documentation](https://nextjs.org/docs)
- [Drizzle ORM](https://orm.drizzle.team/)
- [Auth0 Next.js SDK](https://auth0.com/docs/quickstart/webapp/nextjs)

---

# Source: https://neon.com/llms/guides-auth-authjs.txt

# Authenticate Neon Postgres application users with Auth.js

> The document details how to authenticate Neon Postgres application users using Auth.js, outlining the integration process and configuration steps necessary for secure user authentication within Neon environments.

## Source

- [Authenticate Neon Postgres application users with Auth.js HTML](https://neon.com/docs/guides/auth-authjs): The original HTML version of this documentation

**Tip** Did you know?: We recently introduced an Auth.js adapter for Neon, making it easier to store user and session data in Neon. For installation and setup instructions, see [Neon Adapter](https://authjs.dev/getting-started/adapters/neon).

[Auth.js](https://authjs.dev/) (formerly NextAuth.js) is a popular authentication solution that supports a wide range of authentication methods, including social logins (e.g., Google, Facebook), traditional email/password, and passwordless options like magic links.

For simple authentication flows, such as social logins, Auth.js can operate without a database, keeping session state in an encrypted browser cookie. However, if you want to implement custom login flows, or persist the signed-in users' information in your database, you need to specify a database backend. For example, passwordless authentication methods like magic links require secure storage of temporary tokens. Magic link login has become increasingly popular since it eliminates the need for users to remember complex passwords, reducing the risk of credential-based attacks.

In this guide, we'll walk through setting up a simple Next.js application, using Neon Postgres as the database backend for both Auth.js authentication and application data. We'll use [Resend](https://resend.com/) for sending magic link emails. We will cover how to:

- Set up a Next.js project with Auth.js for magic link authentication
- Create a Neon Postgres database and configure it as the Auth.js database backend
- Configure Resend as an authentication provider
- Implement a basic authenticated feature (a simple todo list)

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). We'll use a database named `neondb` in the following examples.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally.
- A [Resend](https://resend.com/) account for sending emails. Resend offers a free tier to get started.
- A domain you own for sending emails with Resend (optional for testing; Resend's test address works without one, as described below).

## Initialize your Next.js project

Run the following command in your terminal to create a new Next.js project:

```bash
npx create-next-app guide-neon-next-authjs --typescript --eslint --tailwind --use-npm --no-src-dir --app --import-alias "@/*"
```

Now, navigate to the project directory and install the required dependencies:

```bash
cd guide-neon-next-authjs
npm install next-auth@beta
npm install @auth/pg-adapter @neondatabase/serverless
```

For authentication, we'll use the `Auth.js` library (published as v5 of the `next-auth` package), which provides a simple way to add authentication to Next.js applications. It comes with built-in support for Resend as an authentication provider. We use the `@neondatabase/serverless` package as the Postgres client for the `Auth.js` database adapter.

Also, add a `.env` file to the root of your project, which we'll use to store the Neon connection string and the Resend API key:

```bash
touch .env
```

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon console and go to the [Projects](https://console.neon.tech/app/projects) section.
2. Click the **New Project** button to create a new project.
3. Choose your preferred region and Postgres version, then click **Create Project**.

### Retrieve your Neon database connection string

You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Add this connection string to your `.env` file:

```bash
# .env
DATABASE_URL="YOUR_NEON_CONNECTION_STRING"
```

## Configuring Auth.js and Resend

### Set up Resend

1. Sign up for a [Resend](https://resend.com/) account if you don't already have one.
2. In the Resend dashboard, create an API key.
3. Add the API key to your `.env` file:

```bash
# .env
AUTH_RESEND_KEY="YOUR_RESEND_API_KEY"
```

4. Optional: Resend requires verification of ownership for the domain you use to send emails from. If you own a domain, you can follow the instructions [here](https://resend.com/docs/dashboard/domains/introduction) to verify ownership. For this example, we'll use the test email address (`onboarding@resend.dev`) to send emails. However, this only works for the email address you use to sign up for a Resend account, so you won't be able to sign in from other email accounts.

### Configure Auth.js

Create a new file `auth.ts` in the root directory of the project and add the following content:

```typescript
/// auth.ts
import NextAuth from 'next-auth';
import Resend from 'next-auth/providers/resend';
import PostgresAdapter from '@auth/pg-adapter';
import { Pool } from '@neondatabase/serverless';

// *DO NOT* create a `Pool` here, outside the request handler.
export const { handlers, auth, signIn, signOut } = NextAuth(() => {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  return {
    adapter: PostgresAdapter(pool),
    // The sender address was lost in extraction; the test address is assumed here
    providers: [Resend({ from: 'Test <onboarding@resend.dev>' })],
  };
});
```

This file sets up Auth.js with the Neon Postgres adapter and configures the Resend email provider for magic link authentication. Additionally, `Auth.js` also requires setting up an `AUTH_SECRET` environment variable, which is used to encrypt cookies and magic tokens. You can use the `Auth.js` CLI to generate one:

```bash
npx auth secret
```

Add the generated secret to your `.env` file:

```bash
# .env
AUTH_SECRET="YOUR_AUTH_SECRET"
```

### Implement authentication routes

Create a new dynamic route at `app/api/auth/[...nextauth]/route.ts` with the following content:

```tsx
/// app/api/auth/[...nextauth]/route.ts
import { handlers } from '@/auth';

export const { GET, POST } = handlers;
```

This route file imports the authentication handlers from the `auth.ts` file that handle all auth-related requests — sign-in, sign-out, and redirect after authentication. The `auth` object exported from `./auth.ts` is the universal method we can use to interact with the authentication state in the application. For example, we add a message above the main app layout that indicates the current user's name and a sign-out button at the bottom.
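As an illustrative sketch (the component name and markup are ours, not part of the guide's app), a server component can read the session and trigger sign-out through those same helpers:

```tsx
// app/components/SessionBanner.tsx — hypothetical example
import { auth, signOut } from '@/auth';

export default async function SessionBanner() {
  const session = await auth(); // null when no one is signed in
  if (!session?.user) return null;

  return (
    <form
      action={async () => {
        'use server';
        await signOut(); // ends the session and clears its cookie
      }}
    >
      <span>Signed in as {session.user.email}</span>{' '}
      <button type="submit">Sign out</button>
    </form>
  );
}
```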
## Implementing the application ### Create the database schema Create a new file `app/db/schema.sql` with the following content: ```sql -- Auth.js required tables CREATE TABLE IF NOT EXISTS users ( id SERIAL, name VARCHAR(255), email VARCHAR(255), "emailVerified" TIMESTAMPTZ, image TEXT, PRIMARY KEY (id) ); CREATE TABLE IF NOT EXISTS accounts ( id SERIAL, "userId" INTEGER NOT NULL, type VARCHAR(255) NOT NULL, provider VARCHAR(255) NOT NULL, "providerAccountId" VARCHAR(255) NOT NULL, refresh_token TEXT, access_token TEXT, expires_at BIGINT, token_type TEXT, scope TEXT, id_token TEXT, session_state TEXT, PRIMARY KEY (id) ); CREATE TABLE IF NOT EXISTS sessions ( id SERIAL, "sessionToken" VARCHAR(255) NOT NULL, "userId" INTEGER NOT NULL, expires TIMESTAMPTZ NOT NULL, PRIMARY KEY (id) ); CREATE TABLE IF NOT EXISTS verification_token ( identifier TEXT, token TEXT, expires TIMESTAMPTZ NOT NULL, PRIMARY KEY (identifier, token) ); -- Application-specific table CREATE TABLE IF NOT EXISTS todos ( id SERIAL PRIMARY KEY, user_id INTEGER NOT NULL, content TEXT NOT NULL, completed BOOLEAN NOT NULL DEFAULT FALSE, created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP, FOREIGN KEY (user_id) REFERENCES users(id) ); ``` This schema defines all the tables required for the `Auth.js` library to work, and also the `todos` table that we'll use to store the todo list for each user. To apply this schema to your Neon database, you can use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) in the web console or a database management tool like [psql](https://neon.com/docs/connect/query-with-psql-editor). ### Implement the Todo list feature Create a new file `app/TodoList.tsx`: ```tsx 'use client'; import { useState } from 'react'; type Todo = { id: number; content: string; completed: boolean; }; export default function TodoList({ initialTodos }: { initialTodos: Todo[] }) { const [todos, setTodos] = useState(initialTodos); const [newTodo, setNewTodo] = useState(''); const addTodo = async (e: React.FormEvent) => { e.preventDefault(); if (!newTodo.trim()) return; const response = await fetch('/api/todos', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ content: newTodo }), }); if (response.ok) { const todo = await response.json(); setTodos([...todos, todo]); setNewTodo(''); } }; const toggleTodo = async (id: number) => { const response = await fetch(`/api/todos/${id}`, { method: 'PATCH' }); if (response.ok) { setTodos( todos.map((todo) => (todo.id === id ? { ...todo, completed: !todo.completed } : todo)) ); } }; return (
    // Representative markup; the original structure was lost in extraction
    <div className="w-full max-w-md">
      <form onSubmit={addTodo} className="mb-4">
        <input
          type="text"
          value={newTodo}
          onChange={(e) => setNewTodo(e.target.value)}
          placeholder="Add a new todo"
          className="mb-2 w-full rounded border p-2"
        />
        <button type="submit" className="w-full rounded bg-blue-600 p-2 text-white">
          Add Todo
        </button>
      </form>
      <ul>
        {todos.map((todo) => (
          <li
            key={todo.id}
            onClick={() => toggleTodo(todo.id)}
            className="flex cursor-pointer items-center space-x-2"
          >
            <input type="checkbox" checked={todo.completed} readOnly />
            <span className={todo.completed ? 'line-through' : ''}>{todo.content}</span>
          </li>
        ))}
      </ul>
    </div>
  );
}
```

### Update the main page

Replace the contents of `app/page.tsx` with:

```tsx
import { auth } from '@/auth';
import TodoList from '@/app/TodoList';
import { Pool } from '@neondatabase/serverless';

async function getTodos(userId: string) {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const { rows } = await pool.query('SELECT * FROM todos WHERE user_id = $1', [userId]);
  await pool.end();
  return rows;
}

type Todo = {
  id: number;
  content: string;
  completed: boolean;
};

export default async function Home() {
  const session = await auth();
  // Fetch the signed-in user's todos (reconstructs logic implied by the original)
  const todos = session?.user?.id ? ((await getTodos(session.user.id)) as Todo[]) : [];

  return (
    // Representative markup; the original styling was lost in extraction
    <main className="mx-auto flex max-w-md flex-col gap-4 p-8">
      {!session ? (
        <>
          <h1 className="text-2xl font-bold">Welcome to the Todo App</h1>
          <p>Please sign in to access your todos.</p>
          <a href="/api/auth/signin" className="text-blue-600 underline">
            Sign In
          </a>
        </>
      ) : (
        <>
          <h1 className="text-2xl font-bold">
            Welcome, {session.user?.name || session.user?.email}
          </h1>
          <TodoList initialTodos={todos} />
          <a href="/api/auth/signout" className="text-blue-600 underline">
            Sign Out
          </a>
        </>
      )}
    </main>
  );
}
```

### Create API routes for the todos feature

Create a new file `app/api/todos/route.ts`:

```typescript
import { NextResponse } from 'next/server';
import { auth } from '@/auth';
import { Pool } from '@neondatabase/serverless';

export async function POST(req: Request) {
  const session = await auth();
  if (!session) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }
  const { content } = await req.json();
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  try {
    const { rows } = await pool.query(
      'INSERT INTO todos (user_id, content) VALUES ($1, $2) RETURNING *',
      [session.user.id, content]
    );
    return NextResponse.json(rows[0]);
  } catch (error) {
    return NextResponse.json({ error: 'Failed to create todo' }, { status: 500 });
  } finally {
    await pool.end();
  }
}
```

This implements a simple API endpoint that allows users to create new todos. Create another file `app/api/todos/[id]/route.ts`:

```typescript
import { NextResponse } from 'next/server';
import { auth } from '@/auth';
import { Pool } from '@neondatabase/serverless';

export async function PATCH(req: Request, { params }: { params: { id: string } }) {
  const session = await auth();
  if (!session) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  try {
    const { rows } = await pool.query(
      'UPDATE todos SET completed = NOT completed WHERE id = $1 AND user_id = $2 RETURNING *',
      [params.id, session.user.id]
    );
    if (rows.length === 0) {
      return NextResponse.json({ error: 'Todo not found' }, { status: 404 });
    }
    return NextResponse.json(rows[0]);
  } catch (error) {
    return NextResponse.json({ error: 'Failed to update todo' }, { status: 500 });
  } finally {
    await pool.end();
  }
}
```

This implements a simple API endpoint that allows users to update the status of a todo.
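If you want to exercise these endpoints outside the UI, one option is to replay your browser session's cookie with `curl` (the cookie name below assumes Auth.js v5 defaults; copy the value from your browser's developer tools):

```bash
# Create a todo as the signed-in user (hypothetical cookie value)
curl -X POST http://localhost:3000/api/todos \
  -H 'Content-Type: application/json' \
  --cookie 'authjs.session-token=<session-cookie-from-your-browser>' \
  -d '{"content": "Try Neon branching"}'
```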
## Running the application

To start the application, run:

```bash
npm run dev
```

This will start the Next.js development server. Open your browser and navigate to `http://localhost:3000` to see the application in action.

When running for the first time, you'll see a `Sign In` link which will redirect you to the `Auth.js` widget, prompting you to input your email address. Enter your email to receive a magic link. Once authenticated, you'll be able to add and manage your todos. Note that if you are using the test email address (`onboarding@resend.dev`) to send emails, you won't be able to sign in from other email accounts.

## Conclusion

In this guide, we demonstrated how to set up a Next.js application with Auth.js for magic link authentication, using Neon Postgres as the database backend for both authentication and application data. We implemented a simple todo list feature to showcase how authenticated users can interact with the application.

Next, we can add more routes and features to the application. The `auth` method can be used in the Next.js API routes or middleware to protect endpoints that require authentication. To view and manage the users who authenticated with your application, you can query the `users` table of your Neon project. Similarly, all the generated magic link tokens are logged in the `verification_token` table, making it easy to audit and revoke access to your application.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Authentication flow with Auth.js](https://github.com/neondatabase/examples/tree/main/auth/with-authjs-next): Authenticate users of your Neon application with Auth.js

## Resources

For more information about the tools and libraries used in this guide, refer to the following documentation:

- [Neon Documentation](https://neon.com/docs)
- [Auth.js Documentation](https://authjs.dev/)
- [Next.js Documentation](https://nextjs.org/docs)
- [Resend Documentation](https://resend.com/docs)

---

# Source: https://neon.com/llms/guides-auth-clerk.txt

# Authenticate Neon Postgres application users with Clerk

> The document details the process of integrating Clerk for authenticating users in Neon Postgres applications, outlining steps for configuring authentication and managing user sessions within the Neon environment.

## Source

- [Authenticate Neon Postgres application users with Clerk HTML](https://neon.com/docs/guides/auth-clerk): The original HTML version of this documentation

User authentication is a critical requirement for web applications. Modern applications require advanced features like social login and multi-factor authentication besides the regular login flow. Additionally, managing personally identifiable information (PII) requires a secure solution compliant with data protection regulations.

**Coming soon**: Looking to manage **authorization** along with authentication? Currently in Early Access for select users, [Neon RLS](https://neon.com/docs/guides/neon-authorize) brings JSON Web Token (JWT) authorization directly to Postgres, where you can use Row-level Security (RLS) policies to manage access at the database level.

[Clerk](https://clerk.com/) is a user authentication and identity management platform that provides these features out of the box. It comes with adapters for popular web frameworks, making it easy to integrate with an application backed by a Neon Postgres database.

In this guide, we'll walk through setting up a simple Next.js application using Neon Postgres as the database, and add user authentication using [Clerk](https://clerk.com/). We will go over how to:

- Set up a Next.js project with Clerk for authentication
- Create a Neon Postgres database and connect it to your application
- Define a database schema using Drizzle ORM and generate migrations
- Store and retrieve user data associated with Clerk user IDs

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A [Clerk](https://clerk.com/) account for user authentication. Clerk provides a free plan that you can use to get started.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally.

## Initialize your Next.js project

We will create a simple web app that lets you add a favorite quote to the home page, and edit it afterward.
Run the following command in your terminal to create a new `Next.js` project: ```bash npx create-next-app guide-neon-next-clerk --typescript --eslint --tailwind --use-npm --no-src-dir --app --import-alias "@/*" ``` Now, navigate to the project directory and install the required dependencies: ```bash npm install @neondatabase/serverless drizzle-orm npm install -D drizzle-kit dotenv npm install @clerk/nextjs ``` We use the `@neondatabase/serverless` package as the Postgres client, and `drizzle-orm`, a lightweight typescript ORM, to interact with the database. `@clerk/nextjs` is the Clerk SDK for Next.js applications. We also use `dotenv` to manage environment variables and the `drizzle-kit` CLI tool for generating database migrations. Also, add a `.env` file to the root of your project, which we'll use to store Neon/Clerk connection parameters: ```bash touch .env ``` Make sure to add an entry for `.env` to your `.gitignore` file, so that it's not committed to your repository. ## Setting up your Neon database ### Initialize a new project 1. Log in to the Neon console and navigate to the [Projects](https://console.neon.tech/app/projects) section. 2. Select an existing project or click the **New Project** button to create a new one. 3. Choose the desired region and Postgres version for your project, then click **Create Project**. ### Retrieve your Neon database connection string You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Add this connection string to the `.env` file in your Next.js project. ```bash # .env DATABASE_URL=NEON_DB_CONNECTION_STRING ``` ## Configuring Clerk for authentication ### Create a Clerk application 1. Log in to the [Clerk Dashboard](https://dashboard.clerk.com/). Select `Create Application` to create a new app. 2. In the dialog that appears, provide a name for your application and a few sign-in options. For this tutorial, we'll use `Email`, `Google` and `GitHub` as allowed sign-in methods. ### Retrieve your API keys From the `Configure` tab, click on **API Keys** to find your API keys, needed to authenticate your application with Clerk. Select the `Next.js` option to get them as environment variables for your Next.js project. It should look similar to this: ```bash NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=************** CLERK_SECRET_KEY=************** ``` Add these variables to the `.env` file in your Next.js project. ## Implementing the application ### Define your database connection and schema Create a `db` folder inside the `app/` directory. This is where we'll define the database schema and connection code. Now, add the file `app/db/index.ts` with the following content: ```typescript /// app/db/index.ts import { neon } from '@neondatabase/serverless'; import { drizzle } from 'drizzle-orm/neon-http'; import { UserMessages } from './schema'; if (!process.env.DATABASE_URL) { throw new Error('DATABASE_URL must be a Neon postgres connection string'); } const sql = neon(process.env.DATABASE_URL); export const db = drizzle(sql, { schema: { UserMessages }, }); ``` This exports a `db` instance that we can use to execute queries against the Neon database. 
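As a quick, hypothetical illustration of how this instance gets used (the script and file path are ours; the `UserMessages` table is defined in the next step), you could run a one-off query like this:

```typescript
// scripts/check-db.ts — hypothetical one-off script, not part of the guide
import 'dotenv/config'; // loads DATABASE_URL from .env
import { db } from '../app/db';
import { UserMessages } from '../app/db/schema';

async function main() {
  // Returns [] until the app inserts rows
  const messages = await db.select().from(UserMessages);
  console.log(messages);
}

main();
```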
Next, create a `schema.ts` file inside the `app/db` directory to define the database schema:

```typescript
/// app/db/schema.ts
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';

export const UserMessages = pgTable('user_messages', {
  user_id: text('user_id').primaryKey().notNull(),
  createTs: timestamp('create_ts').defaultNow().notNull(),
  message: text('message').notNull(),
});
```

This schema defines a table `user_messages` to store a message for each user, with the `user_id` provided by Clerk as the primary key.

### Generate and run migrations

We'll use the `drizzle-kit` CLI tool to generate migrations for the schema we defined. To configure how it connects to the database, add a `drizzle.config.ts` file at the project root.

```typescript
// drizzle.config.ts
import { defineConfig } from 'drizzle-kit';

if (!process.env.DATABASE_URL) throw new Error('DATABASE_URL not found in environment');

export default defineConfig({
  dialect: 'postgresql',
  schema: './app/db/schema.ts',
  dbCredentials: { url: process.env.DATABASE_URL! },
  out: './drizzle',
});
```

Now, generate the migration files by running the following command:

```bash
npx drizzle-kit generate
```

This will create a `drizzle` folder at the project root with the migration files. To apply the migration to the database, run:

```bash
npx drizzle-kit push
```

The `user_messages` table will now be visible in the Neon console.

### Add authentication middleware

The Clerk SDK handles user authentication and session management for us. Create a new file `middleware.ts` in the root directory so all the app routes are protected by Clerk's authentication:

```typescript
/// middleware.ts
import { clerkMiddleware } from '@clerk/nextjs/server';

export default clerkMiddleware();

export const config = {
  matcher: [
    // Skip Next.js internals and all static files, unless found in search params
    '/((?!_next|[^?]*\\.(?:html?|css|js(?!on)|jpe?g|webp|png|gif|svg|ttf|woff2?|ico|csv|docx?|xlsx?|zip|webmanifest)).*)',
    // Always run for API routes
    '/(api|trpc)(.*)',
  ],
};
```

Next, we wrap the full application with the `ClerkProvider` component, so all pages have access to the current session and user context. Replace the contents of the `app/layout.tsx` file with the following:

```tsx
import type { Metadata } from 'next';
import { Inter } from 'next/font/google';
import './globals.css';
import { ClerkProvider, UserButton } from '@clerk/nextjs';

const inter = Inter({ subsets: ['latin'] });

export const metadata: Metadata = {
  title: 'Neon-Next-Clerk guide',
  description: 'Generated by create next app',
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <ClerkProvider>
      <html lang="en">
        <body className={inter.className}>
          <header>
            <UserButton showName />
          </header>
          {children}
        </body>
      </html>
    </ClerkProvider>
  );
}
```

This also adds a `UserButton` component to the layout, which displays the user's name and avatar when logged in.

### Add interactivity to the application

Our application has a single page that lets the logged-in user store their favorite quote and displays it. We implement `Next.js` server actions to handle the form submission and database interaction.

Create a new file at `app/actions.ts` with the following content:

```typescript
'use server';

import { currentUser } from '@clerk/nextjs/server';
import { UserMessages } from './db/schema';
import { db } from './db';
import { redirect } from 'next/navigation';
import { eq } from 'drizzle-orm';

export async function createUserMessage(formData: FormData) {
  const user = await currentUser();
  if (!user) throw new Error('User not found');

  const message = formData.get('message') as string;
  await db.insert(UserMessages).values({
    user_id: user.id,
    message,
  });

  redirect('/');
}

export async function deleteUserMessage() {
  const user = await currentUser();
  if (!user) throw new Error('User not found');

  await db.delete(UserMessages).where(eq(UserMessages.user_id, user.id));
  redirect('/');
}
```

The `createUserMessage` function inserts a new message into the `user_messages` table, while `deleteUserMessage` removes the message associated with the current user.

Next, we implement a minimal UI to interact with these functions. Replace the contents of the `app/page.tsx` file with the following:

```tsx
import { createUserMessage, deleteUserMessage } from './actions';
import { db } from './db';
import { currentUser } from '@clerk/nextjs/server';

async function getUserMessage() {
  const user = await currentUser();
  if (!user) throw new Error('User not found');

  return db.query.UserMessages.findFirst({
    where: (messages, { eq }) => eq(messages.user_id, user.id),
  });
}

export default async function Home() {
  const existingMessage = await getUserMessage();

  const ui = existingMessage ? (

    <div>
      <p>{existingMessage.message}</p>
      <form action={deleteUserMessage}>
        <button type="submit">Delete quote</button>
      </form>
    </div>
  ) : (
    <form action={createUserMessage}>
      <input type="text" name="message" placeholder="Your favorite quote" required />
      <button type="submit">Save</button>
    </form>
  );

  return (
    <main>
      <h2>{existingMessage ? 'Your quote is wonderful...' : 'Save an inspiring quote for yourself...'}</h2>
      {ui}
    </main>
  );
}
```

This implements a form with a single text field that lets the user enter a quote and submit it; the quote is then stored in the database, associated with their Clerk user ID. If a quote is already stored, the page displays it and provides a button to delete it.

The `currentUser` function from `@clerk/nextjs/server` provides the current user's information, which we use to interact with the database on their behalf.

## Running the application

To start the application, run the following command:

```bash
npm run dev
```

This will start the Next.js development server. Open your browser and navigate to `http://localhost:3000` to see the application in action.

When running for the first time, you'll be prompted to sign in with Clerk. Once authenticated, you'll be able to visit the home page, add a quote, and see it displayed.

## Conclusion

In this guide, we walked through setting up a simple Next.js application with user authentication using Clerk and a Neon Postgres database. We defined a database schema using Drizzle ORM, generated migrations, and interacted with the database to store and retrieve user data.

Next, we can add more routes and features to the application. The Clerk middleware ensures that only authenticated users can access any app routes, and the `ClerkProvider` component provides the user context to each of them. To view and manage the users who authenticated with your application, you can navigate to the [Clerk Dashboard](https://dashboard.clerk.com/).

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Authentication flow with Clerk](https://github.com/neondatabase/guide-neon-next-clerk): Authenticate users of your Neon application with Clerk

## Resources

For more information on the tools used in this guide, refer to the following documentation:

- [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver)
- [Drizzle ORM](https://orm.drizzle.team/)
- [Clerk Authentication](https://clerk.com/)
- [Next.js Documentation](https://nextjs.org/docs)

---

# Source: https://neon.com/llms/guides-auth-okta.txt

# Authenticate Neon Postgres application users with Okta

> This document guides Neon users on configuring authentication for Postgres applications using Okta, detailing the steps to integrate Okta's identity management with Neon's database services.

## Source

- [Authenticate Neon Postgres application users with Okta HTML](https://neon.com/docs/guides/auth-okta): The original HTML version of this documentation

User authentication is critical for web applications, especially for apps internal to an organization. [Okta Workforce Identity Cloud](https://www.okta.com/workforce-identity/) is an identity and access management platform for organizations that provides authentication, authorization, and user management capabilities.

In this guide, we'll walk through building a simple Next.js application using [Neon's](https://neon.tech) Postgres database and adding user authentication to it using [Okta](https://www.okta.com/). We will cover how to:

- Set up a Next.js project with Okta for authentication
- Create a Neon Postgres database and connect it to your application
- Define a database schema using Drizzle ORM and generate migrations
- Store and retrieve user data associated with Okta user IDs

**Note**: Okta provides a different solution called [Customer Identity Cloud](https://www.okta.com/customer-identity/), powered by `Auth0`, to authenticate external customers for SaaS applications.
This guide focuses on the [Workforce Identity Cloud](https://www.okta.com/workforce-identity/) for internal applications. For an example guide using `Auth0`, refer to our [Auth0](https://neon.com/docs/guides/auth-auth0) guide.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- An [Okta](https://developer.okta.com/) administrator account for user authentication. Okta provides a free trial that you can use to set one up for your organization.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally.

## Initialize your Next.js project

We will create a simple web app that lets you add a favorite quote to the home page and edit it afterwards.

Run the following command in your terminal to create a new `Next.js` project:

```bash
npx create-next-app guide-neon-next-okta --typescript --eslint --tailwind --use-npm --no-src-dir --app --import-alias "@/*"
```

Now, navigate to the project directory and install the required dependencies:

```bash
npm install @neondatabase/serverless drizzle-orm
npm install -D drizzle-kit dotenv
npm install next-auth@beta
```

We use the `@neondatabase/serverless` package as the Postgres client and `drizzle-orm`, a lightweight TypeScript ORM, to interact with the database. We also use `dotenv` to manage environment variables and the `drizzle-kit` CLI tool to generate database migrations.

For authentication, we'll use the `Auth.js` library (v5 of the `next-auth` package, currently published as `next-auth@beta`), which provides a simple way to add authentication to Next.js applications. It comes with built-in support for Okta.

Also, add a `.env.local` file to the root of your project, which we'll use to store the Neon/Okta connection parameters:

```bash
touch .env.local
```

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select an existing project or click the **New Project** button to create a new one.
3. Choose the desired region and Postgres version for your project, then click **Create Project**.

### Retrieve your Neon database connection string

You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Add this connection string to the `.env.local` file in your Next.js project.

```bash
# .env.local
DATABASE_URL=NEON_DB_CONNECTION_STRING
```

## Configuring Okta for authentication

### Create an Okta application

1. Log in to your Okta developer account and navigate to the **Applications** section. Click the **Create App Integration** button.
2. Select **OIDC - OpenID Connect** as the sign-in method.
3. Select **Web Application** as the application type and click **Next**.
4. Provide a name for your application, e.g., "Neon Next Guide".
5. Set **Sign-in redirect URIs** to `http://localhost:3000/api/auth/callback/okta` and **Sign-out redirect URIs** to `http://localhost:3000`.
6. Click **Save** to create the application.

### Retrieve your Okta configuration

From the application's **General** tab, find the **Client ID** and **Client secret**.
Also note your Okta **Issuer URI**, which is the first part of your Okta account's URL, e.g., `https://dev-12345.okta.com`. If it isn't clear, visit the **Security > API** section from the sidebar in the console to find the **Issuer URI**, and remove `/oauth2/default` from the end.

Add these as environment variables to the `.env.local` file in your Next.js project:

```bash
# .env.local
AUTH_OKTA_ISSUER=YOUR_OKTA_ISSUER
AUTH_OKTA_ID=YOUR_CLIENT_ID
AUTH_OKTA_SECRET=YOUR_CLIENT_SECRET
AUTH_SECRET=YOUR_SECRET
```

The last variable, `AUTH_SECRET`, is a random string used by `Auth.js` to encrypt tokens. Run the following command to generate one and add it to your `.env.local` file:

```bash
npx auth secret
```

**Note**: If you set up an Okta organization account specifically for this guide, you might need to assign yourself to the created Okta application to test the authentication flow. Visit **Applications > Applications** from the sidebar and select the application you created. In the **Assignments** tab, click **Assign** and select your own user account.

## Implementing the application

### Define database connection and schema

Create a `db` folder inside the `app/` directory. This is where we'll define the database schema and connection code.

Now, add the file `app/db/index.ts` with the following content:

```typescript
/// app/db/index.ts
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';
import { UserMessages } from './schema';

if (!process.env.DATABASE_URL) {
  throw new Error('DATABASE_URL must be a Neon postgres connection string');
}

const sql = neon(process.env.DATABASE_URL);
export const db = drizzle(sql, {
  schema: { UserMessages },
});
```

This exports a `db` instance that we can use to execute queries against the Neon database.

Next, create a `schema.ts` file inside the `app/db` directory to define the database schema:

```typescript
/// app/db/schema.ts
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';

export const UserMessages = pgTable('user_messages', {
  user_id: text('user_id').primaryKey().notNull(),
  createTs: timestamp('create_ts').defaultNow().notNull(),
  message: text('message').notNull(),
});
```

This schema defines a table `user_messages` to store a message for each user, with the `user_id` provided by Okta as the primary key.

### Generate and run migrations

We'll use the `drizzle-kit` CLI tool to generate migrations for the schema we defined. To configure how it connects to the database, add a `drizzle.config.ts` file at the project root.

```typescript
/// drizzle.config.ts
import type { Config } from 'drizzle-kit';
import * as dotenv from 'dotenv';
dotenv.config({ path: '.env.local' });

if (!process.env.DATABASE_URL) throw new Error('DATABASE_URL not found in environment');

export default {
  schema: './app/db/schema.ts',
  out: './drizzle',
  driver: 'pg',
  dbCredentials: {
    connectionString: process.env.DATABASE_URL,
  },
  strict: true,
} satisfies Config;
```

Now, generate the migration files by running the following command:

```bash
npx drizzle-kit generate:pg
```

This will create a `drizzle` folder at the project root with the migration files. To apply the migration to the database, run:

```bash
npx drizzle-kit push:pg
```

The `user_messages` table will now be visible in the Neon console.
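To double-check that the migration went through, you can run a quick standalone script against the `db` instance (a hypothetical helper, not part of the guide's code; it loads `.env.local` before importing the database module so that `DATABASE_URL` is set):

```typescript
// check-schema.ts (hypothetical one-off script)
import * as dotenv from 'dotenv';
dotenv.config({ path: '.env.local' });

async function main() {
  // Import after dotenv.config() so app/db/index.ts sees DATABASE_URL
  const { db } = await import('./app/db');

  // An empty array means the table exists with no rows yet;
  // a missing table would instead throw a "relation does not exist" error
  console.log(await db.query.UserMessages.findMany());
}

main();
```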
### Configure Okta authentication

Create a new file `auth.ts` in the root directory of the project and add the following content:

```typescript
import NextAuth from 'next-auth';
import Okta from 'next-auth/providers/okta';

export const { handlers, signIn, signOut, auth } = NextAuth({
  providers: [Okta],
  callbacks: {
    async session({ session, token }) {
      session.user.id = token.sub as string;
      return session;
    },
  },
});
```

This file initializes `Auth.js` with Okta as the authentication provider. It also defines a callback that sets the `sub` claim from the Okta token as the session user ID.

### Implement authentication routes

Create a new dynamic route at `app/api/auth/[...nextauth]/route.ts` with the following content:

```tsx
/// app/api/auth/[...nextauth]/route.ts
import { handlers } from '@/auth';
export const { GET, POST } = handlers;
```

This route file imports the authentication handlers from the `auth.ts` file; they handle all auth-related requests — sign-in, sign-out, and the redirect after authentication.

The `auth` helper exported from `./auth.ts` is the universal way to interact with the authentication state in the application. For example, we add a **User information** bar to the app layout that shows the current user's name and provides a sign-out link. Replace the contents of the `app/layout.tsx` file with the following:

```tsx
import type { Metadata } from 'next';
import { Inter } from 'next/font/google';
import './globals.css';
import { auth } from '@/auth';

const inter = Inter({ subsets: ['latin'] });

export const metadata: Metadata = {
  title: 'Create Next App',
  description: 'Generated by create next app',
};

async function UserInfoBar() {
  const session = await auth();
  if (!session) {
    return null;
  }

  return (
    <div>
      <span>Welcome, {session.user?.name}!</span>{' '}
      <a href="/api/auth/signout">Sign out</a>
    </div>
  );
}

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={inter.className}>
        <UserInfoBar />
        {children}
      </body>
    </html>
  );
}
```

### Add interactivity to the application

Our application has a single page that lets the logged-in user store their favorite quote and display it. We implement `Next.js` server actions to handle the form submission and database interaction.

Create a new file at `app/actions.ts` with the following content:

```typescript
/// app/actions.ts
'use server';

import { auth } from '@/auth';
import { UserMessages } from './db/schema';
import { db } from './db';
import { redirect } from 'next/navigation';
import { eq } from 'drizzle-orm';

export async function createUserMessage(formData: FormData) {
  const session = await auth();
  if (!session) throw new Error('User not authenticated');

  const message = formData.get('message') as string;
  await db.insert(UserMessages).values({
    user_id: session.user?.id as string,
    message,
  });

  redirect('/');
}

export async function deleteUserMessage() {
  const session = await auth();
  if (!session) throw new Error('User not authenticated');

  await db.delete(UserMessages).where(eq(UserMessages.user_id, session.user?.id as string));
  redirect('/');
}
```

The `createUserMessage` function inserts a new message into the `user_messages` table, while `deleteUserMessage` removes the message associated with the current user.

Next, we implement a minimal UI to interact with these functions. Replace the contents of the `app/page.tsx` file with the following:

```tsx
/// app/page.tsx
import { createUserMessage, deleteUserMessage } from './actions';
import { db } from './db';
import { auth } from '@/auth';

async function getUserMessage() {
  const session = await auth();
  if (!session) return null;

  return db.query.UserMessages.findFirst({
    where: (messages, { eq }) => eq(messages.user_id, session.user?.id as string),
  });
}

function LoginBox() {
  return (
    <main>
      <a href="/api/auth/signin">Log in</a>
    </main>
  );
}

export default async function Home() {
  const session = await auth();
  const existingMessage = await getUserMessage();

  if (!session) {
    return <LoginBox />;
  }

  const ui = existingMessage ? (

    <div>
      <p>{existingMessage.message}</p>
      <form action={deleteUserMessage}>
        <button type="submit">Delete quote</button>
      </form>
    </div>
  ) : (
    <form action={createUserMessage}>
      <input type="text" name="message" placeholder="Your favorite quote" required />
      <button type="submit">Save</button>
    </form>
  );

  return (
    <main>
      <h2>{existingMessage ? 'Your quote is wonderful...' : 'Save an inspiring quote for yourself...'}</h2>
      {ui}
    </main>
  );
}
```

This code implements a form with a single text field that lets the user enter a quote and submit it; the quote is then stored in the database, associated with the user's Okta user ID. If a quote is already stored, the page displays it and provides a button to delete it.

The `user.id` property set on the session object provides the current user's ID, which we use to interact with the database on their behalf. If the user is not authenticated, the page displays a login button instead.

## Running the application

To start the application, run the following command:

```bash
npm run dev
```

This will start the Next.js development server. Open your browser and navigate to `http://localhost:3000` to see the application in action.

When running for the first time, you'll see a **Log in** button that redirects you to the Auth.js sign-in page, where you'll be prompted to sign in with Okta. Once authenticated, you'll be able to visit the home page, add a quote, and see it displayed.

## Conclusion

In this guide, we walked through setting up a simple Next.js application with user authentication using Okta and a Neon Postgres database. We defined a database schema using Drizzle ORM, generated migrations, and interacted with the database to store and retrieve user data.

Next, we can add more routes and features to the application. The `auth` method can be used in Next.js API routes or middleware to protect endpoints that require authentication. To view and manage the users who authenticated with your application, navigate to your Okta admin console and view the **Directory > People** section in the sidebar.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Authentication flow with Okta](https://github.com/neondatabase/guide-neon-next-okta): Authenticate Neon application users with Okta

## Resources

For more information on the tools used in this guide, refer to the following documentation:

- [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver)
- [Drizzle ORM](https://orm.drizzle.team/)
- [Next.js Documentation](https://nextjs.org/docs)
- [Auth.js Documentation](https://authjs.dev/getting-started/installation)

---

# Source: https://neon.com/llms/guides-autoscaling-algorithm.txt

# Understanding Neon's autoscaling algorithm

> The document explains Neon's autoscaling algorithm, detailing how it dynamically adjusts resources based on workload demands to optimize performance and efficiency within the Neon database environment.

## Source

- [Understanding Neon's autoscaling algorithm HTML](https://neon.com/docs/guides/autoscaling-algorithm): The original HTML version of this documentation

What you will learn:

- Key metrics that drive autoscaling decisions
- How often the algorithm checks these metrics

Related topics:

- [Introduction to autoscaling](https://neon.com/docs/introduction/autoscaling)
- [Enabling autoscaling](https://neon.com/docs/guides/autoscaling-guide)

The key concept behind autoscaling is that compute resizing happens _automatically_ — once you set up your minimum and maximum [compute sizes](https://neon.com/docs/manage/endpoints#how-to-size-your-compute), there's no action required on your part other than [monitoring](https://neon.com/docs/introduction/monitoring-page) your usage metrics to see if adjustments are needed. That said, it can be helpful to understand exactly when and under what circumstances the algorithm optimizes your database on two key fronts — **performance** and **efficiency**.
In a nutshell, the algorithm automatically **scales up** your compute to ensure optimal performance and **scales down** to maximize efficiency.

## How the algorithm works

Neon's autoscaling algorithm uses two components, the [vm-monitor](https://neon.com/docs/reference/glossary#vm-monitor) and the [autoscaler-agent](https://neon.com/docs/reference/glossary#autoscaler-agent), to continuously monitor three key metrics: your average CPU load, your memory usage, and the activity of your [Local File Cache (LFC)](https://neon.com/docs/reference/glossary#local-file-cache). These metrics determine how your compute resources — the virtual machine that powers your database — should be scaled to maintain performance and efficiency.

### The formula

In essence, the algorithm is built on **goals**. We set a goal (an ideal compute size) for each of the three key metrics:

- **`cpuGoalCU`** — Keep the 1-minute average CPU load at or below 90% of the available CPU capacity.
- **`memGoalCU`** — Keep memory usage at or below 75% of the total allocated RAM.
- **`lfcGoalCU`** — Fit your frequently accessed working set within 75% of the compute's RAM allocated to the LFC.

The formula can be expressed as:

```
goalCU := max(cpuGoalCU, memGoalCU, lfcGoalCU)
```

The algorithm selects the highest value from these goals as the overall `goalCU`, ensuring your database has enough resources to handle the most demanding metric — while staying within the minimum and maximum limits you've set.

### The metrics

Let's go into a bit more detail about each metric.

#### CPU load average

The CPU load average is a measure of how much work your CPU is handling. Every 5 seconds, the autoscaler-agent checks the 1-minute load average from the virtual machine (VM) running your database. This load average reflects the average number of processes waiting to be executed by the vCPU over the previous minute.

The goal is to keep the CPU load at or below 90% of the available vCPU capacity. If the load exceeds this threshold, the algorithm increases the compute allocated to your database to handle the additional demand. In simpler terms, if your database is working too hard, the algorithm adds more CPU power to keep things running smoothly.

#### Memory usage

Memory usage refers to the amount of RAM your database and its related processes are using. Every 5 seconds, the autoscaler-agent checks for the latest memory metrics from inside the VM, and every 100ms the vm-monitor checks memory usage from Postgres.

The algorithm aims to keep overall memory usage at or below 75% of the total allocated memory. If your database starts using more memory than this threshold, the algorithm increases compute size to allocate more memory, making sure your database has enough RAM to perform well without over-provisioning.

#### Local File Cache (LFC) working set size

An important part of the scaling algorithm is estimating your current working set size — a subset of your most frequently accessed data — and scaling your compute to ensure it fits within the LFC. Every 20 seconds, the autoscaler-agent checks the working set size across a variety of time windows, ranging from 1 to 60 minutes.

The goal is to fit your working set within 75% of the compute's RAM allocated to the LFC. If your working set exceeds this threshold, the algorithm increases compute size to expand the LFC, keeping frequently accessed data in memory for faster access.
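Putting the goals together, the selection step itself is simple. The following TypeScript sketch illustrates the logic described above; it is illustrative only, the inputs and helper names are simplified stand-ins, and it is not Neon's actual implementation:

```typescript
// Illustrative sketch of the goal selection described above (not Neon's real code).
// One CU (compute unit) here means 1 vCPU with 4 GB of RAM.
function goalCU(opts: {
  loadAvg1m: number; // 1-minute CPU load average from the VM
  memUsedGB: number; // current memory usage in GB
  workingSetGB: number; // estimated LFC working set size in GB
  minCU: number; // user-configured minimum compute size
  maxCU: number; // user-configured maximum compute size
}): number {
  const RAM_PER_CU_GB = 4;

  // cpuGoalCU: keep the load average at or below 90% of available vCPUs
  const cpuGoal = opts.loadAvg1m / 0.9;

  // memGoalCU: keep memory usage at or below 75% of allocated RAM
  const memGoal = opts.memUsedGB / (0.75 * RAM_PER_CU_GB);

  // lfcGoalCU: fit the working set within 75% of the RAM available to the LFC
  const lfcGoal = opts.workingSetGB / (0.75 * RAM_PER_CU_GB);

  // goalCU := max(cpuGoalCU, memGoalCU, lfcGoalCU), clamped to the configured range
  const goal = Math.max(cpuGoal, memGoal, lfcGoal);
  return Math.min(opts.maxCU, Math.max(opts.minCU, goal));
}

// Example: a 12 GB working set dominates modest CPU and memory demand,
// so the goal resolves to 4 CU (within a 0.25 to 8 CU range).
console.log(goalCU({ loadAvg1m: 1.2, memUsedGB: 3, workingSetGB: 12, minCU: 0.25, maxCU: 8 }));
```

In practice the result is also mapped to one of the discrete compute sizes Neon offers, but the max-of-goals idea is the core of the algorithm.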
To learn more about how Neon estimates the working set size, see [Dynamically estimating and scaling Postgres' working set size](https://neon.com/blog/dynamically-estimating-and-scaling-postgres-working-set-size).

**Note**: If your dataset is small enough, you can improve performance by keeping the entire dataset in memory. Check your database size on the Monitoring [dashboard](https://neon.com/docs/introduction/monitoring-page#database-size) and adjust your minimum compute size accordingly. For example, a 6.4 GB database can comfortably fit within a compute size of 2 vCPU with 8 GB of RAM (where the LFC can use up to 75% of the available RAM).

## How often the metrics are polled

To give you a sense of the algorithm's responsiveness, here's a summary of how often the metrics are polled:

- **Every 5 seconds** → the autoscaler-agent fetches load metrics from the VM, including CPU usage and overall memory usage.
- **Every 20 seconds** → the autoscaler-agent checks the Local File Cache (LFC) metrics, including the working set size across various time windows: 1 minute, 2 minutes, up to 60 minutes.
- **Every 100 milliseconds** → the vm-monitor checks memory usage specifically within Postgres.

This frequent polling allows the algorithm to respond swiftly to changes in workload, ensuring that your compute resources are always appropriately scaled to meet current demands.

---

# Source: https://neon.com/llms/guides-autoscaling-guide.txt

# Enable Autoscaling in Neon

> The document "Enable Autoscaling in Neon" guides users through configuring and managing autoscaling settings for their Neon databases, ensuring optimal resource allocation and performance.

## Source

- [Enable Autoscaling in Neon HTML](https://neon.com/docs/guides/autoscaling-guide): The original HTML version of this documentation

What you will learn:

- Enable autoscaling for a compute
- Configure autoscaling defaults for your project

Related topics:

- [About autoscaling](https://neon.com/docs/introduction/autoscaling)
- [How the algorithm works](https://neon.com/docs/guides/autoscaling-algorithm)

This guide demonstrates how to enable autoscaling in your Neon project and how to [visualize](https://neon.com/docs/guides/autoscaling-guide#monitor-autoscaling) your usage.

**Tip** Did you know?: Neon's autoscaling feature instantly scales your compute and memory resources. **No manual intervention or restarts are required.**

## Enable autoscaling for a compute

You can edit an individual compute to alter the compute configuration, which includes autoscaling.

To edit a compute:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. On the **Computes** tab, identify the compute you want to configure and click **Edit**.
1. On the **Edit compute** drawer, select **Autoscale** and use the slider to specify a minimum and maximum compute size. Neon scales the compute size up and down within the specified range to meet workload demand. Autoscaling currently supports a range of 1/4 (.25) to 16 vCPUs. One vCPU has 4 GB of RAM, 2 vCPUs have 8 GB of RAM, and so on. The amount of RAM in GB is always 4 times the number of vCPUs. For an overview of available compute sizes, see [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration). Please note that when the autoscaling maximum is > 10, the autoscaling minimum must be ≥ (max / 8).

   **Note**: You can configure the scale to zero setting for your compute at the same time. For more, see [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero).
1. Click **Save**.

## Configure autoscaling defaults for your project

You can configure autoscaling defaults for your project so that **newly created computes** (including those created when you create a new branch or add a read replica) are created with the same autoscaling configuration. This saves you from having to configure autoscaling settings with each new compute. See [Change your project's default compute settings](https://neon.com/docs/manage/projects#change-your-projects-default-compute-settings) for more detail.

**Note**: Changing your autoscaling default settings does not alter the autoscaling configuration for existing computes.

To configure autoscaling defaults:

1. Navigate to your Project Dashboard and select **Settings** from the sidebar.
2. Select **Compute**.
3. Select **Change** to open the **Change default compute settings** modal.
4. Use the slider to specify a minimum and maximum compute size and **Save** your changes.

The next time you create a compute, these settings will be applied to it.

### Autoscaling defaults for each Neon plan

The following table outlines the initial default autoscaling settings for newly created projects on each Neon plan.

| **Neon plan** | **Minimum compute size** | **Maximum compute size** |
| ------------- | ------------------------ | ------------------------ |
| Free          | 0.25                     | 2                        |
| Launch        | 1                        | 4                        |
| Scale         | 1                        | 8                        |
| Business      | 1                        | 8                        |

## Monitor autoscaling

From the Neon Console, you can view how your vCPU and RAM usage have scaled for the past 24 hours. On the **Project Dashboard** page, navigate down the page to the **Monitoring** section.

Some key points about this Autoscaling graph:

- **Allocated** refers to the vCPU and memory size provisioned to handle current demand; autoscaling automatically adjusts this allocation, increasing or decreasing the allocated vCPU and memory size in a step-wise fashion as demand fluctuates, within your minimum and maximum limits.
- **vCPU usage** is represented by the green line.
- **RAM usage** is represented by the blue line.
- A re-activated compute scales up immediately to your minimum allocation, ensuring adequate performance for your anticipated demand.

Place your cursor anywhere in the graph to get more usage detail about that particular point in time. See below for some rules of thumb on actions you might want to take based on trends you see in this view.

### Start with a good minimum

Ideally, for smaller datasets, you want to keep as much of your dataset in memory (RAM) as possible. This improves performance by minimizing I/O operations. We recommend setting a large enough minimum limit to fit your full dataset in memory. For larger datasets and more sizing advice, see [how to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).

### Setting your maximum

If your autoscaling graphs show regular spikes that hit your maximum setting, consider increasing your maximum. However, because these spikes plateau at the maximum setting, it can be difficult to determine your actual demand. Another approach is to set a higher threshold than you need and monitor usage spikes to get a sense of where your typical maximum demand reaches; you can then throttle the maximum setting down closer to anticipated/historical demand. Either way, with autoscaling you only use what's necessary; a higher setting does not translate to increased usage unless there's demand for it.
### The neon_utils extension

Another tool for understanding usage is the `neon_utils` extension, which provides a `num_cpus()` function you can use to monitor how the _Autoscaling_ feature allocates compute resources in response to workload. For more information, see [The neon_utils extension](https://neon.com/docs/extensions/neon-utils).

---

# Source: https://neon.com/llms/guides-aws-lambda.txt

# Connect from AWS Lambda

> The document outlines the steps for connecting an AWS Lambda function to a Neon database, detailing the necessary configurations and code examples for seamless integration.

## Source

- [Connect from AWS Lambda HTML](https://neon.com/docs/guides/aws-lambda): The original HTML version of this documentation

AWS Lambda is a serverless, event-driven compute service that allows you to run code without provisioning or managing servers. It is a convenient and cost-effective solution for running various types of workloads, including those that require a database.

This guide describes how to set up a Neon database and connect to it from an AWS Lambda function using Node.js as the runtime environment. It covers:

- Creating a Lambda function using the [Serverless Framework](https://www.serverless.com/), which is a serverless application lifecycle management framework.
- Connecting your Lambda function to a Neon database.
- Deploying the Lambda function to AWS.

## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://neon.com/docs/get-started/signing-up/) for instructions.
- An AWS account. You can create a free AWS account at [AWS Free plan](https://aws.amazon.com/free/). An [IAM User and Access Key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) are required to programmatically interact with your AWS account. You must provide these credentials when deploying the Serverless Framework project.
- A Serverless Framework account. You can sign up at [Serverless Framework](https://www.serverless.com/).

## Create a Neon project

If you do not have one already, create a Neon project:

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a table in Neon

To create a table, navigate to the **SQL Editor** in the [Neon Console](https://console.neon.tech/):

In the SQL Editor, run the following queries to create a `users` table and insert some data:

```sql
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

INSERT INTO users (name, email)
VALUES
  ('Alice', 'alice@example.com'),
  ('Bob', 'bob@example.com'),
  ('Charlie', 'charlie@example.com'),
  ('Dave', 'dave@example.com'),
  ('Eve', 'eve@example.com');
```

## Create a Lambda function

Create the Lambda function using the [Serverless Framework](https://www.serverless.com/):

1. Install the Serverless Framework by running the following command:

   ```bash
   npm install -g serverless
   ```

2. Create a `neon-lambda` project directory and navigate to it.

   ```bash
   mkdir neon-lambda
   cd neon-lambda
   ```

3. Run the **serverless** command to create a serverless project.

   ```bash
   serverless
   ```

   Follow the prompts, as demonstrated below. You will be required to provide your AWS account credentials. The process creates an `aws-node-project` directory.

   ```bash
   ? What do you want to make? AWS - Node.js - Starter
   ? What do you want to call this project? aws-node-project

   ✔ Project successfully created in aws-node-project folder
   ? Do you want to login/register to Serverless Dashboard? Yes

   Logging into the Serverless Dashboard via the browser
   If your browser does not open automatically, please open this URL: https://app.serverless.com?client=cli&transactionId=jP-Zz5A9xu67PPYqzIhOe

   ✔ You are now logged into the Serverless Dashboard

   ? What application do you want to add this to? [create a new app]
   ? What do you want to name this application? aws-node-project

   ✔ Your project is ready to be deployed to Serverless Dashboard (org: "myorg", app: "aws-node-project")

   ? No AWS credentials found, what credentials do you want to use? AWS Access Role (most secure)

   If your browser does not open automatically, please open this URL: https://app.serverless.com/myorg/settings/providers?source=cli&providerId=new&provider=aws
   To learn more about providers, visit: http://slss.io/add-providers-dashboard

   ? [If you encountered an issue when setting up a provider, you may press Enter to skip this step]

   ✔ AWS Access Role provider was successfully created

   ? Do you want to deploy now? Yes

   Deploying aws-node-project to stage dev (us-east-1, "default" provider)

   ✔ Service deployed to stack aws-node-project-dev (71s)

   dashboard: https://app.serverless.com/myorg/apps/my-aws-node-project/aws-node-project/dev/us-east-1
   functions:
     hello: aws-node-project-dev-hello (225 kB)

   What next? Run these commands in the project directory:

   serverless deploy    Deploy changes
   serverless info      View deployed endpoints and resources
   serverless invoke    Invoke deployed functions
   serverless --help    Discover more commands
   ```

4. Navigate to the `aws-node-project` directory created by the previous step and install the `node-postgres` package, which you will use to connect to the database.

   ```bash
   npm install pg
   ```

   After installing the `node-postgres` package, the following dependency should be defined in your `package.json` file:

   ```json
   {
     "dependencies": {
       "pg": "^8.13.1"
     }
   }
   ```

5. In the `aws-node-project` directory, add a `users.js` file, and add the following code to it:

   ```javascript
   'use strict';
   const { Client } = require('pg');

   // Cache the client in module scope so warm Lambda invocations
   // can reuse the existing database connection
   let client;

   module.exports.getAllUsers = async () => {
     if (!client) {
       console.log('Initializing new database client');
       client = new Client({ connectionString: process.env.DATABASE_URL });
       try {
         await client.connect();
       } catch (error) {
         console.error('Error connecting to the database:', error);
         // Reset the cached client so the next invocation retries the connection
         client = undefined;
         return {
           statusCode: 500,
           body: JSON.stringify({
             error: 'Failed to connect to the database',
           }),
         };
       }
     }

     try {
       const { rows } = await client.query('SELECT * FROM users');
       return {
         statusCode: 200,
         body: JSON.stringify({
           data: rows,
         }),
       };
     } catch (error) {
       console.error('Error executing query:', error);
       return {
         statusCode: 500,
         body: JSON.stringify({
           error: 'Failed to fetch users',
         }),
       };
     }
   };
   ```

   The code in the `users.js` file exports the `getAllUsers` function, which retrieves all rows from the `users` table and returns them as a `JSON` object in the `HTTP` response body.

   This function uses the `pg` library to connect to the Neon database. It creates a new `Client` instance and passes the database connection string, which is defined in the `DATABASE_URL` environment variable. It then calls `connect()` to establish a connection to the database. Finally, it uses the `query()` method to execute a `SELECT` statement that retrieves all rows from the `users` table. The `query()` method returns a `Promise` that resolves to an object containing the rows retrieved by the `SELECT` statement, which the function parses to retrieve the `rows` property.
   Finally, the function returns an `HTTP` response with a status code of 200 and a body that contains a `JSON` object with a single `data` property, which is set to the value of the `rows` variable.

6. Add the `DATABASE_URL` environment variable and the function definition to the `serverless.yml` file, which is located in your `aws-node-project` directory.

   **Note**: Environment variables can also be added to a `.env` file and loaded automatically with the help of the [dotenv](https://www.npmjs.com/package/dotenv) package. For more information, see [Resolution of environment variables](https://www.serverless.com/framework/docs/environment-variables).

   You can find your database connection details by clicking the **Connect** button on your **Project Dashboard**. Add the `DATABASE_URL` under `environment`, and add `sslmode=require&channel_binding=require` to the end of the connection string to enable SSL. The `sslmode=require` option tells the client to use SSL encryption, and `channel_binding=require` binds authentication to the SSL channel, protecting against man-in-the-middle attacks.

   ```yaml
   provider:
     name: aws
     runtime: nodejs20.x
     environment:
       DATABASE_URL: postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require

   functions:
     getAllUsers:
       handler: users.getAllUsers
       events:
         - httpApi:
             path: /users
             method: get
   ```

7. Deploy the serverless function using the following command:

   ```bash
   serverless deploy
   ```

   The `serverless deploy` command generates an API endpoint using [API Gateway](https://www.serverless.com/framework/docs/providers/aws/events/http-api). The output of the command appears similar to the following:

   ```bash
   Deploying aws-node-project to stage dev (us-east-1, "default" provider)

   ✔ Service deployed to stack aws-node-project-dev (60s)

   dashboard: https://app.serverless.com/myorg/apps/aws-node-project/aws-node-project/dev/us-east-1
   endpoint: GET - https://ge3onb0klj.execute-api.us-east-1.amazonaws.com/users
   functions:
     getAllUsers: aws-node-project-dev-getAllUsers (225 kB)
   ```

8. Test the generated endpoint by running a cURL command with the endpoint from the previous step. For example:

   ```bash
   curl https://ge3onb0klj.execute-api.us-east-1.amazonaws.com/users | jq
   ```

   The response returns the following data:

   ```bash
   {
     "data": [
       {
         "id": 1,
         "name": "Alice",
         "email": "alice@example.com",
         "created_at": "2023-01-10T17:46:29.353Z"
       },
       {
         "id": 2,
         "name": "Bob",
         "email": "bob@example.com",
         "created_at": "2023-01-10T17:46:29.353Z"
       },
       {
         "id": 3,
         "name": "Charlie",
         "email": "charlie@example.com",
         "created_at": "2023-01-10T17:46:29.353Z"
       },
       {
         "id": 4,
         "name": "Dave",
         "email": "dave@example.com",
         "created_at": "2023-01-10T17:46:29.353Z"
       },
       {
         "id": 5,
         "name": "Eve",
         "email": "eve@example.com",
         "created_at": "2023-01-10T17:46:29.353Z"
       }
     ]
   }
   ```

## Enabling CORS

If you make API calls to the Lambda function from your app, you will likely need to configure Cross-Origin Resource Sharing (CORS). Visit the AWS documentation for information about [how to enable CORS in API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-cors.html).

You can run the following command to enable CORS for your local development environment:

```bash
aws apigatewayv2 update-api --api-id <api-id> --cors-configuration AllowOrigins="http://localhost:3000"
```

You can find your `api-id` on the API Gateway dashboard.

## Conclusion

In this guide, you have learned how to set up a Postgres database using Neon and connect to it from an AWS Lambda function using Node.js as the runtime environment.
You have also learned how to use the Serverless Framework to create and deploy the Lambda function, and how to use the `pg` library to perform basic database read operations.

---

# Source: https://neon.com/llms/guides-aws-s3.txt

# File storage with AWS S3

> The document outlines the process for integrating AWS S3 file storage with Neon, detailing configuration steps and best practices for managing data storage within the Neon environment.

## Source

- [File storage with AWS S3 HTML](https://neon.com/docs/guides/aws-s3): The original HTML version of this documentation

[Amazon Simple Storage Service (AWS S3)](https://aws.amazon.com/s3/) is an object storage service widely used for storing and retrieving large amounts of data, such as images, videos, backups, and application assets. This guide demonstrates how to integrate AWS S3 with Neon by storing file metadata (like the object key and URL) in your Neon database, while using S3 for file storage.

## Setup steps

## Create a Neon project

1. Navigate to [pg.new](https://pg.new) to create a new Neon project.
2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Create an AWS account and S3 bucket

1. Sign up for or log in to your [AWS Account](https://aws.amazon.com/).
2. Navigate to the **S3** service in the AWS Management Console.
3. Click **Create bucket**. Provide a unique bucket name (e.g., `my-neon-app-s3-uploads`), select an AWS Region (e.g., `us-east-1`), and configure initial settings.
4. **Public Access (for this example):** For simplicity in accessing uploaded files via URL in this guide, we'll configure the bucket to allow public read access _for objects uploaded with specific permissions_. Under **Block Public Access settings for this bucket**, _uncheck_ "Block all public access". Acknowledge the warning.

   **Note** Public buckets: Making buckets or objects publicly readable carries security risks. For production applications, it's strongly recommended to: 1. Keep buckets **private** (Block all public access enabled). 2. Use **presigned URLs** not only for uploads but also for _downloads_ (temporary read access). This guide uses public access for simplicity, but you should implement secure access controls in production.

5. After the bucket is created, navigate to the **Permissions** tab. Under **Bucket Policy**, you can set up a policy to allow public read access to objects. For example:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "PublicReadGetObject",
         "Effect": "Allow",
         "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::my-neon-app-s3-uploads/*"
       }
     ]
   }
   ```

   Replace `my-neon-app-s3-uploads` with your actual bucket name.

6. **Create IAM user for programmatic access:**
   - Navigate to the **IAM** service in the AWS Console.
   - Go to **Users** and click **Add users**.
   - Enter a username (e.g., `neon-app-s3-user`). Select **Access key - Programmatic access** as the credential type. Click **Next: Permissions**.
   - Choose **Attach policies directly**. Search for and select `AmazonS3FullAccess`. Click **Next**, then **Create user**.
   - Click **Create access key**, choose **Other**, then click **Create access key**.
   - Copy the **Access key ID** and **Secret access key**. These will be used in your application to authenticate with AWS S3. A quick way to verify that the keys work is shown after this list.
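If you'd like to confirm the new keys before continuing, a one-off script like this works (a hypothetical helper, not part of the guide's application code; it assumes `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` are set in your environment):

```typescript
// check-aws-credentials.ts (hypothetical one-off script)
import { S3Client, ListBucketsCommand } from '@aws-sdk/client-s3';

// The SDK picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment
const s3 = new S3Client({ region: process.env.AWS_REGION });

async function main() {
  // AmazonS3FullAccess includes s3:ListAllMyBuckets, so this call should succeed
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  console.log('Credentials OK. Buckets:', Buckets?.map((b) => b.Name));
}

main().catch((err) => {
  console.error('Credential check failed:', err);
  process.exit(1);
});
```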
## Configure CORS for client-side uploads If your application involves uploading files **directly from a web browser** using the generated presigned URLs, you must configure Cross-Origin Resource Sharing (CORS) on your S3 bucket. CORS rules tell S3 which web domains are allowed to make requests (like `PUT` requests for uploads) to your bucket. Without proper CORS rules, browser security restrictions will block these direct uploads. In your S3 bucket settings, navigate to the **Permissions** tab and find the **CORS configuration** section. Add the following CORS rules: ```json [ { "AllowedHeaders": ["*"], "AllowedMethods": ["GET", "PUT"], "AllowedOrigins": ["*"], "ExposeHeaders": [], "MaxAgeSeconds": 9000 } ] ``` > This configuration allows any origin (`*`) to perform `GET` and `PUT` requests. In a production environment, you should restrict `AllowedOrigins` to your application's domain(s) for security. ## Create a table in Neon for file metadata We need a table in Neon to store metadata about the objects uploaded to S3. 1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor). Create a table including the object key, URL, user ID, and timestamp: ```sql CREATE TABLE IF NOT EXISTS s3_files ( id SERIAL PRIMARY KEY, object_key TEXT NOT NULL UNIQUE, -- Key (path/filename) in S3 file_url TEXT NOT NULL, -- Publicly accessible URL (if object is public) user_id TEXT NOT NULL, -- User associated with the file upload_timestamp TIMESTAMPTZ DEFAULT NOW() ); ``` 2. Run the SQL statement. Add other relevant columns as needed (e.g., `content_type`, `size`). **Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), remember to apply appropriate access policies to the `s3_files` table. This controls who can view or modify the object references stored in Neon based on your RLS rules. Note that these policies apply _only_ to the metadata in Neon. Access control for the objects within the S3 bucket itself is managed via S3 bucket policies, IAM permissions, and object ACLs. ## Upload files to S3 and store metadata in Neon The recommended pattern for client-side uploads to S3 involves **presigned upload URLs**. Your backend generates a temporary URL that the client uses to upload the file directly to S3. Afterwards, your backend saves the file's metadata to Neon. This requires two backend endpoints: 1. `/presign-upload`: Generates the temporary presigned URL. 2. `/save-metadata`: Records the metadata in Neon after the client confirms successful upload. Tab: JavaScript We'll use [Hono](https://hono.dev/) for the server, [`@aws-sdk/client-s3`](https://www.npmjs.com/package/@aws-sdk/client-s3) and [`@aws-sdk/s3-request-presigner`](https://www.npmjs.com/package/@aws-sdk/s3-request-presigner) for S3 interaction, and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) for Neon. 
First, install the necessary dependencies: ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @neondatabase/serverless @hono/node-server hono dotenv ``` Create a `.env` file: ```env # AWS S3 Credentials & Config AWS_ACCESS_KEY_ID=your_iam_user_access_key_id AWS_SECRET_ACCESS_KEY=your_iam_user_secret_access_key AWS_REGION=your_s3_bucket_region # e.g., us-east-1 S3_BUCKET_NAME=your_s3_bucket_name # e.g., my-neon-app-s3-uploads # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```javascript import { serve } from '@hono/node-server'; import { Hono } from 'hono'; import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'; import { getSignedUrl } from '@aws-sdk/s3-request-presigner'; import { neon } from '@neondatabase/serverless'; import 'dotenv/config'; import { randomUUID } from 'crypto'; const S3_BUCKET = process.env.S3_BUCKET_NAME; const AWS_REGION = process.env.AWS_REGION; const s3 = new S3Client({ region: AWS_REGION, credentials: { accessKeyId: process.env.AWS_ACCESS_KEY_ID, secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, }, }); const sql = neon(process.env.DATABASE_URL); const app = new Hono(); // Replace this with your actual user authentication logic, by validating JWTs/Headers, etc. const authMiddleware = async (c, next) => { c.set('userId', 'user_123'); await next(); }; // 1. Generate Presigned URL for Upload app.post('/presign-upload', authMiddleware, async (c) => { try { const { fileName, contentType } = await c.req.json(); if (!fileName || !contentType) throw new Error('fileName and contentType required'); const objectKey = `${randomUUID()}-${fileName}`; const publicFileUrl = `https://${S3_BUCKET}.s3.${AWS_REGION}.amazonaws.com/${objectKey}`; const command = new PutObjectCommand({ Bucket: S3_BUCKET, Key: objectKey, ContentType: contentType, }); const presignedUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); return c.json({ success: true, presignedUrl, objectKey, publicFileUrl }); } catch (error) { console.error('Presign Error:', error.message); return c.json({ success: false, error: 'Failed to prepare upload' }, 500); } }); // 2. Save Metadata after Client Upload Confirmation app.post('/save-metadata', authMiddleware, async (c) => { try { const { objectKey, publicFileUrl } = await c.req.json(); const userId = c.get('userId'); if (!objectKey) throw new Error('objectKey required'); await sql` INSERT INTO s3_files (object_key, file_url, user_id) VALUES (${objectKey}, ${publicFileUrl}, ${userId}) `; console.log(`Metadata saved for S3 object: ${objectKey}`); return c.json({ success: true }); } catch (error) { console.error('Metadata Save Error:', error.message); return c.json({ success: false, error: 'Failed to save metadata' }, 500); } }); const port = 3000; serve({ fetch: app.fetch, port }, (info) => { console.log(`Server running at http://localhost:${info.port}`); }); ``` **Explanation** 1. **Setup:** Initializes the Neon database client (`sql`), Hono (`app`), and the AWS S3 client (`s3`) configured with region and credentials. 2. **Authentication:** A placeholder `authMiddleware` is included. **Crucially**, this needs to be replaced with real authentication logic. It currently just sets a static `userId` for demonstration. 3. **Upload endpoints:** - **`/presign-upload`:** Generates a temporary secure URL (`presignedUrl`) using `@aws-sdk/s3-request-presigner` that allows uploading a file directly to S3. 
It returns the URL, the generated `objectKey`, and the standard S3 public URL. - **`/save-metadata`:** Called by the client _after_ successful upload. Saves the `objectKey`, `file_url`, and `userId` into the `s3_files` table in Neon using `@neondatabase/serverless`. Tab: Python We'll use [Flask](https://flask.palletsprojects.com/en/stable/), [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) (AWS SDK for Python), and [`psycopg2`](https://pypi.org/project/psycopg2/). First, install the necessary dependencies: ```bash pip install Flask boto3 psycopg2-binary python-dotenv ``` Create a `.env` file: ```env # AWS S3 Credentials & Config AWS_ACCESS_KEY_ID=your_iam_user_access_key_id AWS_SECRET_ACCESS_KEY=your_iam_user_secret_access_key AWS_REGION=your_s3_bucket_region # e.g., us-east-1 S3_BUCKET_NAME=your_s3_bucket_name # e.g., my-neon-app-s3-uploads # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```python import os import uuid import boto3 import psycopg2 from botocore.exceptions import ClientError from dotenv import load_dotenv from flask import Flask, jsonify, request load_dotenv() S3_BUCKET_NAME = os.getenv("S3_BUCKET_NAME") AWS_REGION = os.getenv("AWS_REGION") s3_client = boto3.client( service_name="s3", region_name=AWS_REGION, aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"), aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"), ) app = Flask(__name__) # Use a global PostgreSQL connection instead of creating a new one for each request in production def get_db_connection(): return psycopg2.connect(os.getenv("DATABASE_URL")) # Replace this with your actual user authentication logic def get_authenticated_user_id(request): # Example: Validate Authorization header, session cookie, etc. return "user_123" # Static ID for demonstration # 1. Generate Presigned URL for Upload @app.route("/presign-upload", methods=["POST"]) def presign_upload_route(): try: user_id = get_authenticated_user_id(request) data = request.get_json() file_name = data.get("fileName") content_type = data.get("contentType") if not file_name or not content_type: raise ValueError("fileName and contentType required") object_key = f"{uuid.uuid4()}-{file_name}" public_file_url = ( f"https://{S3_BUCKET_NAME}.s3.{AWS_REGION}.amazonaws.com/{object_key}" ) presigned_url = s3_client.generate_presigned_url( "put_object", Params={ "Bucket": S3_BUCKET_NAME, "Key": object_key, "ContentType": content_type, }, ExpiresIn=300, ) return jsonify( { "success": True, "presignedUrl": presigned_url, "objectKey": object_key, "publicFileUrl": public_file_url, } ), 200 except (ClientError, ValueError) as e: print(f"Presign Error: {e}") return jsonify( {"success": False, "error": f"Failed to prepare upload: {e}"} ), 500 except Exception as e: print(f"Unexpected Presign Error: {e}") return jsonify({"success": False, "error": "Server error"}), 500 # 2. 
# Save Metadata after Client Upload Confirmation
@app.route("/save-metadata", methods=["POST"])
def save_metadata_route():
    conn = None
    cursor = None
    try:
        user_id = get_authenticated_user_id(request)
        data = request.get_json()
        object_key = data.get("objectKey")
        public_file_url = data.get("publicFileUrl")
        if not object_key:
            raise ValueError("objectKey required")

        conn = get_db_connection()
        cursor = conn.cursor()
        cursor.execute(
            """
            INSERT INTO s3_files (object_key, file_url, user_id)
            VALUES (%s, %s, %s)
            """,
            (object_key, public_file_url, user_id),
        )
        conn.commit()
        print(f"Metadata saved for S3 object: {object_key}")
        return jsonify({"success": True}), 201
    except (psycopg2.Error, ValueError) as e:
        print(f"Metadata Save Error: {e}")
        return jsonify({"success": False, "error": "Failed to save metadata"}), 500
    except Exception as e:
        print(f"Unexpected Metadata Save Error: {e}")
        return jsonify({"success": False, "error": "Server error"}), 500
    finally:
        if cursor:
            cursor.close()
        if conn:
            conn.close()


if __name__ == "__main__":
    port = int(os.environ.get("PORT", 3000))
    app.run(host="0.0.0.0", port=port, debug=True)
```

**Explanation**

1. **Setup:** Initializes Flask, the PostgreSQL client (`psycopg2`), and the AWS S3 client (`boto3`) using environment variables for credentials and configuration.
2. **Authentication:** A placeholder `get_authenticated_user_id` function is included. **Replace this with real authentication logic.**
3. **Upload endpoints:**
   - **`/presign-upload`:** Generates a temporary secure URL (`presignedUrl`) using `boto3` that allows uploading a file directly to S3. It returns the URL, `objectKey`, and the standard public S3 URL.
   - **`/save-metadata`:** Called by the client _after_ successful upload. Saves the `objectKey`, `file_url`, and `userId` into the `s3_files` table in Neon using `psycopg2`.
4. In production, you should use a global PostgreSQL connection instead of creating a new one for each request. This is important for performance and resource management.

## Testing the upload workflow

Testing the presigned URL flow involves multiple steps:

1. **Get presigned URL:** Send a `POST` request to your `/presign-upload` endpoint with a JSON body containing `fileName` and `contentType`.

   **Using cURL:**

   ```bash
   curl -X POST http://localhost:3000/presign-upload \
     -H "Content-Type: application/json" \
     -d '{"fileName": "test-s3.txt", "contentType": "text/plain"}'
   ```

   You should receive a JSON response with a `presignedUrl`, `objectKey`, and `publicFileUrl` (placeholders shown in angle brackets):

   ```json
   {
     "success": true,
     "presignedUrl": "https://<bucket-name>.s3.us-east-1.amazonaws.com/<object-key>?...&x-id=PutObject",
     "objectKey": "<object-key>",
     "publicFileUrl": "https://<bucket-name>.s3.us-east-1.amazonaws.com/<object-key>"
   }
   ```

   Note the `presignedUrl`, `objectKey`, and `publicFileUrl` from the response. You will use these in the next steps.

2. **Upload file to S3:** Use the received `presignedUrl` to upload the actual file using an HTTP `PUT` request.

   **Using cURL:**

   ```bash
   curl -X PUT "<presignedUrl>" \
     --upload-file /path/to/your/test-s3.txt \
     -H "Content-Type: text/plain"
   ```

   A successful upload typically returns HTTP `200 OK` with no body.

3. **Save metadata:** Send a `POST` request to your `/save-metadata` endpoint with the `objectKey` and `publicFileUrl` obtained in step 1.
**Using cURL:**

```bash
curl -X POST http://localhost:3000/save-metadata \
  -H "Content-Type: application/json" \
  -d '{"objectKey": "<object-key>", "publicFileUrl": "<public-file-url>"}'
```

You should receive a JSON response indicating success:

```json
{ "success": true }
```

**Expected outcome:**

- The file appears in your S3 bucket (check the AWS Console).
- A new row appears in your `s3_files` table in Neon containing the `object_key` and `file_url`.

You can now integrate API calls to these endpoints from various parts of your application (e.g., web clients using JavaScript's `fetch` API, mobile apps, backend services) to handle file uploads.

## Accessing file metadata and files

Storing metadata in Neon allows your application to easily retrieve references to the files hosted on S3. Query the `s3_files` table from your application's backend when needed.

**Example SQL query:**

Retrieve files for user 'user_123':

```sql
SELECT
    id,
    object_key,       -- Key (path/filename) in S3
    file_url,         -- Publicly accessible S3 URL
    user_id,          -- User associated with the file
    upload_timestamp
FROM
    s3_files
WHERE
    user_id = 'user_123'; -- Use actual authenticated user ID
```

**Using the data:**

- The query returns metadata stored in Neon.
- The `file_url` column contains the direct link to access the file via S3.
- Use this `file_url` in your application (e.g., `<img>` tags, download links).

**Note** Private buckets: For private S3 buckets, store only the `object_key` and generate presigned *read* URLs on demand using a similar backend process (see the optional `/presign-download` sketch in the code above).

This pattern effectively separates file storage and delivery concerns (handled by S3) from structured metadata management (handled by Neon), leveraging the strengths of both services.

## Resources

- [AWS S3 documentation](https://docs.aws.amazon.com/s3/index.html)
- [AWS — Sharing objects with presigned URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html)
- [Neon RLS](https://neon.com/docs/guides/neon-rls)

---

# Source: https://neon.com/llms/guides-azure-blob-storage.txt

# File storage with Azure Blob Storage

> The document details the integration of Azure Blob Storage with Neon, outlining steps to configure and manage file storage, enabling efficient data handling within the Neon environment.

## Source

- [File storage with Azure Blob Storage HTML](https://neon.com/docs/guides/azure-blob-storage): The original HTML version of this documentation

[Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) is Microsoft's object storage solution for the cloud. It's optimized for storing massive amounts of unstructured data, such as text or binary data, including images, documents, streaming media, and archive data.

This guide demonstrates how to integrate Azure Blob Storage with Neon by storing file metadata (like the blob name and URL) in your Neon database, while using Azure Blob Storage for file storage.

## Prerequisites

## Create a Neon project

1. Navigate to [pg.new](https://pg.new) to create a new Neon project.
2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Create an Azure account, storage account, and container

1. Sign up for or log in to your [Azure Account](https://azure.microsoft.com/free/).
2. Navigate to [Storage accounts](https://portal.azure.com/#create/Microsoft.StorageAccount) in the Azure portal.
3. Click **+ Create**.
Fill in the required details: select a Subscription, create or select a Resource group, provide a unique Storage account name (e.g., `myneonappblobstorage`), choose a Region (e.g., `East US`), and select performance/redundancy options (Standard/LRS is fine for this example). Click **Review + create**, then **Create**. 4. Once the storage account is deployed, go to the resource. 5. In the storage account menu, under **Data storage**, click **Containers**. 6. Click **+ Container**. Provide a name for your container (e.g., `uploads`), set the **Public access level** to **Private (no anonymous access)**. This is the recommended setting for security; we will use SAS tokens for controlled access. Click **Create**. **Note** Public access vs. SAS tokens: While you *can* set container access levels to allow public read access (`Blob` or `Container`), it's generally more secure to keep containers private and use **Shared Access Signatures (SAS)** tokens for both uploads and downloads. SAS tokens provide temporary, granular permissions. This guide focuses on using SAS tokens for uploads. For serving files, you can either generate read-only SAS tokens on demand or, if needed, set the container to public `Blob` access. 7. **Get connection string:** - In your storage account menu, under **Security + networking**, click **Access keys**. - Copy one of the **Connection strings**. This will be used by your backend application to authenticate with Azure Blob Storage. Store it securely. ## Configure CORS for client-side uploads If your application involves uploading files **directly from a web browser** using the generated SAS URLs, you must configure Cross-Origin Resource Sharing (CORS) on your Azure Storage account. CORS rules tell Azure Storage which web domains are allowed to make requests (like `PUT` requests for uploads) to your blob service endpoint. Without proper CORS rules, browser security restrictions will block these direct uploads. Follow Azure's guide to [Configure CORS for Azure Storage](https://docs.microsoft.com/en-us/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). You can configure CORS rules via the Azure portal (Storage account > Settings > Resource sharing (CORS) > Blob service tab). Here's an example CORS configuration allowing `PUT` uploads and `GET` requests from your deployed frontend application and your local development environment: - **Allowed origins:** `https://your-production-app.com`, `http://localhost:3000` (Replace with your actual domains/ports) - **Allowed methods:** `PUT`, `GET` - **Allowed headers:** `*` (Or be more specific, e.g., `Content-Type`, `x-ms-blob-type`) - **Exposed headers:** `*` - **Max age (seconds):** `3600` (Example: 1 hour) ## Create a table in Neon for file metadata We need a table in Neon to store metadata about the blobs uploaded to Azure Storage. 1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor). Create a table including the blob name, URL, user ID, and timestamp: ```sql CREATE TABLE IF NOT EXISTS azure_files ( id SERIAL PRIMARY KEY, blob_name TEXT NOT NULL UNIQUE, -- Name (path/filename) in Azure Blob Storage container file_url TEXT NOT NULL, -- Publicly accessible URL (base URL, SAS might be needed for access) user_id TEXT NOT NULL, -- User associated with the file upload_timestamp TIMESTAMPTZ DEFAULT NOW() ); ``` 2. Run the SQL statement. 
Add other relevant columns as needed (e.g., `content_type`, `size`). **Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), remember to apply appropriate access policies to the `azure_files` table. This controls who can view or modify the object references stored in Neon based on your RLS rules. Note that these policies apply _only_ to the metadata in Neon. Access control for the blobs within the Azure container itself is managed via Azure RBAC, SAS tokens, and container access level settings. ## Upload files to Azure Blob Storage and store metadata in Neon The recommended pattern for client-side uploads to Azure Blob Storage involves **SAS (Shared Access Signature) URLs**. Your backend generates a temporary URL containing a SAS token that grants specific permissions (like writing a blob) for a limited time. The client uses this SAS URL to upload the file directly to Azure Blob Storage. Afterwards, your backend saves the file's metadata to Neon. This requires two backend endpoints: 1. `/generate-upload-sas`: Generates the temporary SAS URL for the client. 2. `/save-metadata`: Records the metadata in Neon after the client confirms successful upload. Tab: JavaScript We'll use [Hono](https://hono.dev/) for the server, [`@azure/storage-blob`](https://www.npmjs.com/package/@azure/storage-blob) for Azure interaction, and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) for Neon. First, install the necessary dependencies: ```bash npm install @azure/storage-blob @neondatabase/serverless @hono/node-server hono dotenv ``` Create a `.env` file: ```env # Azure Blob Storage Config AZURE_STORAGE_CONNECTION_STRING="your_storage_account_connection_string" AZURE_STORAGE_CONTAINER_NAME=your_container_name # e.g., uploads # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```javascript import { serve } from '@hono/node-server'; import { Hono } from 'hono'; import { BlobServiceClient, generateBlobSASQueryParameters, BlobSASPermissions, SASProtocol, } from '@azure/storage-blob'; import { neon } from '@neondatabase/serverless'; import 'dotenv/config'; import { randomUUID } from 'crypto'; const AZURE_CONNECTION_STRING = process.env.AZURE_STORAGE_CONNECTION_STRING; const AZURE_CONTAINER_NAME = process.env.AZURE_STORAGE_CONTAINER_NAME; const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_CONNECTION_STRING); const containerClient = blobServiceClient.getContainerClient(AZURE_CONTAINER_NAME); const sql = neon(process.env.DATABASE_URL); const app = new Hono(); // Replace this with your actual user authentication logic, by validating JWTs/Headers, etc. const authMiddleware = async (c, next) => { c.set('userId', 'user_123'); await next(); }; // 1. 
Generate SAS URL for upload app.post('/generate-upload-sas', authMiddleware, async (c) => { try { const { fileName, contentType } = await c.req.json(); if (!fileName || !contentType) throw new Error('fileName and contentType required'); const blobName = `${randomUUID()}-${fileName}`; const blobClient = containerClient.getBlockBlobClient(blobName); const fileUrl = blobClient.url; const sasOptions = { containerName: AZURE_CONTAINER_NAME, blobName: blobName, startsOn: new Date(), expiresOn: new Date(new Date().valueOf() + 300 * 1000), // 5 minutes expiry permissions: BlobSASPermissions.parse('w'), // Write permission protocol: SASProtocol.Https, contentType: contentType, }; const sasToken = generateBlobSASQueryParameters( sasOptions, blobServiceClient.credential ).toString(); const sasUrl = `${fileUrl}?${sasToken}`; return c.json({ success: true, sasUrl, blobName, fileUrl }); } catch (error) { console.error('SAS Generation Error:', error.message); return c.json({ success: false, error: 'Failed to prepare upload URL' }, 500); } }); // 2. Save metadata after client upload confirmation app.post('/save-metadata', authMiddleware, async (c) => { try { const { blobName, fileUrl } = await c.req.json(); const userId = c.get('userId'); if (!blobName || !fileUrl) throw new Error('blobName and fileUrl required'); await sql` INSERT INTO azure_files (blob_name, file_url, user_id) VALUES (${blobName}, ${fileUrl}, ${userId}) `; console.log(`Metadata saved for Azure blob: ${blobName}`); return c.json({ success: true }); } catch (error) { console.error('Metadata Save Error:', error.message); return c.json({ success: false, error: 'Failed to save metadata' }, 500); } }); const port = 3000; serve({ fetch: app.fetch, port }, (info) => { console.log(`Server running at http://localhost:${info.port}`); }); ``` **Explanation** 1. **Setup:** Initializes Neon client (`sql`), Hono (`app`), and Azure `BlobServiceClient` using the connection string. 2. **Authentication:** Placeholder `authMiddleware` needs replacing with actual user validation. 3. **Upload endpoints:** - **`/generate-upload-sas`:** Creates a unique `blobName`, gets a `BlockBlobClient`, and generates a SAS token using `generateBlobSASQueryParameters` with write permissions (`w`) and a short expiry. It returns the full `sasUrl` (base URL + SAS token), the `blobName`, and the base `fileUrl`. - **`/save-metadata`:** Called by the client _after_ successful upload. Saves the `blobName`, base `fileUrl`, and `userId` into the `azure_files` table in Neon. Tab: Python We'll use [Flask](https://flask.palletsprojects.com/en/stable/), [`azure-storage-blob`](https://pypi.org/project/azure-storage-blob/) (Azure SDK for Python), and [`psycopg2`](https://pypi.org/project/psycopg2/). 
First, install the necessary dependencies:

```bash
pip install Flask azure-storage-blob psycopg2-binary python-dotenv
```

Create a `.env` file:

```env
# Azure Blob Storage Config
AZURE_STORAGE_CONNECTION_STRING="your_storage_account_connection_string"
AZURE_STORAGE_CONTAINER_NAME=your_container_name # e.g., uploads

# Neon Connection String
DATABASE_URL=your_neon_database_connection_string
```

The following code snippet demonstrates this workflow:

```python
import os
import uuid
from datetime import datetime, timedelta, timezone

import psycopg2
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas
from dotenv import load_dotenv
from flask import Flask, jsonify, request

load_dotenv()

AZURE_CONNECTION_STRING = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
AZURE_CONTAINER_NAME = os.getenv("AZURE_STORAGE_CONTAINER_NAME")
DATABASE_URL = os.getenv("DATABASE_URL")

blob_service_client = BlobServiceClient.from_connection_string(
    AZURE_CONNECTION_STRING
)
container_client = blob_service_client.get_container_client(AZURE_CONTAINER_NAME)

app = Flask(__name__)


# Use a global PostgreSQL connection instead of creating a new one for each request in production
def get_db_connection():
    return psycopg2.connect(os.getenv("DATABASE_URL"))


# Replace this with your actual user authentication logic
def get_authenticated_user_id(request):
    # Example: Validate Authorization header, session cookie, etc.
    return "user_123"  # Static ID for demonstration


# 1. Generate SAS URL for upload
@app.route("/generate-upload-sas", methods=["POST"])
def generate_upload_sas_route():
    try:
        user_id = get_authenticated_user_id(request)
        data = request.get_json()
        file_name = data.get("fileName")
        content_type = data.get("contentType")
        if not file_name or not content_type:
            raise ValueError("fileName and contentType are required in JSON body.")
        blob_name = f"{uuid.uuid4()}-{file_name}"
        blob_client = container_client.get_blob_client(blob_name)
        file_url = blob_client.url
        start_time = datetime.now(timezone.utc)
        expiry_time = start_time + timedelta(minutes=5)  # 5 minutes expiry
        sas_token = generate_blob_sas(
            account_name=blob_service_client.account_name,
            container_name=AZURE_CONTAINER_NAME,
            blob_name=blob_name,
            account_key=blob_service_client.credential.account_key,
            permission=BlobSasPermissions(write=True),  # Write permission for upload
            expiry=expiry_time,
            start=start_time,
            content_type=content_type,
        )
        sas_url = f"{file_url}?{sas_token}"
        return jsonify(
            {
                "success": True,
                "sasUrl": sas_url,
                "blobName": blob_name,
                "fileUrl": file_url,
            }
        ), 200
    except ValueError as e:
        print(f"SAS Generation Input Error: {e}")
        return jsonify({"success": False, "error": str(e)}), 400
    except Exception as e:
        print(f"SAS Generation Error: {e}")
        return jsonify({"success": False, "error": "Failed to prepare upload URL"}), 500
# 2. Save metadata after client upload confirmation
@app.route("/save-metadata", methods=["POST"])
def save_metadata_route():
    conn = None
    cursor = None
    try:
        user_id = get_authenticated_user_id(request)
        data = request.get_json()
        blob_name = data.get("blobName")
        file_url = data.get("fileUrl")
        if not blob_name or not file_url:
            raise ValueError("blobName and fileUrl are required in JSON body.")
        conn = get_db_connection()
        cursor = conn.cursor()
        cursor.execute(
            """
            INSERT INTO azure_files (blob_name, file_url, user_id)
            VALUES (%s, %s, %s)
            """,
            (blob_name, file_url, user_id),
        )
        conn.commit()
        print(f"Metadata saved for Azure blob: {blob_name}")
        return jsonify({"success": True}), 201
    except psycopg2.Error as db_err:
        print(f"Database Save Error: {db_err}")
        return jsonify(
            {
                "success": False,
                "error": "Failed to save metadata",
            }
        ), 500
    except Exception as e:
        print(f"Unexpected Metadata Save Error: {e}")
        return jsonify(
            {"success": False, "error": "Server error during metadata save."}
        ), 500
    finally:
        if cursor:
            cursor.close()
        if conn:
            conn.close()


if __name__ == "__main__":
    port = int(os.environ.get("PORT", 3000))
    app.run(host="0.0.0.0", port=port, debug=True)
```

**Explanation**

1. **Setup:** Initializes Flask, `BlobServiceClient`, and `psycopg2` using environment variables.
2. **Authentication:** A placeholder `get_authenticated_user_id` function is included. **Replace this with real authentication logic**.
3. **Upload endpoints:**
   - **`/generate-upload-sas`:** Creates a unique `blobName`, gets the base `fileUrl`, and generates a SAS token using `generate_blob_sas` with write permissions and a short expiry. Returns the full `sasUrl`, `blobName`, and base `fileUrl`.
   - **`/save-metadata`:** Called by the client _after_ successful upload. Saves the `blobName`, base `fileUrl`, and `userId` into the `azure_files` table using `psycopg2`.
4. In production, you should use a global PostgreSQL connection instead of creating a new one for each request. This is important for performance and resource management.

## Testing the upload workflow

Testing the SAS URL flow involves multiple steps:

1. **Get SAS URL:** Send a `POST` request to your `/generate-upload-sas` endpoint with a JSON body containing `fileName` and `contentType`.

   **Using cURL:**

   ```bash
   curl -X POST http://localhost:3000/generate-upload-sas \
     -H "Content-Type: application/json" \
     -d '{"fileName": "test-azure.txt", "contentType": "text/plain"}'
   ```

   You should receive a JSON response with a `sasUrl`, `blobName`, and `fileUrl`:

   ```json
   {
     "success": true,
     "sasUrl": "https://<storage-account>.blob.core.windows.net/<container-name>/<blob-name>?<sas-token>",
     "blobName": "<blob-name>",
     "fileUrl": "https://<storage-account>.blob.core.windows.net/<container-name>/<blob-name>"
   }
   ```

   Note the `sasUrl`, `blobName`, and `fileUrl` from the response. You will use these in the next steps.

2. **Upload file to Azure:** Use the received `sasUrl` to upload the actual file using an HTTP `PUT` request. You also need to set the `Content-Type` header to match what was specified during SAS generation and `x-ms-blob-type: BlockBlob`.

   **Using cURL:**

   ```bash
   curl -X PUT "<sasUrl>" \
     --upload-file /path/to/your/test-azure.txt \
     -H "Content-Type: text/plain" \
     -H "x-ms-blob-type: BlockBlob"
   ```

   A successful upload returns HTTP `201 Created`.

3. **Save metadata:** Send a `POST` request to your `/save-metadata` endpoint with the `blobName` and base `fileUrl` from step 1.
**Using cURL:**

```bash
curl -X POST http://localhost:3000/save-metadata \
  -H "Content-Type: application/json" \
  -d '{"blobName": "<blob-name>", "fileUrl": "<file-url>"}'
```

You should receive a JSON response indicating success:

```json
{ "success": true }
```

**Expected outcome:**

- The file appears in your Azure Blob Storage container (check the Azure Portal).
- A new row appears in your `azure_files` table in Neon.

You can now integrate API calls to these endpoints from various parts of your application (e.g., web clients using JavaScript's `fetch` API, mobile apps, backend services) to handle file uploads.

## Accessing file metadata and files

Storing metadata in Neon allows your application to easily retrieve references to the files hosted on Azure Blob Storage. Query the `azure_files` table from your application's backend when needed.

**Example SQL query:**

Retrieve files for user 'user_123':

```sql
SELECT
    id,
    blob_name,        -- Name (path/filename) in Azure container
    file_url,         -- Base URL of the blob
    user_id,          -- User associated with the file
    upload_timestamp
FROM
    azure_files
WHERE
    user_id = 'user_123'; -- Use actual authenticated user ID
```

**Using the data:**

- The query returns metadata stored in Neon.
- The `file_url` column contains the base URL of the blob.
- **Accessing the file:**
  - If your container allows public `Blob` access, this `file_url` might be directly usable.
  - If your container is **private** (recommended), you need to generate a **read-only SAS token** for the specific `blob_name` on demand using your backend (similar to the upload SAS generation, but with `BlobSASPermissions.parse("r")` or `BlobSasPermissions(read=True)`) and append it to the `file_url`. This provides secure, temporary read access.
  - Use the resulting URL (base URL or URL with read SAS token) in your application (e.g., `<img>` tags, download links).
For example, here's how to generate a read SAS URL:

Tab: JavaScript

```javascript
import {
  BlobServiceClient,
  generateBlobSASQueryParameters,
  BlobSASPermissions,
  SASProtocol,
} from '@azure/storage-blob';

const AZURE_CONTAINER_NAME = process.env.AZURE_STORAGE_CONTAINER_NAME;
const blobServiceClient = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING
);

async function generateReadOnlySasUrl(blobName, expiryMinutes = 15) {
  const containerClient = blobServiceClient.getContainerClient(AZURE_CONTAINER_NAME);
  const blobClient = containerClient.getBlobClient(blobName);

  const sasOptions = {
    containerName: AZURE_CONTAINER_NAME,
    blobName: blobName,
    startsOn: new Date(),
    expiresOn: new Date(new Date().valueOf() + expiryMinutes * 60 * 1000),
    permissions: BlobSASPermissions.parse('r'), // Read ('r') permission
    protocol: SASProtocol.Https,
  };

  const sasToken = generateBlobSASQueryParameters(
    sasOptions,
    blobServiceClient.credential
  ).toString();
  const sasUrl = `${blobClient.url}?${sasToken}`;
  return sasUrl;
}

// Replace '<blob-name>' with the actual blob name
generateReadOnlySasUrl('<blob-name>')
  .then((url) => {
    console.log('Read-only SAS URL:', url);
  })
  .catch((error) => {
    console.error('Error generating read SAS URL:', error);
  });
```

Tab: Python

```python
import os
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

AZURE_CONTAINER_NAME = os.getenv("AZURE_STORAGE_CONTAINER_NAME")
blob_service_client = BlobServiceClient.from_connection_string(
    os.getenv("AZURE_STORAGE_CONNECTION_STRING")
)

def generate_read_only_sas_url(blob_name, expiry_minutes=15):
    blob_client = blob_service_client.get_blob_client(
        container=AZURE_CONTAINER_NAME, blob=blob_name
    )
    start_time = datetime.now(timezone.utc)
    expiry_time = start_time + timedelta(minutes=expiry_minutes)
    sas_token = generate_blob_sas(
        account_name=blob_service_client.account_name,
        container_name=AZURE_CONTAINER_NAME,
        blob_name=blob_name,
        account_key=blob_service_client.credential.account_key,
        permission=BlobSasPermissions(read=True),  # Read permission
        expiry=expiry_time,
        start=start_time,
    )
    sas_url = f"{blob_client.url}?{sas_token}"
    return sas_url

if __name__ == "__main__":
    # Replace '<blob-name>' with the actual blob name
    test_blob_name = "<blob-name>"
    read_url = generate_read_only_sas_url(test_blob_name)
    print(f"Read-only SAS URL: {read_url}")
```

**Note** Private containers & read access: For private containers, always generate short-lived read SAS tokens when a user needs to access a file. Store only the `blob_name` and base `file_url` (or just `blob_name`) in Neon, and construct the full SAS URL in your backend when serving the file reference to the client.

This pattern effectively separates file storage and delivery concerns (handled by Azure Blob Storage) from structured metadata management (handled by Neon), leveraging the strengths of both services.

## Resources

- [Azure Blob Storage documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/)
- [Azure Storage Shared Access Signatures (SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
- [Neon Documentation](https://neon.com/docs/introduction)
- [Neon RLS](https://neon.com/docs/guides/neon-rls)

---

# Source: https://neon.com/llms/guides-backblaze-b2.txt

# File storage with Backblaze B2

> The document outlines the process for integrating Backblaze B2 file storage with Neon, detailing configuration steps and necessary settings for seamless data management within the Neon environment.
## Source

- [File storage with Backblaze B2 HTML](https://neon.com/docs/guides/backblaze-b2): The original HTML version of this documentation

[Backblaze B2 Cloud Storage](https://www.backblaze.com/cloud-storage) is an S3-compatible object storage service known for its affordability and ease of use. It's suitable for storing large amounts of unstructured data like backups, archives, images, videos, and application assets.

This guide demonstrates how to integrate Backblaze B2 with Neon by storing file metadata (like the file ID, name, and URL) in your Neon database, while using B2 for file storage.

## Prerequisites

## Create a Neon project

1. Navigate to [pg.new](https://pg.new) to create a new Neon project.
2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Create a Backblaze account and B2 bucket

1. Sign up for or log in to your [Backblaze account](https://www.backblaze.com/sign-up/cloud-storage?referrer=getstarted).
2. Navigate to **B2 Cloud Storage** > **Buckets** in the left sidebar.
3. Click **Create a Bucket**. Provide a globally unique bucket name (e.g., `my-neon-app-b2-files`) and choose whether files should be **Private** or **Public**. For this guide, we'll use **Public** for simplicity, but **Private** is recommended for production applications where you want to control access to files.
4. **Create application key:**
   - Navigate to **B2 Cloud Storage** > **Application Keys** in the left sidebar.
   - Click **+ Add a New Application Key**.
   - Give the key a name (e.g., `neon-app-b2-key`).
   - **Crucially**, restrict the key's access: Select **Allow access to Bucket(s)** and choose the bucket you just created (e.g., `my-neon-app-b2-files`).
   - Select **Read and Write** for the **Type of Access**.
   - Leave other fields blank unless needed (e.g., File name prefix).
   - Click **Create New Key**.
   - Copy the **Key ID** and **Application Key**. These will be used in your application to authenticate with B2.
5. **Find S3 endpoint:**
   - Navigate back to **B2 Cloud Storage** > **Buckets**.
   - Find your bucket and note the **Endpoint** URL listed (e.g., `s3.us-west-000.backblazeb2.com`). You'll need this S3-compatible endpoint for the SDK configuration.

## Configure CORS for client-side uploads

If your application involves uploading files **directly from a web browser** using the generated presigned URLs, you must configure Cross-Origin Resource Sharing (CORS) rules for your B2 bucket.

CORS rules tell B2 which web domains are allowed to make requests (like `PUT` requests for uploads) to your bucket. Without proper CORS rules, browser security restrictions will block these direct uploads.

Follow Backblaze's guide to [Cross-Origin Resource Sharing Rules](https://www.backblaze.com/docs/cloud-storage-cross-origin-resource-sharing-rules). You configure CORS rules in the B2 Bucket Settings page in the Backblaze web UI.

Here's an example CORS configuration allowing `http://localhost:3000` to view and upload files; in outline, the rule allows:

- **Allowed origins:** `http://localhost:3000`
- **Allowed operations:** downloads (`GET`) and uploads (`PUT`)
- **Allowed headers:** `*`
- **Max age (seconds):** `3600`

> In a production environment, replace `http://localhost:3000` with your actual domain

## Create a table in Neon for file metadata

We need a table in Neon to store metadata about the objects uploaded to B2.

1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor).
Create a table including the B2 file name (object key), file URL, user ID, and timestamp: ```sql CREATE TABLE IF NOT EXISTS b2_files ( id SERIAL PRIMARY KEY, object_key TEXT NOT NULL UNIQUE, -- Key (path/filename) in B2 file_url TEXT, -- Base public URL user_id TEXT NOT NULL, -- User associated with the file upload_timestamp TIMESTAMPTZ DEFAULT NOW() ); ``` > Storing the full public `file_url` is only useful if the bucket is public. For private buckets, you'll typically only store the `object_key` and generate presigned download URLs on demand. 2. Run the SQL statement. Add other relevant columns as needed (e.g., `content_type`, `size` if needed). **Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), remember to apply appropriate access policies to the `b2_files` table. This controls who can view or modify the object references stored in Neon based on your RLS rules. Note that these policies apply _only_ to the metadata in Neon. Access control for the objects within the B2 bucket itself is managed via B2 bucket settings (public/private), Application Key permissions, and presigned URL settings. ## Upload files to B2 and store metadata in Neon Leveraging B2's S3 compatibility, the recommended pattern for client-side uploads involves **presigned upload URLs**. Your backend generates a temporary URL that the client uses to upload the file directly to B2. Afterwards, your backend saves the file's metadata to Neon. This requires two backend endpoints: 1. `/presign-b2-upload`: Generates the temporary presigned URL. 2. `/save-b2-metadata`: Records the metadata in Neon after the client confirms successful upload. Tab: JavaScript We'll use [Hono](https://hono.dev/) for the server, [`@aws-sdk/client-s3`](https://www.npmjs.com/package/@aws-sdk/client-s3) and [`@aws-sdk/s3-request-presigner`](https://www.npmjs.com/package/@aws-sdk/s3-request-presigner) for B2 interaction (due to S3 compatibility), and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) for Neon. First, install the necessary dependencies: ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @neondatabase/serverless @hono/node-server hono dotenv ``` Create a `.env` file: ```env # Backblaze B2 Credentials & Config B2_APPLICATION_KEY_ID=your_b2_key_id B2_APPLICATION_KEY=your_b2_application_key B2_BUCKET_NAME=your_b2_bucket_name B2_ENDPOINT_URL=https://your_b2_s3_endpoint # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```javascript import { serve } from '@hono/node-server'; import { Hono } from 'hono'; import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'; import { getSignedUrl } from '@aws-sdk/s3-request-presigner'; import { neon } from '@neondatabase/serverless'; import 'dotenv/config'; import { randomUUID } from 'crypto'; const B2_BUCKET = process.env.B2_BUCKET_NAME; const B2_ENDPOINT = process.env.B2_ENDPOINT_URL; const endpointUrl = new URL(B2_ENDPOINT); const region = endpointUrl.hostname.split('.')[1]; const s3 = new S3Client({ endpoint: B2_ENDPOINT, region: region, credentials: { accessKeyId: process.env.B2_APPLICATION_KEY_ID, secretAccessKey: process.env.B2_APPLICATION_KEY, }, }); const sql = neon(process.env.DATABASE_URL); const app = new Hono(); // Replace this with your actual user authentication logic, by validating JWTs/Headers, etc. 
const authMiddleware = async (c, next) => { c.set('userId', 'user_123'); await next(); }; // 1. Generate presigned URL for upload app.post('/presign-b2-upload', authMiddleware, async (c) => { try { const { fileName, contentType } = await c.req.json(); if (!fileName || !contentType) throw new Error('fileName and contentType required'); const objectKey = `${randomUUID()}-${fileName}`; const publicFileUrl = `${B2_ENDPOINT}/${B2_BUCKET}/${objectKey}`; const command = new PutObjectCommand({ Bucket: B2_BUCKET, Key: objectKey, ContentType: contentType, }); const presignedUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); // 5 min expiry return c.json({ success: true, presignedUrl, objectKey, publicFileUrl }); } catch (error) { console.error('Presign Error:', error.message); return c.json({ success: false, error: 'Failed to prepare upload' }, 500); } }); // 2. Save metadata after client upload confirmation app.post('/save-b2-metadata', authMiddleware, async (c) => { try { const { objectKey, publicFileUrl } = await c.req.json(); const userId = c.get('userId'); if (!objectKey) throw new Error('objectKey required'); await sql` INSERT INTO b2_files (object_key, file_url, user_id) VALUES (${objectKey}, ${publicFileUrl}, ${userId}) `; console.log(`Metadata saved for B2 object: ${objectKey}`); return c.json({ success: true }); } catch (error) { console.error('Metadata Save Error:', error.message); return c.json({ success: false, error: 'Failed to save metadata' }, 500); } }); const port = 3000; serve({ fetch: app.fetch, port }, (info) => { console.log(`Server running at http://localhost:${info.port}`); }); ``` **Explanation** 1. **Setup:** Initializes Neon (`sql`), Hono (`app`), and the AWS S3 client (`s3`) configured with the B2 endpoint, region (extracted from endpoint), and B2 Application Key credentials. 2. **Authentication:** A placeholder `authMiddleware` is included. **Replace this with real authentication logic.** It currently just sets a static `userId` for demonstration. 3. **Upload endpoints:** - **`/presign-b2-upload`:** Generates a temporary secure URL (`presignedUrl`) using `@aws-sdk/s3-request-presigner` that allows uploading a file directly to B2. It returns the URL, the generated `objectKey`, and the standard S3 public URL. - **`/save-b2-metadata`:** Called by the client after successful upload. Saves the `objectKey`, `file_url`, and `userId` into the `b2_files` table in Neon using `@neondatabase/serverless`. Tab: Python We'll use [Flask](https://flask.palletsprojects.com/en/stable/), [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) (AWS SDK for Python, leveraging S3 compatibility), and [`psycopg2`](https://pypi.org/project/psycopg2/). 
First, install the necessary dependencies:

```bash
pip install Flask boto3 psycopg2-binary python-dotenv
```

Create a `.env` file:

```env
# Backblaze B2 Credentials & Config
B2_APPLICATION_KEY_ID=your_b2_key_id
B2_APPLICATION_KEY=your_b2_application_key
B2_BUCKET_NAME=your_b2_bucket_name
B2_ENDPOINT_URL=https://your_b2_s3_endpoint

# Neon Connection String
DATABASE_URL=your_neon_database_connection_string
```

The following code snippet demonstrates this workflow:

```python
import os
import uuid

import boto3
import psycopg2
from dotenv import load_dotenv
from urllib.parse import urlparse
from flask import Flask, jsonify, request
from botocore.exceptions import ClientError

load_dotenv()

B2_BUCKET_NAME = os.getenv("B2_BUCKET_NAME")
B2_ENDPOINT_URL = os.getenv("B2_ENDPOINT_URL")

parsed_endpoint = urlparse(B2_ENDPOINT_URL)
region_name = parsed_endpoint.hostname.split(".")[1]

s3_client = boto3.client(
    service_name="s3",
    endpoint_url=B2_ENDPOINT_URL,
    aws_access_key_id=os.getenv("B2_APPLICATION_KEY_ID"),
    aws_secret_access_key=os.getenv("B2_APPLICATION_KEY"),
    region_name=region_name,
)

app = Flask(__name__)


# Use a global PostgreSQL connection instead of creating a new one for each request in production
def get_db_connection():
    return psycopg2.connect(os.getenv("DATABASE_URL"))


# Replace this with your actual user authentication logic
def get_authenticated_user_id(request):
    # Example: Validate Authorization header, session cookie, etc.
    return "user_123"  # Static ID for demonstration


# 1. Generate presigned URL for upload
@app.route("/presign-b2-upload", methods=["POST"])
def presign_b2_upload_route():
    try:
        user_id = get_authenticated_user_id(request)
        data = request.get_json()
        file_name = data.get("fileName")
        content_type = data.get("contentType")
        if not file_name or not content_type:
            raise ValueError("fileName and contentType required in JSON body")
        object_key = f"{uuid.uuid4()}-{file_name}"
        public_file_url = f"{B2_ENDPOINT_URL}/{B2_BUCKET_NAME}/{object_key}"
        presigned_url = s3_client.generate_presigned_url(
            "put_object",
            Params={
                "Bucket": B2_BUCKET_NAME,
                "Key": object_key,
                "ContentType": content_type,
            },
            ExpiresIn=300,  # 5 minutes expiry
        )
        return jsonify(
            {
                "success": True,
                "presignedUrl": presigned_url,
                "objectKey": object_key,
                "publicFileUrl": public_file_url,
            }
        ), 200
    except (ClientError, ValueError) as e:
        print(f"Presign Error: {e}")
        return jsonify(
            {"success": False, "error": f"Failed to prepare upload: {e}"}
        ), 500
    except Exception as e:
        print(f"Unexpected Presign Error: {e}")
        return jsonify({"success": False, "error": "Server error"}), 500
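
# Optional endpoint — not part of the original guide. For *private* B2 buckets,
# a similar route can hand out short-lived presigned *read* (GET) URLs on
# demand, as described in "Accessing file metadata and files" below. A minimal
# sketch reusing the same s3_client; the route name is illustrative.
@app.route("/presign-b2-download", methods=["POST"])
def presign_b2_download_route():
    try:
        data = request.get_json()
        object_key = data.get("objectKey")
        if not object_key:
            raise ValueError("objectKey required")
        read_url = s3_client.generate_presigned_url(
            "get_object",
            Params={"Bucket": B2_BUCKET_NAME, "Key": object_key},
            ExpiresIn=300,  # short-lived, read-only access
        )
        return jsonify({"success": True, "readUrl": read_url}), 200
    except Exception as e:
        print(f"Presign Download Error: {e}")
        return jsonify({"success": False, "error": "Server error"}), 500
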
# 2. Save metadata after client upload confirmation
@app.route("/save-b2-metadata", methods=["POST"])
def save_b2_metadata_route():
    conn = None
    cursor = None
    try:
        user_id = get_authenticated_user_id(request)
        data = request.get_json()
        object_key = data.get("objectKey")
        public_file_url = data.get("publicFileUrl")
        if not object_key or not public_file_url:
            raise ValueError("objectKey and publicFileUrl required in JSON body")
        conn = get_db_connection()
        cursor = conn.cursor()
        cursor.execute(
            """
            INSERT INTO b2_files (object_key, file_url, user_id)
            VALUES (%s, %s, %s)
            """,
            (object_key, public_file_url, user_id),
        )
        conn.commit()
        print(f"Metadata saved for B2 object: {object_key}")
        return jsonify({"success": True}), 201
    except (psycopg2.Error, ValueError) as e:
        print(f"Metadata Save Error: {e}")
        return jsonify(
            {"success": False, "error": "Failed to save metadata"}
        ), 500
    except Exception as e:
        print(f"Unexpected Metadata Save Error: {e}")
        return jsonify({"success": False, "error": "Server error"}), 500
    finally:
        if cursor:
            cursor.close()
        if conn:
            conn.close()


if __name__ == "__main__":
    port = int(os.environ.get("PORT", 3000))
    app.run(host="0.0.0.0", port=port, debug=True)
```

**Explanation**

1. **Setup:** Initializes Flask, the `boto3` S3 client configured for B2 (endpoint, region, credentials), and `psycopg2`.
2. **Authentication:** Placeholder `get_authenticated_user_id` needs replacing with real authentication logic.
3. **Upload endpoints:**
   - **`/presign-b2-upload`:** Generates a unique `object_key` and the corresponding `public_file_url`, then uses `boto3`'s `generate_presigned_url` for `'put_object'` to get a temporary upload URL.
   - **`/save-b2-metadata`:** Called after client upload. Saves the `object_key`, `public_file_url`, and `userId` to the `b2_files` table. Database errors (including unique-key violations on `object_key`) are caught and reported.
4. In production, use a global PostgreSQL connection pool instead of opening a new connection for each request.

## Testing the upload workflow

Testing the presigned URL flow involves multiple steps:

1. **Get presigned URL:** Send a `POST` request to your `/presign-b2-upload` endpoint with a JSON body containing `fileName` and `contentType`.

   **Using cURL:**

   ```bash
   curl -X POST http://localhost:3000/presign-b2-upload \
     -H "Content-Type: application/json" \
     -d '{"fileName": "test-b2.png", "contentType": "image/png"}'
   ```

   You should receive a JSON response with a `presignedUrl`, `objectKey`, and `publicFileUrl`:

   ```json
   {
     "success": true,
     "presignedUrl": "https://s3.<region>.backblazeb2.com/<bucket-name>/<object-key>?...",
     "objectKey": "<object-key>",
     "publicFileUrl": "https://s3.<region>.backblazeb2.com/<bucket-name>/<object-key>"
   }
   ```

   Note the `presignedUrl`, `objectKey`, and `publicFileUrl` from the response. You will use these in the next steps.

2. **Upload file to B2:** Use the received `presignedUrl` to upload the actual file using an HTTP `PUT` request. The `Content-Type` header must match the one used to generate the URL.

   **Using cURL:**

   ```bash
   curl -X PUT "<presignedUrl>" \
     --upload-file /path/to/your/test-b2.png \
     -H "Content-Type: image/png"
   ```

   Replace `<presignedUrl>` with the actual URL from step 1. A successful upload typically returns HTTP `200 OK`.

3. **Save metadata:** Send a `POST` request to your `/save-b2-metadata` endpoint with the `objectKey` and `publicFileUrl` from step 1.

   **Using cURL:**

   ```bash
   curl -X POST http://localhost:3000/save-b2-metadata \
     -H "Content-Type: application/json" \
     -d '{"objectKey": "<object-key>", "publicFileUrl": "<public-file-url>"}'
   ```

   You should receive a JSON response indicating success:

   ```json
   { "success": true }
   ```

**Expected outcome:**

- The file appears in your B2 bucket (check the Backblaze B2 web UI).
- A new row appears in your `b2_files` table in Neon.

## Accessing file metadata and files

Storing metadata in Neon allows your application to easily retrieve references to the files hosted on B2. Query the `b2_files` table from your application's backend when needed.

**Example SQL query:**

Retrieve files for user 'user_123':

```sql
SELECT
    id,
    object_key,       -- Key (path/filename) in B2
    file_url,         -- Base public URL (only useful if bucket is Public)
    user_id,          -- User associated with the file
    upload_timestamp
FROM
    b2_files
WHERE
    user_id = 'user_123'; -- Use actual authenticated user ID
```

**Using the data:**

- The query returns metadata stored in Neon.
- **Accessing the file:**
  - If your bucket is **Public**, you can use the `file_url` directly in your application (e.g., `<img>` tags, download links).
  - If your bucket is **Private**, the stored `file_url` is likely irrelevant. You **must** generate a **presigned download URL** (a GET URL) on demand using your backend. This involves a similar process to generating the upload URL but using `GetObjectCommand` (JS) or `generate_presigned_url('get_object', ...)` (Python) with read permissions, as sketched in the optional `/presign-b2-download` route above. This provides secure, temporary read access.

This pattern effectively separates file storage and delivery concerns (handled by Backblaze B2) from structured metadata management (handled by Neon), leveraging the strengths of both services.

## Resources

- [Backblaze B2 Cloud Storage documentation](https://www.backblaze.com/docs/cloud-storage-developer-quick-start-guide)
- [Backblaze B2 S3 Compatible API](https://www.backblaze.com/docs/cloud-storage-s3-compatible-api)
- [Backblaze B2 Application Keys](https://www.backblaze.com/docs/cloud-storage-application-keys)
- [Neon documentation](https://neon.com/docs/introduction)
- [Neon RLS](https://neon.com/docs/guides/neon-rls)

---

# Source: https://neon.com/llms/guides-backup-restore.txt

# Backup & restore

> The "Backup & Restore" documentation for Neon outlines the procedures for creating backups and restoring data within the Neon database environment, ensuring data integrity and availability.

## Source

- [Backup & restore HTML](https://neon.com/docs/guides/backup-restore): The original HTML version of this documentation

**Note** Snapshots in Beta: The **Snapshots** feature is now in Beta and available to all users. Snapshot limits: 1 on the Free plan and 10 on paid plans. Automated snapshot schedules are available on paid plans except for the Agent plan. If you need higher limits, please reach out to [Neon support](https://neon.com/docs/introduction/support).

Use the **Backup & restore** page in the Neon Console to instantly restore a branch to a previous state or create and restore snapshots of your data. This feature combines **instant point-in-time restore** and **snapshots** to help you recover from accidental changes, data loss, or schema issues.

The **Enhanced view** toggle in the Neon Console lets you access the Backup & Restore page with snapshot capabilities. When enabled, you can create and manage snapshots alongside instant point-in-time restore. Toggle it off to return to the original Restore page if needed.

---

## What you can do

- ✅ Instantly restore a branch
- ✅ Preview data before restoring
- ✅ Create snapshots manually
- ✅ Schedule automated snapshots
- ✅ Restore from a snapshot

---

## Instantly restore a branch

Instantly restore your branch to a specific time in its history.

> Instant restore is only supported for root branches. Typically, this is your project's `production` branch.
> [Learn more](https://neon.com/docs/manage/branches#root-branch).

Tab: Console

You can restore from any time that falls within your project's [restore window](https://neon.com/docs/manage/projects#configure-your-restore-window).

1. **Select a time**

   Click the date & time selector, choose a date & time, and click **Restore**.

   You'll see a confirmation modal that outlines what will happen:

   - Your branch will be restored to its state at the selected date & time
   - Your current branch will be saved as a backup, in case you want to revert

   At this point, you can either click **Restore** to proceed or select **Preview data** to inspect the data first.

2. **Preview the data**

   To preview the data to make sure you've selected the right restore point, you can:

   - **Browse data** in the **Tables** view to explore a read-only view of the data at the selected point in time
   - **Query data** directly from the restore page to run read-only SQL against the selected restore point
   - **Compare schemas** with the schema diff tool to see how your current schema differs from the one at the selected restore point

3. **Restore**

   Click **Restore** to complete the restore operation, or **Cancel** to back out. You can also restore directly from any of the **Preview data** pages.

When you restore, a backup branch is automatically created (named `<branch-name>_old_<timestamp>`) in case you need to revert back. You can find this branch on the **Branches** page. For information about removing backup branches, see [Deleting backup branches](https://neon.com/docs/introduction/branch-restore#deleting-backup-branches).

Tab: CLI

To restore a branch to an earlier point in time, use the syntax `^self` in the `<source>` field of the `branches restore` command. For example:

```bash
neon branches restore development ^self@2025-01-01T00:00:00Z --preserve-under-name development_old
```

This command resets the target branch `development` to its state at the start of 2025. The command also preserves the original state of the branch in a backup branch called `development_old` using the `preserve-under-name` parameter (mandatory when resetting to self). For full CLI documentation for `branches restore`, see [branches restore](https://neon.com/docs/reference/cli-branches#restore).

Tab: API

To restore a branch using the API, use the endpoint:

```bash
POST /projects/{project_id}/branches/{branch_id_to_restore}/restore
```

This endpoint lets you restore a branch using the following request parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| **source_branch_id** | `string` | Yes | The ID of the branch you want to restore from. To restore to the latest data (head), omit `source_timestamp` and `source_lsn`. To restore a branch to its own history (`source_branch_id` equals the branch's own ID), you must include a time reference (`source_timestamp` or `source_lsn`) and a backup branch name (`preserve_under_name`). |
| **source_lsn** | `string` | No | A Log Sequence Number (LSN) on the source branch. The branch will be restored with data up to this LSN. |
| **source_timestamp** | `string` | No | A timestamp indicating the point in time on the source branch to restore from. Use RFC 3339 format for the date-time string. |
| **preserve_under_name** | `string` | No | If specified, a backup is created: the latest version of the branch's state is preserved under a new branch using the specified name. **Note:** This field is required if the branch has children (all child branches will be moved to the newly created branch) or if you are restoring a branch to its own history (`source_branch_id` equals the branch's own ID). |

#### Restoring a branch to its own history

In the following example, we are restoring branch `br-twilight-river-31791249` to an earlier point in time, `2024-02-27T00:00:00Z`, with a new backup branch named `backup-before-restore`. Note that the branch ID in the URL matches the value for `source_branch_id`.

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/floral-disk-86322740/branches/br-twilight-river-31791249/restore \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "source_branch_id": "br-twilight-river-31791249",
  "source_timestamp": "2024-02-27T00:00:00Z",
  "preserve_under_name": "backup-before-restore"
}
' | jq
```

## Create snapshots manually

Snapshots capture the state of your branch at a point in time. You can create snapshots manually (on root branches only). You can restore to these snapshots from any branch in your project.

Tab: Console

To create a snapshot manually, click **Create snapshot**. This captures the current state of your data and saves it as a **Manual snapshot**. It's a good idea to create a snapshot before making significant changes to your schema or data.

Tab: API

You can create a snapshot from a branch using the [Create snapshot](https://api-docs.neon.tech/reference/createsnapshot) endpoint. A snapshot can be created from a specific timestamp (RFC 3339 format) or LSN (e.g., `16/B3733C50`) within the branch's restore window. The `timestamp` and `lsn` parameters are mutually exclusive — you can use one or the other, not both.

```bash
curl -X POST "https://console.neon.tech/api/v2/projects/project_id/branches/branch_id/snapshot" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -d '{
    "timestamp": "2025-07-29T21:00:00Z",
    "name": "my_snapshot",
    "expires_at": "2025-08-05T22:00:00Z"
  }' | jq
```

The parameters used in the example above:

- `timestamp`: A point in time to create the snapshot from (RFC 3339 format).
- `name`: A user-defined name for the snapshot.
- `expires_at`: The timestamp when the snapshot will be automatically deleted (RFC 3339 format).

**Related API references:**

- [Create snapshot](https://api-docs.neon.tech/reference/createsnapshot)
- [List project snapshots](https://api-docs.neon.tech/reference/listsnapshots)
- [Update snapshot](https://api-docs.neon.tech/reference/updatesnapshot)
- [Delete snapshot](https://api-docs.neon.tech/reference/deletesnapshot)

## Create snapshot schedules

Schedule automated snapshots to run at regular intervals — daily, weekly, or monthly — to ensure consistent backups without manual intervention. Snapshot schedules are configured per branch and only apply to root branches.

Tab: Console

To create or modify a snapshot schedule:

1. **Open the schedule editor**

   From the **Backup & restore** page, click **Edit schedule** to open the backup schedule configuration dialog.
2. **Select a schedule frequency**

   Choose from the following options:

   - **No schedule** — Disables automated snapshots (default)
   - **Daily** — Creates a snapshot every day at a specified time
   - **Weekly** — Creates a snapshot on a specific day of the week
   - **Monthly** — Creates a snapshot on a specific day of the month

3. **Configure schedule details**

   Depending on your selected frequency, configure how often you want to create snapshots and how long to keep them.

Once configured, snapshots created by the schedule will appear on the **Backup & restore** page with a label indicating they were created automatically.

### Snapshot retention

Snapshots are automatically deleted after their retention period expires. You can adjust retention settings at any time by editing the schedule. Note that:

- Shorter retention periods help manage snapshot limits on your plan
- Deleted snapshots cannot be recovered
- Manual snapshots are not affected by schedule retention settings

Tab: API

You can view and update backup schedules for branches using the Neon API. For complete API documentation, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

**View backup schedule**

Retrieves the current backup schedule configuration for a branch using the [View backup schedule](https://api-docs.neon.tech/reference/getsnapshotschedule) endpoint.

```bash
GET /projects/{project_id}/branches/{branch_id}/backup_schedule
```

```bash
curl "https://console.neon.tech/api/v2/projects/<project_id>/branches/<branch_id>/backup_schedule" \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

**Example response:**

```json
{
  "schedule": [
    {
      "frequency": "daily",
      "hour": 23,
      "retention_seconds": 1209600
    }
  ]
}
```

Here, a `retention_seconds` value of `1209600` corresponds to 14 days.

**Update backup schedule**

Updates the backup schedule configuration for a branch using the [Update backup schedule](https://api-docs.neon.tech/reference/setsnapshotschedule) endpoint. You can set daily, weekly, or monthly schedules with custom retention periods.

```bash
PUT /projects/{project_id}/branches/{branch_id}/backup_schedule
```

```bash
curl -X PUT "https://console.neon.tech/api/v2/projects/<project_id>/branches/<branch_id>/backup_schedule" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "schedule": [
      {
        "frequency": "daily",
        "hour": 23,
        "retention_seconds": 604800
      }
    ]
  }' | jq
```

This example sets a daily snapshot at hour `23` with a retention of `604800` seconds (7 days).

## Restore from a snapshot

You can restore from any snapshot in your project using one of two methods:

- **One-step restore** – Instantly restore data from the snapshot into the existing branch. The branch name and connection string remain the same, but the branch ID changes.
- **Multi-step restore** – Create a new branch from the snapshot. Use this option if you want to inspect or test the data before switching to the new branch.

### One-step restore

Use this option if you want to restore the snapshot data immediately without inspecting the data first.

Tab: Console

1. Locate the snapshot you want to use and click **Restore → One-step restore**.

2. The **One-step restore** modal explains the operation:

   - The restore operation will occur instantly.
   - The current branch will be restored to the snapshot state.
   - A branch named `<branch-name>_old` will be created as a backup. Other snapshots you may have taken previously remain attached to this branch.

   Click **Restore** to proceed with the operation.
3. Your branch is immediately restored to the snapshot state, and the `<branch-name>_old` branch is created, which you'll find on the **Branches** page in the Neon Console.

After you verify that the restore operation was successful, you can delete the backup branch if you no longer need it.

Tab: API

A one-step restore operation is performed using the [Restore snapshot](https://api-docs.neon.tech/reference/restoresnapshot) endpoint. This operation creates a new branch, restores the snapshot to the new branch, and moves computes from your current branch to the new branch.

```bash
curl -X POST "https://console.neon.tech/api/v2/projects/project_id/snapshots/snapshot_id/restore" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -d '{
    "name": "restored_branch",
    "finalize_restore": true
  }' | jq
```

Parameters:

- `name`: (Optional) Name of the new branch with the restored snapshot data. If not provided, a default branch name will be generated.
- `finalize_restore`: Set to `true` to finalize the restore immediately. Finalizing the restore moves computes from your current branch to the new branch with the restored snapshot data for a seamless restore operation — no need to change the connection details in your application.
- `target_branch_id`: (Optional but recommended) The ID of the branch you want to replace when finalizing the restore. If omitted, subsequent snapshot restores may target the branch renamed to `<branch-name> (old)` from a previous restore, not your intended production branch.

**Note**: If you plan to apply multiple snapshots in succession, always supply `target_branch_id` to ensure the restore is finalized against the correct branch (typically your current production branch). Without it, a second snapshot may be applied to the previously renamed "(old)" branch.

**Related API references:**

- [Restore snapshot](https://api-docs.neon.tech/reference/restoresnapshot)
- [List project snapshots](https://api-docs.neon.tech/reference/listsnapshots)

### Multi-step restore

Use this option if you need to inspect the restored data before you switch over to the new branch.

Tab: Console

1. Locate the snapshot you want to use and click **Restore → Multi-step restore**.

2. The **Multi-step restore** modal explains the operation:

   - The restore will occur instantly
   - Your current branch will remain unchanged
   - A new branch with the snapshot data will be created

3. Clicking **Restore** creates the new branch with the restored data and directs you to the **Branch overview** page where you can:

   - **Get connection details** for the new branch to preview the data restored from the snapshot
   - **Migrate connections and settings** to move your database URLs and compute settings from the old branch to the new branch so you don't have to update the connection configuration in your application

Tab: API

1. **Restore the snapshot to a new branch**

   The first step in a multi-step restore operation is to restore the snapshot to a new branch using the [Restore snapshot](https://api-docs.neon.tech/reference/restoresnapshot) endpoint:

   ```bash
   curl -X POST "https://console.neon.tech/api/v2/projects/project_id/snapshots/snapshot_id/restore" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $NEON_API_KEY" \
     -d '{
       "name": "my_restored_branch",
       "finalize_restore": false
     }' | jq
   ```

   Parameters:

   - `name`: (Optional) Name of the new branch with the restored snapshot data. If not provided, a default branch name will be generated.
- `finalize_restore`: Set to `false` so that you can inspect the new branch before finalizing the restore operation.
- `target_branch_id`: (Optional but recommended) Specify the branch ID you intend to replace when you later finalize the restore (typically your production branch). Providing this avoids subsequent operations defaulting to the `<branch_name> (old)` branch created by an earlier restore.

**Note**: You can find the `snapshot_id` using the [List project snapshots](https://api-docs.neon.tech/reference/listsnapshots) endpoint.

```bash
curl -X GET "https://console.neon.tech/api/v2/projects/{project_id}/snapshots" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

**Note**: If you will finalize the restore later or plan multiple restores, include `target_branch_id` during the restore call to anchor the operation to the correct target branch.

2. **Inspect the new branch**

After restoring the snapshot, you can connect to the new branch and run queries to inspect the data. You can get the branch connection string from the Neon Console or using the [Retrieve connection URI](https://api-docs.neon.tech/reference/getconnectionuri) endpoint.

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/projects/{project_id}/connection_uri?branch_id={branch_id}&database_name={db_name}&role_name={role_name}&pooled=true' \
  --header 'accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" | jq
```

3. **Finalize the restore**

If you're satisfied with the data on the new branch, finalize the restore operation using the [Finalize restore](https://api-docs.neon.tech/reference/finalizerestorebranch) endpoint. This step performs the following actions:

- Moves your original branch's computes to the new branch and restarts the computes.
- Renames the new branch to the original branch's name.
- Renames the original branch to `<branch_name> (old)`. Other snapshots you may have taken remain attached to this branch.

```bash
curl -X POST "https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}/finalize_restore" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

Parameters:

- `project_id`: The Neon project ID.
- `branch_id`: The branch ID of the branch created by the snapshot restore operation.

---

# Source: https://neon.com/llms/guides-bemi.txt

# Create an automatic audit trail with Bemi

> The document explains how to set up an automatic audit trail using Bemi within Neon, detailing the steps to configure and manage audit logs for tracking database changes effectively.

## Source

- [Create an automatic audit trail with Bemi HTML](https://neon.com/docs/guides/bemi): The original HTML version of this documentation

[Bemi](https://bemi.io/) is an open-source solution that plugs into Postgres and ORMs such as Prisma, TypeORM, SQLAlchemy, and Ruby on Rails to track database changes automatically. It unlocks robust context-aware audit trails and time travel querying inside your application.

Designed with simplicity and non-invasiveness in mind, Bemi doesn't require alterations to your existing database structure. It operates in the background, empowering you with data change tracking features.

In this guide, we'll show you how to connect your Neon database to Bemi to create an automatic audit trail.
## Prerequisites

- A [Bemi account](https://bemi.io/)
- A [Neon account](https://console.neon.tech/)
- Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin

## Enable logical replication in Neon

Bemi tracks changes made in a Postgres database through Change Data Capture (CDC), which is a process of identifying and capturing changes made to your database tables in real-time. In Postgres, CDC is supported by the Postgres logical replication feature. In this step, we'll enable logical replication for your Neon Postgres project.

**Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect.

To enable logical replication in Neon:

1. Select your project in the Neon Console.
2. On the Neon **Dashboard**, select **Settings**.
3. Select **Logical Replication**.
4. Click **Enable** to enable logical replication.

You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor):

```sql
SHOW wal_level;
 wal_level
-----------
 logical
```

## Connect your Neon database to Bemi

The following instructions assume you are connecting with a Postgres role created via the Neon Console, API, or CLI. These roles are automatically granted membership in a `neon_superuser` group, which has the Postgres `REPLICATION` privilege. The role you use to connect to Bemi requires this privilege. If you prefer to create a dedicated read-only role for use with Bemi, see [Use a read-only Postgres role for Bemi](https://neon.com/docs/guides/bemi#use-a-read-only-postgres-role-for-bemi).

To connect your database to Bemi:

1. In Neon, retrieve your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It will look similar to this:

   ```text
   postgresql://neondb_owner:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require
   ```

2. In Bemi, select **Databases** > **Add Database** to open the **Connect PostgreSQL Database** dialog.
3. Enter the Neon database connection details from your connection string. For example, given the connection string shown above, enter the details in the **Connect PostgreSQL Database** dialog as shown below. Your values will differ except for the port number. Neon uses the default Postgres port, `5432`.

   - **Host**: ep-cool-darkness-123456.us-east-2.aws.neon.tech
   - **Port**: 5432
   - **Database Name**: neondb
   - **Username**: neondb_owner
   - **Password**: AbC123dEf

   You can also use the **Environment** field to specify whether the configuration is for a **Production**, **Staging**, or **Test** environment.

4. After entering your connection details, click **Add Database**.
5. Configure the tables you want to track changes for and choose whether to track new tables automatically. You can change this selection later, if necessary. Click **Save** to continue.
6. Wait a few minutes while Bemi provisions the infrastructure. When this operation completes, you've successfully configured a Bemi Postgres source for your Neon database.
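If Bemi reports connection or permission problems while provisioning, you can sanity-check the basics from your side first. A minimal sketch using `psql`, assuming `$DATABASE_URL` holds the same Neon connection string you entered in Bemi:

```bash
# Confirm logical replication is enabled and that the role can log in.
psql "$DATABASE_URL" -c "SHOW wal_level;"       # expected output: logical
psql "$DATABASE_URL" -c "SELECT current_user;"  # the role Bemi will connect with
```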
You'll be able to track data changes through the Bemi Browser UI page, where you can filter by **Operation** (`Create`, `Update`, `Delete`), **Table**, or **Primary Key**. You can also view data changes by environment if you have configured more than one.

## Use a read-only Postgres role for Bemi

If preferred, you can create a dedicated read-only Postgres role for connecting your Neon database to Bemi. To do so, run the commands below. The commands assume your database resides in the `public` schema in Postgres. If your database resides in a different schema, adjust the commands as necessary to specify the correct schema name.

- `CREATE ROLE`: Creates a new read-only user for Bemi to read database changes.
- `CREATE PUBLICATION`: Creates a "channel" that we'll subscribe to and track changes in real-time.
- `REPLICA IDENTITY FULL`: Enhances records stored in WAL to record the previous state ("before") in addition to the new state ("after"), which is tracked by default.

```sql
-- Create read-only user with REPLICATION permission
CREATE ROLE [username] WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE REPLICATION PASSWORD '[password]';

-- Grant SELECT access to tables for selective tracking
GRANT SELECT ON ALL TABLES IN SCHEMA public TO [username];

-- Grant SELECT access to new tables created in the future for selective tracking
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO [username];

-- Create "bemi" PUBLICATION to enable logical replication, listing the tables you want to track
CREATE PUBLICATION bemi FOR TABLE [table1], [table2];

-- Create a procedure to set REPLICA IDENTITY FULL for tables to track the "before" state on DB row changes
CREATE OR REPLACE PROCEDURE _bemi_set_replica_identity() AS $$
DECLARE
  current_tablename TEXT;
BEGIN
  FOR current_tablename IN
    SELECT tablename FROM pg_tables LEFT JOIN pg_class ON relname = tablename
    WHERE schemaname = 'public' AND relreplident != 'f'
  LOOP
    EXECUTE format('ALTER TABLE %I REPLICA IDENTITY FULL', current_tablename);
  END LOOP;
END $$ LANGUAGE plpgsql;

-- Call the created procedure
CALL _bemi_set_replica_identity();
```

**Note**: After creating a read-only role, you can find the connection details for this role by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Use this role when connecting your Neon database to Bemi, as described [above](https://neon.com/docs/guides/bemi#connect-your-neon-database-to-bemi).

## Allow inbound traffic

If you're using Neon's IP Allow feature to limit IP addresses that can connect to Neon, you will need to allow inbound traffic from Bemi. [Contact Bemi](mailto:hi@bemi.io) to get the static IPs that need to be allowlisted. For information about configuring allowed IPs in Neon, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow).

## References

- [The ultimate guide to PostgreSQL data change tracking](https://blog.bemi.io/the-ultimate-guide-to-postgresql-data-change-tracking/)
- [Logical replication - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html)
- [Publications - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-publication.html)

---

# Source: https://neon.com/llms/guides-benchmarking-latency.txt

# Benchmarking latency in Neon's serverless Postgres

> The document outlines methods for measuring and analyzing latency in Neon's serverless Postgres, focusing on benchmarking techniques to evaluate performance metrics specific to Neon's infrastructure.
## Source - [Benchmarking latency in Neon's serverless Postgres HTML](https://neon.com/docs/guides/benchmarking-latency): The original HTML version of this documentation Benchmarking database query latency is inherently complex, requiring careful consideration of numerous variables and testing methodologies. Neon's serverless Postgres environment adds additional layers to this complexity due to compute auto-suspension, connection protocol differences, and geographic distribution. This guide provides detailed methodologies for separating cold-start costs from operational latency, selecting optimal connection types, and designing tests that accurately reflect production conditions. ## Understanding cold vs. hot queries When benchmarking Neon databases, you'll encounter two distinct types of queries: - **Cold queries**: Occur when a previously suspended compute resource is activated to process a request. This activation typically adds a few hundred milliseconds of latency. Cold queries are common in development or test environments where databases aren't running continuously. - **Hot queries**: Execute on an already-active database instance, delivering consistent low latency. These represent typical performance in production environments where databases run continuously or remain active most of the time. Free-tier Neon databases automatically suspend after 5 minutes of inactivity. Paid plans allow you to disable the auto-suspend timeout to eliminate cold starts entirely. The Neon [Scale plan](https://neon.com/docs/introduction/plans) lets you disable or configure the setting, enabling you to customize your testing approach. See [Compute Lifecycle](https://neon.com/docs/introduction/compute-lifecycle) and [Auto-suspend Configuration](https://neon.com/docs/introduction/auto-suspend) for more details. ## Benchmarking methodology For accurate benchmarking, always measure cold and hot queries separately: 1. **Cold query testing**: - Ensure your database is in a suspended state - Make a request to trigger compute activation - Measure this connection time, which includes the startup overhead 2. **Hot query testing**: - After triggering compute activation with a cold query - Make subsequent requests within the active window - Measure these connection times, which reflect normal operation This methodology isolates the cold start overhead from normal operating performance, giving you a clearer picture of both typical performance and worst-case latency. ## Testing environment considerations Before running benchmarks, determine exactly what kind of latency you want to measure: - **Server-to-database latency**: If you're testing how quickly your application server can communicate with Neon, run benchmarks from the same location as your server. This is typically the most relevant metric for API performance. - **Client-to-database latency**: If you're testing direct client connections (rare in production), benchmark from client locations. Once you've determined what you're measuring: - **Test from your production region**: Geographic proximity is the primary factor in connection latency. Run benchmarks from the same region as your production environment to get accurate results. If your Neon database is in `us-east-1`, execute benchmarks from a server in that AWS region. - **Avoid localhost testing**: Testing from your local workstation doesn't reflect real-world conditions. In production, databases are typically queried from deployed servers, not client machines. 
Avoid testing across unrealistic distances that don't represent your production setup, as this introduces network overhead your users won't experience. For more on geographic factors affecting latency, see [Connection Latency and Timeouts](https://neon.com/docs/connect/connection-latency). ## Connection types and their impact [Neon's serverless driver](https://neon.com/docs/serverless/serverless-driver) supports two connection protocols: HTTP and WebSocket, each with distinctly different performance profiles. While some modern edge platforms now support direct TCP connections, many serverless environments still have limitations around persistent connections or TCP support. Neon's HTTP and WebSocket methods work across all serverless platforms, with each protocol having different latency characteristics and feature trade-offs depending on your query patterns. Understanding these differences is crucial for accurate benchmarking. For a comprehensive comparison, see [Choosing Connection Types](https://neon.com/docs/connect/choose-connection). ### HTTP connections - **Performance profile**: Optimized for queries with minimal connection overhead - **Use cases**: - Serverless functions that need low-latency query execution - Applications running multiple queries in parallel (HTTP can outperform WebSockets for parallel execution) - Scenarios where queries don't depend on each other - **Limitations**: Doesn't support sessions, interactive transactions, NOTIFY, or COPY protocol - **When to benchmark**: Use for measuring performance of stateless query operations, both individual and parallel - **Optimization**: Connection caching can further reduce latency ### WebSocket connections - **Performance profile**: Higher initial connection overhead but significantly faster for subsequent queries - **Use cases**: Optimal for applications that execute multiple queries over a maintained connection - **Features**: Supports full Postgres functionality including sessions, transactions, and all Postgres protocols - **When to benchmark**: Measure both connection establishment time and subsequent query execution separately - **Initialization**: Requires multiple round-trips between client and server to establish ### Benchmarking different connection types When comparing HTTP vs WebSocket connections, you'll typically observe different latency patterns: - **HTTP connections**: Show consistent low latency for individual queries and excel at parallel query execution - **WebSocket connections**: Show higher initial connection latency (about 3-5x slower than HTTP) but very low latency for subsequent sequential queries Consider your query patterns when choosing a connection type: - For parallel queries or independent operations, HTTP often performs better - For sequential queries where each depends on the previous result, WebSockets can be more efficient after the initial connection - The break-even point typically occurs around 2-3 sequential queries, though this varies by region and workload The runtime environment (Edge vs traditional serverless) can also impact connection performance characteristics. **Testing approach:** - For WebSockets: Establish the connection first, then measure query execution time separately. This reflects real-world usage where connections are reused. - For HTTP: Measure individual query execution time including any per-query connection overhead. 
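To make the cold/hot methodology described above concrete, here is a minimal bash sketch that measures the two cases separately. It assumes `psql` and GNU `date` (for millisecond timestamps, so Linux rather than macOS), that `DATABASE_URL` holds your Neon connection string, and that the compute is suspended before the first run. Note that each `psql` invocation opens a new connection, so this measures the full connect-plus-query roundtrip, which is useful for isolating cold starts but does not model long-lived connection performance:

```bash
#!/usr/bin/env bash
# Sketch: separate cold-start latency from hot query latency.
# Assumptions: psql installed, GNU date (%N support), DATABASE_URL set,
# and the compute suspended before the first measurement.

measure() {
  local start end
  start=$(date +%s%3N)                          # milliseconds since epoch
  psql "$DATABASE_URL" -qAt -c 'SELECT 1;' > /dev/null
  end=$(date +%s%3N)
  echo "$((end - start)) ms"
}

echo "Cold query (includes compute activation):"
measure

echo "Hot queries (compute already active):"
for _ in 1 2 3 4 5; do
  measure
done
```

Run it from a server in the same region as your database, and report the cold and hot numbers separately rather than averaging them together.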
For implementation details on both connection methods, refer to the [Serverless Driver Documentation](https://neon.com/docs/serverless/serverless-driver). ## Real-world usage pattern simulation Design your benchmarks to simulate how your application actually interacts with Neon: - **Use persistent connections**: For web servers or long-running applications, initialize the database connection before measuring query timings. Run a series of queries on this persistent connection. If your production environment uses connection pooling (which reuses database connections across requests), ensure your benchmarks account for this - pooled connections significantly reduce connection overhead after initial pool creation. See [Connection Pooling](https://neon.com/docs/connect/connection-pooling) for implementation details. - **Avoid one-query-per-process testing**: While useful for understanding cold starts, simplistic tests that connect, query, and disconnect don't reflect long-running application performance. - **Match your application pattern**: - If your app keeps connections alive, focus on post-connection query latency - If your app is serverless and frequently creates new connections, measure both scenarios but analyze them separately For examples of different connection patterns and their implementation, see [Connection Examples](https://neon.com/docs/connect/choose-connection). ## Neon latency benchmarks dashboard Neon provides a [Latency Benchmarks Dashboard](https://neon.com/demos/regional-latency) that measures latency between serverless functions and Neon databases across different regions. The benchmark specifically tracks: - Roundtrip time for executing simple SELECT queries - Network latency between function and database regions - Database connection establishment time - Performance differences between HTTP and WebSocket connections - Cold vs hot query performance This data helps you understand expected latencies based on your specific region and connection method. The dashboard is open source and [available on GitHub](https://github.com/neondatabase-labs/latency-benchmarks). If you encounter unexpected results during your benchmarking, consult the [Connection Troubleshooting](https://neon.com/docs/connect/connect-intro#troubleshoot-connection-issues) documentation to identify potential issues. ## Conclusion Benchmarking Neon requires understanding the unique characteristics of serverless Postgres. By separating cold and hot query measurements, testing from appropriate locations, and selecting the right connection methods, you'll obtain accurate performance metrics that reflect what your applications will experience in production. For further information on connection latency, see the [Neon Documentation](https://neon.com/docs/connect/connection-latency). --- # Source: https://neon.com/llms/guides-branch-archiving.txt # Branch archiving > The "Branch Archiving" documentation outlines the process for archiving inactive branches in Neon, detailing steps to manage storage efficiently by preserving branch data without active maintenance. 
## Source

- [Branch archiving HTML](https://neon.com/docs/guides/branch-archiving): The original HTML version of this documentation

What you will learn:

- How Neon archives inactive branches
- How branches are unarchived
- How to monitor branch archiving

Related docs:

- [Archive storage](https://neon.com/docs/introduction/architecture-overview#archive-storage)
- [Branches list command (Neon CLI)](https://neon.com/docs/reference/cli-branches#list)
- [Get branch details (Neon API)](https://api-docs.neon.tech/reference/getprojectbranch)

To minimize storage costs, Neon automatically archives branches that are:

- Older than **14 days**
- Not accessed for the past **24 hours**

Both conditions must be true for a branch to be archived. However, a branch **cannot** be archived if it:

- Has an **unarchived child branch**
- Has **computes running**
- Is **in transition** (e.g., currently being created or unarchived)
- Is a **protected branch** ([learn more](https://neon.com/docs/guides/protected-branches))

**Note**: If your Neon project was inactive for more than a week before the introduction of branch archiving on November 11, 2024, the thresholds mentioned above do not come into effect until the next time you access branches in your project.

## Unarchiving a branch

**No action is required to unarchive a branch. It happens automatically.** Connecting to an archived branch, querying it, or performing some other action that accesses it will trigger the unarchive process. Branches with large amounts of data may experience slightly slower connection and query times while a branch is being unarchived.

For projects on paid Neon plans, there is a limit of **100 unarchived branches per project**. If a project reaches this limit, Neon archives branches **without waiting** for the 14-day or 24-hour archiving criteria described above.

**Note**: When a branch is unarchived, its parent branches, all the way up to the root branch, are also unarchived.

The following actions will automatically unarchive a branch, transferring the branch's data back to regular Neon storage:

- [Connecting to or querying the branch from a client or application](https://neon.com/docs/connect/connect-from-any-app)
- [Querying the branch from the Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor)
- [Viewing the branch on the Tables page in the Neon Console](https://neon.com/docs/guides/tables)
- [Creating a child branch](https://neon.com/docs/manage/branches#create-a-branch)
- [Creating a role on a branch](https://neon.com/docs/manage/roles#create-a-role)
- [Creating a database on a branch](https://neon.com/docs/manage/databases#create-a-database)
- [Resetting the branch from its parent](https://neon.com/docs/manage/branches#reset-a-branch-from-parent)
- [Performing a restore operation on a branch](https://neon.com/docs/guides/branch-restore)
- [Setting the branch as protected](https://neon.com/docs/guides/protected-branches)
- Running [Neon CLI](https://neon.com/docs/reference/neon-cli) commands or [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) calls that access the branch

## Identifying archived branches

Archived branches can be identified by an archive icon on the **Branches** page in the Neon Console. If you select an archived branch on the **Branches** page to view its details, you can see when the branch was archived.

Archive and unarchive operations can also be monitored in the Neon Console or using the Neon API.
See [Monitoring branch archiving](https://neon.com/docs/guides/branch-archiving#monitoring-branch-archiving).

## About archive storage

For Neon projects created in AWS regions, inactive branches are archived in Amazon S3 storage. For Neon projects created in Azure regions, branches are archived in Azure Blob storage. For more information about how archive storage works in Neon, refer to [Archive storage](https://neon.com/docs/introduction/architecture-overview#archive-storage) in our architecture documentation.

## Is branch archiving configurable?

Branch archiving thresholds are not configurable. Archiving and unarchiving happen automatically according to the thresholds and conditions described above.

If you know when a branch should be deleted, set an expiration date rather than wait for automatic archiving. This guarantees automatic deletion at the specified time and works well for CI/CD pipelines and temporary environments. See [Branch expiration](https://neon.com/docs/guides/branch-expiration) for details.

## Disabling branch archiving

You cannot fully disable branch archiving, but you can prevent a branch from being archived by defining it as a **protected branch**. For instructions, see [Set a branch as protected](https://neon.com/docs/manage/branches#set-a-branch-as-protected). Protected branches are supported on Neon paid plans.

## Monitoring branch archiving

You can monitor branch archive and unarchive operations from the **System operations** tab on the **Monitoring** page in the Neon Console. Look for the following operations:

- `Timeline archive`: The time when the branch archive operation was initiated
- `Timeline unarchive`: The time when the branch unarchive operation was initiated

For related information, see [System operations](https://neon.com/docs/manage/operations).

You can also monitor branch archiving using the Neon CLI or Neon API.

Tab: CLI

The Neon CLI [branches list](https://neon.com/docs/reference/cli-branches#list) command shows a branch's `Current State`. Branch states include:

- `init` - the branch is being created but is not available for querying.
- `ready` - the branch is fully operational and ready for querying. Expect normal query response times.
- `archived` - the branch is stored in cost-effective archive storage. Expect slow query response times.

```bash
neon branches list --project-id green-hat-46829796
┌───────────────────────────┬──────┬─────────┬───────────────┬──────────────────────┐
│ Id                        │ Name │ Default │ Current State │ Created At           │
├───────────────────────────┼──────┼─────────┼───────────────┼──────────────────────┤
│ br-muddy-firefly-a7kzf0d4 │ main │ true    │ ready         │ 2024-10-30T14:59:57Z │
└───────────────────────────┴──────┴─────────┴───────────────┴──────────────────────┘
```

Tab: API

The Neon API's [Get branch details](https://api-docs.neon.tech/reference/getprojectbranch) endpoint can retrieve a branch's state:

```bash
curl --request GET \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id} \
  --header 'accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY"
```

The response includes a `current_state`, a `state_changed_at` timestamp for when the current state began, and a `pending_state` if the branch is currently transitioning between states. State values include:

- `init` - the branch is being created but is not available for querying.
- `ready` - the branch is fully operational and ready for querying. Expect normal query response times.
- `archived` - the branch is stored in cost-effective archive storage. Expect slow query response times. This example shows a branch that is currently `archived`. The `state_changed_at` shows a timestamp indicating when the state last changed. ```json {9,10} { "branch": { "id": "br-broad-smoke-w2sqcu0i", "project_id": "proud-darkness-91591984", "parent_id": "br-falling-glade-w25m64ct", "parent_lsn": "0/1F78F48", "parent_timestamp": "2024-10-02T08:54:18Z", "name": "development", "current_state": "archived", "state_changed_at": "2024-11-06T14:20:58Z", "logical_size": 30810112, "creation_source": "console", "primary": false, "default": false, "protected": false, ... ``` --- # Source: https://neon.com/llms/guides-branch-expiration.txt # Branch expiration > The "Branch Expiration" document outlines the process and settings for managing the automatic expiration of branches in Neon, detailing how to configure expiration policies to optimize resource usage. ## Source - [Branch expiration HTML](https://neon.com/docs/guides/branch-expiration): The original HTML version of this documentation ## Overview Branch expiration allows you to set automatic deletion timestamps on branches. When a branch reaches its expiration time, it is automatically deleted. **Tip** Quick guide: **Console:** When creating a branch, **Automatically delete branch after** is checked by default with 1 day selected. You can choose 1 hour, 1 day, or 7 days, or uncheck to disable. When updating an existing branch, you can select a custom date and time. **CLI:** Use `--expires-at` with [RFC 3339 format](https://neon.com/docs/guides/branch-expiration#timestamp-format-requirements) (e.g., `2025-07-15T18:02:16Z`). Note: Expiration must be explicitly set; there is no default. **API:** Use `expires_at` with [RFC 3339 format](https://neon.com/docs/guides/branch-expiration#timestamp-format-requirements) (e.g., `2025-07-15T18:02:16Z`). Note: Expiration must be explicitly set; there is no default. What you will learn: - When and why to use branch expiration - How to set expiration timestamps via Console, CLI, and API - How expiration timestamps and TTL intervals work - Restrictions and best practices Related docs: - [Branching with the Neon CLI](https://neon.com/docs/guides/branching-neon-cli) - [Branching with the Neon API](https://neon.com/docs/manage/branches#branching-with-the-neon-api) - [Manage branches](https://neon.com/docs/manage/branches) - [Branching workflows](https://neon.com/docs/introduction/branching#branching-workflows) ## Why use branch expiration? Branch expiration is ideal for temporary branches that have predictable lifespans: - **CI/CD environments** - Test branches that should clean up after pipeline completion - **Feature development** - Time-boxed feature branches with known deadlines - **Automated testing** - Ephemeral test environments created by scripts - **AI workflows** - Temporary environments managed without human intervention Without automatic expiration, these branches accumulate over time, increasing storage costs and project clutter. **Tip**: Example expiration durations: CI/CD pipelines (2-4 hours), demos (24-48 hours), feature development (1-7 days), long-term testing (30 days). ## How it works Branch expiration uses a time-to-live (TTL) model. When you set an expiration on a branch, you're defining how long the branch should exist before automatic deletion. When you set an expiration timestamp on a branch: 1. 
The system stores both:

- **Expiration timestamp** (`expires_at`) - The scheduled date and time when the branch will be deleted
- **TTL interval** (`ttl_interval_seconds`) - The duration between creation/update and expiration (e.g., 24 hours = 86400 seconds); this value is read-only

2. A background process monitors branches and deletes them after their expiration time is reached
3. If you reset a branch from its parent, the TTL countdown restarts using the original interval

**Important**: Branch deletion is permanent, and deleted branches cannot be recovered. All associated data and compute endpoints are also deleted. Verify expiration times carefully before setting them.

## Setting branch expiration

You can set, update, or remove expiration timestamps through three interfaces:

- **Console** - When creating a branch, **Automatically delete branch after** is checked by default with 1 day selected. You can choose 1 hour, 1 day, or 7 days, or uncheck to disable. When updating an existing branch, you can select a custom date and time.
- **CLI** - Use the `--expires-at` flag when creating or updating a branch with [RFC 3339](https://neon.com/docs/guides/branch-expiration#timestamp-format-requirements) format. Note: Expiration must be explicitly set; there is no default.
- **API** - Use the `expires_at` parameter with [RFC 3339](https://neon.com/docs/guides/branch-expiration#timestamp-format-requirements) format. Note: Expiration must be explicitly set; there is no default.

See the [Examples](https://neon.com/docs/guides/branch-expiration#examples) section below for detailed usage of each method.

## Timestamp format requirements

The `expires_at` parameter must use [RFC 3339](https://tools.ietf.org/html/rfc3339#section-5.6) format with second-level precision:

**Format patterns:**

```
YYYY-MM-DDTHH:MM:SSZ        (UTC)
YYYY-MM-DDTHH:MM:SS+HH:MM   (Positive UTC offset)
YYYY-MM-DDTHH:MM:SS-HH:MM   (Negative UTC offset)
```

**Valid examples:**

- `2025-07-15T18:02:16Z` (UTC)
- `2025-07-15T18:02:16-05:00` (Eastern Standard Time)
- `2025-07-15T18:02:16+09:00` (Japan Standard Time)

**Requirements:**

- Time zone is required: use either `Z` for UTC or a numeric offset like `+05:00`
- Fractional seconds are optional but only second precision is stored
- Timestamp must be in the future
- Maximum expiration is 30 days from the current time

**Note**: Common errors include missing timezone (`2025-07-15T18:02:16`), past timestamps, or combining `Z` with offset (`2025-07-15T18:02:16Z-05:00`).

## Restrictions

To maintain system integrity, expiration timestamps cannot be added to:

- **Protected branches** - Cannot expire protected branches or protect branches with expiration
- **Default branches** - Cannot expire default branches or set expiring branches as default
- **Parent branches** - Cannot expire branches that have children or create children from expiring branches

Branch expiration is not supported with these Neon features:

- **Data API**
- **Neon Auth**

**Note**: When a branch expires and is deleted, all associated compute endpoints are also deleted. Ensure any critical workloads are migrated before expiration.

## Examples

### Creating a branch with expiration

Tab: Console

1. Navigate to the **Branches** page in the Console
2. Click **New branch**
3. Enter branch name and select parent branch
4. By default, **Automatically delete branch after** is checked with 1 day selected. You can choose 1 hour, 1 day, or 7 days, or uncheck to disable.
5. Click **Create**

Tab: CLI

```bash {6,15}
# Create branch expiring at specific date/time
neon branches create \
  --project-id <project_id> \
  --name feature-test \
  --parent development \
  --expires-at "2026-01-29T18:02:16Z"

# Create branch expiring in 2 hours (using dynamic date)
# Linux/GNU: $(date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%SZ)
# macOS/BSD: $(date -u -v+2H +%Y-%m-%dT%H:%M:%SZ)
neon branches create \
  --project-id <project_id> \
  --name ci-test \
  --parent development \
  --expires-at "$(date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%SZ)"
```

Tab: API

```bash {11,21,22}
# Create branch that expires in 24 hours
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "branch": {
      "name": "feature-test",
      "parent_id": "br-main-12345",
      "expires_at": "2026-01-29T18:02:16Z"
    }
  }'

# Example response
{
  "branch": {
    "id": "br-feature-67890",
    "name": "feature-test",
    "parent_id": "br-main-12345",
    "expires_at": "2026-01-29T18:02:16Z",
    "ttl_interval_seconds": 86400,
    "created_at": "2026-01-28T18:02:16Z"
  }
}
```

### Updating branch expiration

Tab: Console

1. Navigate to the **Branches** page in the Console
2. Choose the **Update expiration** option for your branch
3. To update: Select a new date and time
4. To remove: Uncheck **Automatically delete branch after**
5. Click **Save**

Tab: CLI

```bash {4,12,18}
# Update expiration to new timestamp
neon branches set-expiration \
  <branch_id> \
  --expires-at "2026-01-29T12:00:00Z" \
  --project-id <project_id>

# Extend expiration by 7 days from now
# Linux/GNU: $(date -u -d '+7 days' +%Y-%m-%dT%H:%M:%SZ)
# macOS/BSD: $(date -u -v+7d +%Y-%m-%dT%H:%M:%SZ)
neon branches set-expiration \
  <branch_id> \
  --expires-at "$(date -u -d '+7 days' +%Y-%m-%dT%H:%M:%SZ)" \
  --project-id <project_id>

# Remove expiration from a branch
neon branches set-expiration \
  <branch_id> \
  --expires-at null \
  --project-id <project_id>
```

Tab: API

```bash {9,21}
# Update branch expiration to specific date
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id} \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "branch": {
      "expires_at": "2026-01-29T12:00:00Z"
    }
  }'

# Remove expiration from a branch
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id} \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "branch": {
      "expires_at": null
    }
  }'
```

### Retrieving branch information

Check expiration status of your branches:

Tab: Console

1. Navigate to the **Branches** page in the Console
2. Click on the desired branch to open the **Branch Overview**
3. See information similar to the following if branch expiration is set.

Tab: CLI

```bash
neon branches info <branch_id> --project-id <project_id>
```

Tab: API

```bash
curl --request GET \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id} \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY"
```

## API reference

### Create project branch

[`POST /projects/{project_id}/branches`](https://api-docs.neon.tech/reference/createprojectbranch)

- **`expires_at`** (optional) - Timestamp for automatic deletion in [RFC 3339](https://neon.com/docs/guides/branch-expiration#timestamp-format-requirements) format

### Update project branch

[`PATCH /projects/{project_id}/branches/{branch_id}`](https://api-docs.neon.tech/reference/updateprojectbranch)

- **`expires_at`** (optional, nullable) - Update or remove expiration
  - Timestamp value: Sets/updates expiration
  - `null`: Removes expiration
  - Omitted: No change

### Response fields

Branches with expiration include two key fields:

- **`expires_at`** - The scheduled deletion timestamp ([RFC 3339](https://neon.com/docs/guides/branch-expiration#timestamp-format-requirements) format)
- **`ttl_interval_seconds`** - The original TTL duration in seconds (read-only)

#### How these fields work together

When you create a branch with a TTL of 24 hours, `ttl_interval_seconds` is set to 86400 (seconds). The `expires_at` value is calculated as creation time plus 24 hours. If you reset the branch from its parent, the `expires_at` value is recalculated using the preserved `ttl_interval_seconds` value, starting from the reset time. The interval itself remains unchanged.

**Example response:**

```json {4,5}
{
  "branch": {
    "id": "br-feature-67890",
    "expires_at": "2026-01-29T18:02:16Z",
    "ttl_interval_seconds": 86400,
    "created_at": "2026-01-28T18:02:16Z"
  }
}
```

In this example, the branch will be deleted 24 hours after creation.

---

# Source: https://neon.com/llms/guides-branching-github-actions.txt

# Automate branching with GitHub Actions

> The document "Automate branching with GitHub Actions" guides Neon users on setting up GitHub Actions to automate the creation and management of database branches, streamlining workflows within Neon's environment.

## Source

- [Automate branching with GitHub Actions HTML](https://neon.com/docs/guides/branching-github-actions): The original HTML version of this documentation

Neon provides a set of GitHub Actions to automate the creation, deletion, and management of database branches in your Neon projects. These actions allow you to automate database branching as part of your CI/CD workflows, enabling you to create ephemeral database branches for pull requests, run tests against isolated data, and clean up resources automatically.

This guide covers how to set up and use the Neon GitHub Actions for managing database branches, including creating, deleting, resetting branches, and comparing schemas.

## Getting started

To use Neon's GitHub Actions, you need to add your Neon API key and Project ID to your GitHub repository. This allows the actions to authenticate with your Neon project and perform operations on your database branches.

### Automatically set up with the Neon GitHub integration

The easiest way to get started is with the [Neon GitHub integration](https://neon.com/docs/guides/neon-github-integration). It connects your Neon project to a GitHub repository, automatically creating the necessary `NEON_API_KEY` secret and `NEON_PROJECT_ID` variable for you.
If you use the integration, you can skip the manual setup steps below.

### Manually set up your repository

1. **Create a Neon API key.** For instructions, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key).
2. **Add the key to GitHub.** In your GitHub repository, navigate to **Settings** > **Secrets and variables** > **Actions**.
3. Click **New repository secret**.
4. Name the secret `NEON_API_KEY` and paste your API key into the value field.
5. Click **Add secret**.
6. You will also need your Neon **Project ID**, which you can find in the **Settings** page of the Neon console.
7. Add the Project ID to your GitHub repository as a **variable**:
   - In your GitHub repository, navigate to **Settings** > **Secrets and variables** > **Actions**.
   - Select **Variables** and click **New repository variable**.
   - Name the variable `NEON_PROJECT_ID` and set its value to your Neon Project ID.
   - Click **Add variable**.

You can now use the Neon GitHub Actions in your workflows by referencing them in your `.github/workflows` YAML files.

## Available actions

Neon provides the following GitHub Actions for working with Neon branches. For detailed information on usage, inputs, outputs, and examples, please refer to the official documentation for each action on the GitHub Marketplace.

- **[Create branch action](https://github.com/marketplace/actions/neon-create-branch-github-action)**: Creates a new database branch in your Neon project. This is ideal for setting up isolated environments for preview deployments or running tests against a feature branch.
- **[Delete branch action](https://github.com/marketplace/actions/neon-database-delete-branch)**: Deletes a specified database branch. Use this to automate the cleanup of ephemeral branches after a pull request is merged or closed.
- **[Reset branch action](https://github.com/marketplace/actions/neon-database-reset-branch-action)**: Resets a branch to the latest state of its parent. This is useful for refreshing a development or staging branch with the most up-to-date data.
- **[Schema diff action](https://github.com/marketplace/actions/neon-schema-diff-github-action)**: Compares the schemas of two branches and posts a diff summary as a comment on a pull request, allowing for easy review of schema changes.
## Example applications

For complete, deployable examples, explore these starter repositories:

- [Preview branches with Cloudflare Pages](https://github.com/neondatabase/preview-branches-with-cloudflare): Demonstrates using GitHub Actions workflows to create a Neon branch for every Cloudflare Pages preview deployment
- [Preview branches with Vercel](https://github.com/neondatabase/preview-branches-with-vercel): Demonstrates using GitHub Actions workflows to create a Neon branch for every Vercel preview deployment
- [Preview branches with Fly.io](https://github.com/neondatabase/preview-branches-with-fly): Demonstrates using GitHub Actions workflows to create a Neon branch for every Fly.io preview deployment
- [Neon Twitter app](https://github.com/neondatabase/neon_twitter): Demonstrates using GitHub Actions workflows to create a Neon branch for schema validation and perform migrations

---

# Source: https://neon.com/llms/guides-branching-intro.txt

# Get started with branching

> This document introduces Neon users to the branching feature, detailing how to create and manage branches within their database environments to facilitate development workflows.

## Source

- [Get started with branching HTML](https://neon.com/docs/guides/branching-intro): The original HTML version of this documentation

Find detailed information and instructions about Neon's branching feature and how you can integrate branching with your development workflows.

## What is branching?

Learn about branching and how you can apply it in your development workflows.

- [Learn about branching](https://neon.com/docs/introduction/branching): Learn about Neon's branching feature and how to use it in your development workflows
- [Database branching for Postgres](https://neon.com/blog/database-branching-for-postgres-with-neon): Blog: Read about how Neon's branching feature works and what it means for your workflows
- [Branch archiving](https://neon.com/docs/guides/branch-archiving): Learn how Neon automatically archives inactive branches to cost-effective storage
- [Schema-only branches](https://neon.com/docs/guides/branching-schema-only): Learn how you can protect sensitive data with schema-only branches

## Automate branching

Integrate branching into your CI/CD pipelines and workflows with the Neon API, CLI, GitHub Actions, and Githooks.

- [Branching with the Neon API](https://neon.com/docs/guides/branching-neon-api): Learn how to instantly create and manage branches with the Neon API
- [Branching with the Neon CLI](https://neon.com/docs/guides/branching-neon-cli): Learn how to instantly create and manage branches with the Neon CLI
- [Branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions): Automate branching with Neon's GitHub Actions for branching
- [Branching with Githooks](https://neon.com/blog/automating-neon-branch-creation-with-githooks): Blog: Learn how to automate branch creation with Githooks

## Preview deployments

Create a branch for each preview deployment with the [Neon-managed Vercel integration](https://neon.com/docs/guides/neon-managed-vercel-integration).
- [The Neon-Managed Vercel Integration](https://neon.com/docs/guides/neon-managed-vercel-integration): Connect your Vercel project and create a branch for each preview deployment
- [Preview deployments with Vercel](https://neon.com/blog/neon-vercel-integration): Blog: Read about full-stack preview deployments using the Neon Vercel Integration
- [A database for every preview](https://neon.com/blog/branching-with-preview-environments): Blog: A database for every preview environment with GitHub Actions and Vercel

## Test queries

Test potentially destructive or performance-impacting queries before you run them in production.

- [Branching — Testing queries](https://neon.com/docs/guides/branching-test-queries): Instantly create a branch to test queries before running them in production

## Data recovery and audits

Recover lost data or track down issues by restoring a branch to an earlier point in its history, or just create a point-in-time branch for historical analysis or any other reason.

- [Instant restore with Time Travel Assist](https://neon.com/docs/guides/branch-restore): Learn how to instantly recover your database to any point in time within your restore window
- [Time Travel](https://neon.com/docs/guides/time-travel-assist): Query point-in-time connections with Time Travel
- [Schema diff](https://neon.com/docs/guides/schema-diff): Visualize schema differences between branches to help with troubleshooting

## Example applications

Explore example applications that use Neon's branching feature.

- [Time Travel Demo](https://github.com/kelvich/branching_demo_bisect): Use Neon branching, the Neon API, and a bisect script to recover lost data
- [Neon Twitter app](https://github.com/neondatabase/neon_twitter): Use GitHub Actions to create and delete a branch with each pull request
- [Preview branches app](https://github.com/neondatabase/preview-branches-with-vercel): An application demonstrating using GitHub Actions with preview deployments in Vercel

---

# Source: https://neon.com/llms/guides-branching-neon-api.txt

# Branching with the Neon API

> The document "Branching with the Neon API" explains how to use the Neon API to create and manage database branches, enabling users to efficiently handle multiple development environments within Neon.

## Source

- [Branching with the Neon API HTML](https://neon.com/docs/guides/branching-neon-api): The original HTML version of this documentation

The examples in this guide demonstrate creating, viewing, and deleting branches using the Neon API. For other branch-related API methods, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

**Note**: The API examples that follow may only show some of the user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

The `jq` program specified in each example is an optional third-party tool that formats the `JSON` response, making it easier to read. For information about this utility, see [jq](https://stedolan.github.io/jq/).

## Prerequisites

A Neon API request requires an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). In the examples below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request.
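If you want to run the examples in the sections that follow exactly as written, you can export the key as an environment variable first (a small convenience step; `<neon_api_key>` is a placeholder):

```bash
# Make the API key available to the curl examples below.
# Replace <neon_api_key> with a key created in the Neon Console.
export NEON_API_KEY=<neon_api_key>
```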
## Create a branch with the API

The following Neon API method creates a branch. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranch).

```http
POST /projects/{project_id}/branches
```

The API method appears as follows when specified in a cURL command:

**Note**: This method does not require a request body. Without a request body, the method creates a branch from the project's default branch, and a compute is not created.

```bash
curl 'https://console.neon.tech/api/v2/projects/{project_id}/branches' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "endpoints": [
      {
        "type": "read_write"
      }
    ],
    "branch": {
      "parent_id": "br-wispy-dew-591433"
    }
  }' | jq
```

- The `project_id` for a Neon project is found on the **Settings** page in the Neon Console, or you can find it by listing the projects for your Neon account using the Neon API. It is a generated value that looks something like this: `autumn-disk-484331`.
- The `endpoints` attribute creates a compute, which is required to connect to the branch. Neon supports `read_write` and `read_only` compute types. A branch can be created with or without a compute. You can specify `read_only` to create a [read replica](https://neon.com/docs/guides/read-replica-guide).
- The `branch` attribute specifies the parent branch.
- The `parent_id` can be obtained by listing the branches for your project. See [List branches](https://neon.com/docs/guides/branching-neon-api#list-branches-with-the-api). The `parent_id` is the `id` of the branch you are branching from. A branch `id` has a `br-` prefix. You can branch from your Neon project's default branch or a non-default branch.

The response includes information about the branch, the branch's compute, and the `create_branch` and `start_compute` operations that were initiated.

```json
{
  "branch": {
    "id": "br-dawn-scene-747675",
    "project_id": "autumn-disk-484331",
    "parent_id": "br-wispy-dew-591433",
    "parent_lsn": "0/1AA6408",
    "name": "br-dawn-scene-747675",
    "current_state": "init",
    "pending_state": "ready",
    "created_at": "2022-12-08T19:55:43Z",
    "updated_at": "2022-12-08T19:55:43Z"
  },
  "endpoints": [
    {
      "host": "ep-small-bush-675287.us-east-2.aws.neon.tech",
      "id": "ep-small-bush-675287",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-dawn-scene-747675",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 1,
      "region_id": "aws-us-east-2",
      "type": "read_write",
      "current_state": "init",
      "pending_state": "active",
      "settings": {
        "pg_settings": {}
      },
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "created_at": "2022-12-08T19:55:43Z",
      "updated_at": "2022-12-08T19:55:43Z",
      "proxy_host": "us-east-2.aws.neon.tech"
    }
  ],
  "operations": [
    {
      "id": "22acbb37-209b-4b90-a39c-8460090e1329",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-dawn-scene-747675",
      "action": "create_branch",
      "status": "running",
      "failures_count": 0,
      "created_at": "2022-12-08T19:55:43Z",
      "updated_at": "2022-12-08T19:55:43Z"
    },
    {
      "id": "055b17e6-ffe3-47ab-b545-cfd7db6fd8b8",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-dawn-scene-747675",
      "endpoint_id": "ep-small-bush-675287",
      "action": "start_compute",
      "status": "scheduling",
      "failures_count": 0,
      "created_at": "2022-12-08T19:55:43Z",
      "updated_at": "2022-12-08T19:55:43Z"
    }
  ]
}
```

## List branches with the API

The following Neon API method lists branches for the specified project.
To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/listprojectbranches).

```http
GET /projects/{project_id}/branches
```

The API method appears as follows when specified in a cURL command:

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/branches' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

The `project_id` for a Neon project is found on the **Settings** page in the Neon Console, or you can find it by listing the projects for your Neon account using the Neon API. The response lists the project's default branch and any child branches. The name of the default branch in this example is `main`.

Response:

```json
{
  "branches": [
    {
      "id": "br-dawn-scene-747675",
      "project_id": "autumn-disk-484331",
      "parent_id": "br-wispy-dew-591433",
      "parent_lsn": "0/1AA6408",
      "name": "br-dawn-scene-747675",
      "current_state": "ready",
      "logical_size": 28,
      "created_at": "2022-12-08T19:55:43Z",
      "updated_at": "2022-12-08T19:55:43Z"
    },
    {
      "id": "br-wispy-dew-591433",
      "project_id": "autumn-disk-484331",
      "name": "main",
      "current_state": "ready",
      "logical_size": 28,
      "physical_size": 31,
      "created_at": "2022-12-07T00:45:05Z",
      "updated_at": "2022-12-07T00:45:05Z"
    }
  ]
}
```

## Delete a branch with the API

The following Neon API method deletes the specified branch. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/deleteprojectbranch).

```http
DELETE /projects/{project_id}/branches/{branch_id}
```

The API method appears as follows when specified in a cURL command:

```bash
curl -X 'DELETE' \
  'https://console.neon.tech/api/v2/projects/autumn-disk-484331/branches/br-dawn-scene-747675' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

- The `project_id` for a Neon project is found on the **Settings** page in the Neon Console, or you can find it by listing the projects for your Neon account using the Neon API.
- The `branch_id` can be found by listing the branches for your project. The `branch_id` is the `id` of a branch. A branch `id` has a `br-` prefix. See [List branches](https://neon.com/docs/guides/branching-neon-api#list-branches-with-the-api).

The response shows information about the branch being deleted and the `suspend_compute` and `delete_timeline` operations that were initiated.

```json
{
  "branch": {
    "id": "br-dawn-scene-747675",
    "project_id": "autumn-disk-484331",
    "parent_id": "br-shy-meadow-151383",
    "parent_lsn": "0/1953508",
    "name": "br-flat-darkness-194551",
    "current_state": "ready",
    "created_at": "2022-12-08T20:01:31Z",
    "updated_at": "2022-12-08T20:01:31Z"
  },
  "operations": [
    {
      "id": "c7ee9bea-c984-41ac-8672-9848714104bc",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-dawn-scene-747675",
      "endpoint_id": "ep-small-bush-675287",
      "action": "suspend_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2022-12-08T20:01:31Z",
      "updated_at": "2022-12-08T20:01:31Z"
    },
    {
      "id": "41646f65-c692-4621-9538-32265f74ffe5",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-dawn-scene-747675",
      "action": "delete_timeline",
      "status": "scheduling",
      "failures_count": 0,
      "created_at": "2022-12-06T01:12:10Z",
      "updated_at": "2022-12-06T01:12:10Z"
    }
  ]
}
```

You can verify that a branch is deleted by listing the branches for your project. See [List branches](https://neon.com/docs/guides/branching-neon-api#list-branches-with-the-api). The deleted branch should no longer be listed.
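For example, a quick scripted check (a sketch; it assumes `jq` and reuses the example project ID from above):

```bash
# Print the IDs of the project's remaining branches.
# br-dawn-scene-747675 should be absent after a successful delete.
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/branches' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" | jq -r '.branches[].id'
```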
## Restoring a branch using the API

To revert changes or recover lost data, you can use the branch restore endpoint in the Neon API.

```http
POST /projects/{project_id}/branches/{branch_id_to_restore}/restore
```

For details on how to use this endpoint to restore a branch to its own or another branch's history, restore a branch to the head of its parent, and other restore options, see [Instant restore using the API](https://neon.com/docs/guides/branch-restore#how-to-use-branch-restore).

## Creating a schema-only branch using the API

**Note**: The API is in Beta and subject to change.

To create a schema-only branch using the Neon API, use the [Create branch](https://api-docs.neon.tech/reference/createprojectbranch) endpoint with the `init_source` option set to `schema-only`, as shown below.

Required values include:

- Your Neon `project_id`
- The `parent_id`, which is the branch ID of the branch containing the schema you want to copy

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/wispy-salad-58347608/branches \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "branch": {
    "parent_id": "br-super-mode-w371g4od",
    "name": "my_schema_only_branch",
    "init_source": "schema-only"
  }
}
'
```

## Creating a branch with expiration using the API

**Note** Coming soon: This feature is currently available to members of our Early Access Program. Sign up [here](https://console.neon.tech/app/settings/early-access) or from your user profile settings in the [Neon Console](https://console.neon.tech/app/settings/early-access).

To create a branch with an automatic expiration timestamp using the Neon API, use the [Create branch](https://api-docs.neon.tech/reference/createprojectbranch) endpoint with the `expires_at` option. When a branch reaches its expiration time, it is automatically deleted.

Required values include:

- Your Neon `project_id`
- The `parent_id`, which is the branch ID of the branch you want to branch from
- The `expires_at` timestamp in [RFC 3339](https://tools.ietf.org/html/rfc3339#section-5.6) format, including a time zone (`Z` or an offset)
- The expiration must be in the future and no more than 30 days from now

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/wispy-salad-58347608/branches \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "branch": {
      "name": "feature-test-branch",
      "parent_id": "br-super-mode-w371g4od",
      "expires_at": "2024-12-15T18:02:16Z"
    },
    "endpoints": [
      {
        "type": "read_write"
      }
    ]
  }'
```

Example response (partial):

```json
{
  "branch": {
    "id": "br-feature-67890",
    "name": "feature-test-branch",
    "expires_at": "2024-12-15T18:02:16Z",
    "ttl_interval_seconds": 604800,
    "created_at": "2024-12-08T18:02:16Z"
  }
}
```

Key response fields for branch expiration:

- The `expires_at` field shows the scheduled deletion timestamp in RFC 3339 format
- The `ttl_interval_seconds` field is the original expiration interval, in seconds (read-only)

For more detailed information about branch expiration, including updating and removing expiration timestamps, see [Branch expiration](https://neon.com/docs/guides/branch-expiration).
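In automation, you typically compute `expires_at` relative to the current time instead of hardcoding it. The following TypeScript sketch is a minimal illustration, assuming `NEON_API_KEY` is set and reusing the illustrative project and parent IDs from the example above; `Date.prototype.toISOString()` produces an RFC 3339 timestamp with a `Z` time zone.

```typescript
// Sketch: create a branch that expires two hours from now.
// Assumes NEON_API_KEY is exported; the project and parent branch IDs are illustrative.
async function createExpiringBranch() {
  // RFC 3339 with a Z time zone, e.g. 2024-12-08T20:02:16.000Z; must be <= 30 days ahead.
  const expiresAt = new Date(Date.now() + 2 * 60 * 60 * 1000).toISOString();

  const res = await fetch('https://console.neon.tech/api/v2/projects/wispy-salad-58347608/branches', {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.NEON_API_KEY}`,
    },
    body: JSON.stringify({
      branch: { name: 'ci-ephemeral', parent_id: 'br-super-mode-w371g4od', expires_at: expiresAt },
      endpoints: [{ type: 'read_write' }],
    }),
  });

  const { branch } = (await res.json()) as { branch: { id: string; expires_at: string } };
  console.log(`Branch ${branch.id} expires at ${branch.expires_at}`);
}

createExpiringBranch();
```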
--- # Source: https://neon.com/llms/guides-branching-neon-cli.txt # Branching with the Neon CLI > The document "Branching with the Neon CLI" details how to use the Neon Command Line Interface to create and manage database branches, enabling users to efficiently handle multiple development environments within Neon. ## Source - [Branching with the Neon CLI HTML](https://neon.com/docs/guides/branching-neon-cli): The original HTML version of this documentation The examples in this guide demonstrate creating, viewing, and deleting branches using the Neon CLI. For other branch-related CLI commands, refer to [Neon CLI commands — branches](https://neon.com/docs/reference/cli-branches). This guide also describes how to use the `--api-key` option to authenticate CLI branching commands from the command line. The examples show the default `table` output format. The Neon CLI also supports `json` and `yaml` output formats. For example, if you prefer output in `json`, add `--output json` to your Neon CLI command. ## Prerequisites - The Neon CLI. See [Install the Neon CLI](https://neon.com/docs/reference/cli-install) for instructions. - To run CLI commands, you must either authenticate through your browser or supply an API key using the `--api-key` option. See [Connect with the Neon CLI](https://neon.com/docs/reference/neon-cli#connect). ## Create a branch with the CLI The following Neon CLI command creates a branch. If your Neon account has more than one project, you will be required to specify a project ID using the `--project-id` option. To view the CLI documentation for this command, refer to the [Neon CLI reference](https://neon.com/docs/reference/cli-branches#create). The command response includes the branch ID, the compute endpoint ID, and the connection URI for connecting to the branch. **Tip**: You can use the `--name` option with a `neon branches create` command to specify your own branch name instead of using the name generated by Neon. For example: `neon branches create --name mybranch`. Also, for any Neon CLI command, you can specify `--output json` to change the command output from the default table format to JSON format. ```bash neon branches create branch ┌───────────────────────┬───────────────────────┬─────────┬──────────────────────┬──────────────────────┐ │ Id │ Name │ Default │ Created At │ Updated At │ ├───────────────────────┼───────────────────────┼─────────┼──────────────────────┼──────────────────────┤ │ br-lucky-mud-08878834 │ br-lucky-mud-08878834 │ false │ 2023-07-24T20:22:42Z │ 2023-07-24T20:22:42Z │ └───────────────────────┴───────────────────────┴─────────┴──────────────────────┴──────────────────────┘ endpoints ┌────────────────────────┬──────────────────────┐ │ Id │ Created At │ ├────────────────────────┼──────────────────────┤ │ ep-mute-voice-52609794 │ 2023-07-24T20:22:42Z │ └────────────────────────┴──────────────────────┘ connection_uris ┌───────────────────────────────────────────────────────────────────────────────────────┐ │ Connection Uri │ ├───────────────────────────────────────────────────────────────────────────────────────┤ │ postgresql://[user]:[password]@[neon_hostname]/[dbname] │ └───────────────────────────────────────────────────────────────────────────────────────┘ ``` **Tip**: The Neon CLI provides a `neon connection-string` command you can use to extract a connection uri programmatically. See [Neon CLI commands — connection-string](https://neon.com/docs/reference/cli-connection-string). 
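Building on the tip above, a script can capture that connection string directly. The following TypeScript (Node) sketch is a minimal illustration, assuming the `neon` CLI is installed and authenticated; the branch ID is the illustrative value from the example output, and `<project-id>` is a placeholder.

```typescript
// Sketch: capture a branch's connection string from the Neon CLI in a Node script.
// Assumes the `neon` CLI is installed and authenticated; replace <project-id> with your project ID.
import { execFileSync } from 'node:child_process';

const connectionUri = execFileSync(
  'neon',
  ['connection-string', 'br-lucky-mud-08878834', '--project-id', '<project-id>'],
  { encoding: 'utf8' }
).trim();

console.log(connectionUri); // postgresql://[user]:[password]@[neon_hostname]/[dbname]
```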
## Create a branch from a non-default parent

Using the `--parent` option, you can specify any non-default branch that you want to use as the parent for your new branch, depending on the needs of your development workflow. In this example, we're creating a hotfix branch called `hotfix/critical-fix` using the `development` branch as the parent:

```bash
neon branches create --name hotfix/critical-fix --parent development --project-id crimson-voice-12345678
branch
┌───────────────────────┬─────────────────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                    │ Name                │ Default │ Created At           │ Updated At           │
├───────────────────────┼─────────────────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-misty-mud-a5poo34s │ hotfix/critical-fix │ false   │ 2024-04-23T17:04:10Z │ 2024-04-23T17:04:10Z │
└───────────────────────┴─────────────────────┴─────────┴──────────────────────┴──────────────────────┘
endpoints
┌────────────────────────┬──────────────────────┐
│ Id                     │ Created At           │
├────────────────────────┼──────────────────────┤
│ ep-orange-heart-123456 │ 2024-04-23T17:04:10Z │
└────────────────────────┴──────────────────────┘
connection_uris
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Connection Uri                                                                                                                 │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ postgresql://neondb_owner:123456@ep-orange-heart-a54grm9j.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

## List branches with the CLI

The following Neon CLI command lists branches in your Neon project. If your Neon account has more than one project, you will be required to specify a project ID using the `--project-id` option. To view the CLI documentation for this command, refer to the [Neon CLI reference](https://neon.com/docs/reference/cli-branches#list).

```bash
neon branches list --project-id crimson-voice-12345678
```

The output is a table listing each branch's ID, name, default status, and creation and update timestamps, in the same format as the `branches create` output shown above.

## Delete a branch with the CLI

The following Neon CLI command deletes the specified branch. If your Neon account has more than one project, you will be required to specify a project ID using the `--project-id` option. To view the CLI documentation for this command, refer to the [Neon CLI reference](https://neon.com/docs/reference/cli-branches#delete).
You can delete a branch by its ID or name.

```bash
neon branches delete br-lucky-mud-08878834
┌───────────────────────┬───────────────────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                    │ Name                  │ Default │ Created At           │ Updated At           │
├───────────────────────┼───────────────────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-lucky-mud-08878834 │ br-lucky-mud-08878834 │ false   │ 2023-07-24T20:22:42Z │ 2023-07-24T20:44:51Z │
└───────────────────────┴───────────────────────┴─────────┴──────────────────────┴──────────────────────┘
```

## Branching automation with the Neon CLI

The Neon CLI enables easy automation of branching operations for integration into your workflows or toolchains.

To facilitate authentication to Neon when running a CLI command, the Neon CLI allows you to use an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key).

To use an API key, you can store it in an environment variable on your system. This prevents the key from being hardcoded into your automation scripts or exposed in another way. For example, you can add the following line to your shell's profile file (`.bashrc` or `.bash_profile` for the bash shell):

```bash
export NEON_API_KEY=<neon_api_key>
```

After exporting your key, source the profile file (`source ~/.bashrc` or `source ~/.bash_profile`), or start a new terminal session.

You do not need to specify the variable name explicitly when using a Neon CLI command. A Neon CLI command looks for a `NEON_API_KEY` variable setting by default.

This configuration keeps the API key secure while still providing a way to authenticate your CLI commands. Remember to handle your API key with the same level of security as your other credentials.

## Branch expiration

For temporary environments, create branches with `--expires-at` to set a TTL for automatic deletion instead of manual cleanup:

```bash
# Create a branch that expires at a specific date and time
neon branches create --project-id <project-id> --name ci-test --parent <parent-branch> --expires-at "2025-07-15T18:02:16Z"

# Create a branch that expires in 2 hours (Linux/GNU)
neon branches create --project-id <project-id> --name ci-test --parent <parent-branch> --expires-at "$(date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%SZ)"

# Create a branch that expires in 2 hours (macOS/BSD)
neon branches create --project-id <project-id> --name ci-test --parent <parent-branch> --expires-at "$(date -u -v+2H +%Y-%m-%dT%H:%M:%SZ)"
```

You can also update or remove expiration from existing branches:

```bash
# Update expiration to a new timestamp
neon branches set-expiration <branch-id> --expires-at "2025-07-20T12:00:00Z" --project-id <project-id>

# Remove expiration from a branch
neon branches set-expiration <branch-id> --expires-at null --project-id <project-id>
```

For details and configuration instructions, refer to our [Branch expiration guide](https://neon.com/docs/guides/branch-expiration).
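Putting the automation pieces together, a CI script can create a short-lived branch and pull out its connection string in one step. The following TypeScript (Node) sketch is a minimal illustration, assuming the `neon` CLI is installed and `NEON_API_KEY` is exported as described above; `<project-id>` is a placeholder, and the parsed field names (`branch`, `connection_uris`) mirror the output sections shown earlier, so treat the exact JSON shape as an assumption.

```typescript
// Sketch: create an expiring CI branch with the Neon CLI and capture its connection string.
// Assumes the `neon` CLI is installed and NEON_API_KEY is exported; <project-id> is a placeholder.
import { execFileSync } from 'node:child_process';

const expiresAt = new Date(Date.now() + 2 * 60 * 60 * 1000).toISOString(); // expire in 2 hours

const out = execFileSync(
  'neon',
  [
    'branches', 'create',
    '--project-id', '<project-id>',
    '--name', `ci-test-${Date.now()}`,
    '--expires-at', expiresAt,
    '--output', 'json', // JSON output is easier to parse than the default table format
  ],
  { encoding: 'utf8' }
);

// Field names assumed to mirror the `branch` and `connection_uris` sections of the table output.
const result = JSON.parse(out);
console.log(result.branch?.id, result.connection_uris?.[0]?.connection_uri);
```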
## Resetting a branch from its parent

Depending on your development workflow, you might need to periodically reset a branch to match the latest state of its parent. This is useful, for example, when resetting a development branch back to the production branch before starting work on a new feature.

Use the following command to reset a branch to the current state (HEAD) of its parent branch:

```bash
neon branches reset <branch> --parent
```

Example: This example resets a feature branch to match the latest state of its parent branch:

```bash
neon branches reset feature/user-auth --parent
┌──────────────────────────┬───────────────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                       │ Name              │ Default │ Created At           │ Last Reset At        │
├──────────────────────────┼───────────────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-twilight-smoke-123456 │ feature/user-auth │ false   │ 2024-04-23T17:01:49Z │ 2024-04-23T17:57:35Z │
└──────────────────────────┴───────────────────┴─────────┴──────────────────────┴──────────────────────┘
```

**Note**: **Branch expiration behavior:** When you reset a branch that has an expiration set, the expiration timer restarts from the reset time using the original duration. For example, if your branch was originally set to expire in 24 hours, resetting gives it another full 24 hours from the reset time. This process recalculates the new `expires_at` value using the preserved `ttl_interval_seconds`, but the TTL interval itself remains unchanged. For more details, see [branch expiration](https://neon.com/docs/guides/branch-expiration).

If the branch you want to reset has child branches, you need to include the `--preserve-under-name` parameter. This will save the current state of your branch under a new name before performing the reset. The child branches will then show this newly named branch as their parent. This step ensures that your original branch can be reset cleanly, as all child branches will have been transferred to the new parent name.

For example, here we are resetting `feature/user-auth` to its parent while preserving its latest state under the branch name `feature/user-auth-backup`:

```bash
neon branches reset feature/user-auth --parent --preserve-under-name feature/user-auth-backup
┌────────────────────────────┬───────────────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                         │ Name              │ Default │ Created At           │ Last Reset At        │
├────────────────────────────┼───────────────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-twilight-smoke-a5ofkxry │ feature/user-auth │ false   │ 2024-04-23T17:01:49Z │ 2024-04-23T18:02:36Z │
└────────────────────────────┴───────────────────┴─────────┴──────────────────────┴──────────────────────┘
```

For more details, see [Reset from parent](https://neon.com/docs/guides/reset-from-parent).
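The expiration note above describes a simple recalculation: the preserved `ttl_interval_seconds` is re-applied from the reset time. The following TypeScript snippet just illustrates that arithmetic:

```typescript
// Illustration of the reset behavior described above: the preserved TTL interval
// is re-applied from the reset time to produce the new expires_at value.
function expiresAtAfterReset(resetTime: Date, ttlIntervalSeconds: number): string {
  return new Date(resetTime.getTime() + ttlIntervalSeconds * 1000).toISOString();
}

// A branch with a 24-hour TTL (86400 seconds) reset at 18:00 UTC now expires at 18:00 UTC the next day:
console.log(expiresAtAfterReset(new Date('2025-07-15T18:00:00Z'), 86400)); // 2025-07-16T18:00:00.000Z
```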
## Restoring a branch to its own or another branch's history

Using the CLI, you can restore a branch to an earlier point in its history or to another branch's history using the following command:

```bash
neon branches restore <target-branch> <source>
```

This command restores the branch `production` to an earlier timestamp in its own history, saving the branch's current state to a backup branch called `production_restore_backup_2024-05-06`:

```bash
neon branches restore production ^self@2024-05-06T10:00:00.000Z --preserve-under-name production_restore_backup_2024-05-06
```

Results of the operation:

```bash
INFO: Restoring branch br-purple-dust-a5hok5mk to the branch br-purple-dust-a5hok5mk timestamp 2024-05-06T10:00:00.000Z
Restored branch
┌─────────────────────────┬──────┬──────────────────────┐
│ Id                      │ Name │ Last Reset At        │
├─────────────────────────┼──────┼──────────────────────┤
│ br-purple-dust-a5hok5mk │ main │ 2024-05-07T09:45:21Z │
└─────────────────────────┴──────┴──────────────────────┘
Backup branch
┌─────────────────────────┬──────────────────────────────────────┐
│ Id                      │ Name                                 │
├─────────────────────────┼──────────────────────────────────────┤
│ br-flat-forest-a5z016gm │ production_restore_backup_2024-05-06 │
└─────────────────────────┴──────────────────────────────────────┘
```

For full details about the different restore options available with this command, see [Restoring using the CLI](https://neon.com/docs/guides/branch-restore#how-to-use-branch-restore).

---

# Source: https://neon.com/llms/guides-branching-schema-only.txt

# Schema-only branches

> The document explains how to create schema-only branches in Neon, allowing users to branch database schemas without duplicating data, facilitating efficient development and testing workflows.

## Source

- [Schema-only branches HTML](https://neon.com/docs/guides/branching-schema-only): The original HTML version of this documentation

**Note** Beta: This feature is in Beta. Please give us [Feedback](https://console.neon.tech/app/projects?modal=feedback) from the Neon Console or by connecting with us on [Discord](https://discord.gg/92vNTzKDGp).

Neon supports creating schema-only branches, letting you create branches that replicate only the database schema from a source branch, without copying any of the actual data. This feature is ideal for working with confidential information. Instead of duplicating this sensitive data, you can create a branch with just the database structure and populate it with randomized or anonymized data instead. This provides your team with a secure and compliant environment for developing and testing using Neon branches.

## Creating schema-only branches

You can create schema-only branches in the Neon Console, with the Neon CLI, or using the Neon API, in much the same way you create any Neon branch.

Tab: Neon Console

To create a schema-only branch from the Neon Console:

1. In the console, select your project.
2. Select **Branches**.
3. Click **Create branch** to open the branch creation dialog.
4. Under **Include**, select the **Schema-only** option.
5. Provide a name for the branch.
6. In the **From Branch** field, select the source branch. The schema from the source branch will be copied to your new schema-only branch.
7. Click **Create branch**.

Tab: CLI

To create a schema-only branch using the Neon CLI:

```bash
neon branches create --schema-only
```

If you have more than one project, you'll need to specify the `--project-id` option. See [Neon CLI - branch create](https://neon.com/docs/reference/cli-branches#create).
Tab: API

**Note**: The API is in Beta and subject to change.

To create a schema-only branch using the Neon API, use the [Create branch](https://api-docs.neon.tech/reference/createprojectbranch) endpoint with the `init_source` option set to `schema-only`, as shown below.

Required values include:

- Your Neon `project_id`
- The `parent_id`, which is the branch ID of the branch containing the schema you want to copy

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/wispy-salad-58347608/branches \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "branch": {
    "parent_id": "br-super-mode-w371g4od",
    "name": "my_schema_only_branch",
    "init_source": "schema-only"
  }
}
'
```

## Schema-only branching example

To try out schema-only branches:

1. Start by creating an `employees` table on your Neon project's `main` branch and adding some dummy data. You can do this from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any SQL client by copying and pasting the following statements:

   ```sql
   CREATE TABLE employees (
     employee_id SERIAL PRIMARY KEY,
     first_name VARCHAR(50),
     last_name VARCHAR(50),
     email VARCHAR(100),
     phone_number VARCHAR(15),
     job_title VARCHAR(50),
     salary NUMERIC(10, 2),
     hire_date DATE
   );

   INSERT INTO employees (first_name, last_name, email, phone_number, job_title, salary, hire_date)
   VALUES
     ('John', 'Doe', 'john.doe@example.com', '123-456-7890', 'Software Engineer', 95000.00, '2020-01-15'),
     ('Jane', 'Smith', 'jane.smith@example.com', '987-654-3210', 'Product Manager', 110000.00, '2019-03-22'),
     ('Alice', 'Johnson', 'alice.johnson@example.com', '555-123-4567', 'HR Specialist', 65000.00, '2021-06-10'),
     ('Bob', 'Brown', 'bob.brown@example.com', '555-987-6543', 'Data Analyst', 78000.00, '2018-09-05'),
     ('Charlie', 'Davis', 'charlie.davis@example.com', '444-555-6666', 'Marketing Manager', 95000.00, '2017-11-14'),
     ('Diana', 'Miller', 'diana.miller@example.com', '333-444-5555', 'Sales Representative', 72000.00, '2022-04-18'),
     ('Edward', 'Wilson', 'edward.wilson@example.com', '222-333-4444', 'DevOps Engineer', 98000.00, '2020-12-03'),
     ('Fiona', 'Clark', 'fiona.clark@example.com', '111-222-3333', 'UI/UX Designer', 85000.00, '2016-08-29'),
     ('George', 'Harris', 'george.harris@example.com', '999-888-7777', 'Financial Analyst', 90000.00, '2021-01-11'),
     ('Hannah', 'Martin', 'hannah.martin@example.com', '888-777-6666', 'Backend Developer', 92000.00, '2019-07-23');
   ```

2. Navigate to the **Tables** page in the Neon Console, and select your `main` branch from the breadcrumb menu at the top of the console. Your `employees` table will have both schema and data.

3. Create a schema-only branch following the instructions above. See [Creating schema-only branches](https://neon.com/docs/guides/branching-schema-only#creating-schema-only-branches). In this example, we've named the branch `employees_schema_only`.

4. On the **Tables** page, select your newly created `employees_schema_only` branch from the breadcrumb menu at the top of the console. You can see that the schema-only branch contains the schema, but no data. The same will be true for any table in any database on the schema-only branch; only the schema will be present.

## Connect to a schema-only branch

Connecting to a schema-only branch works the same way as connecting to any Neon branch. You'll connect via a compute associated with the branch.
Follow these steps to connect using `psql` and a connection string obtained from the Neon Console.

1. In the Neon Console, select a project.
2. From the project **Dashboard**, click **Connect**, and select your schema-only branch, the database, and the role you want to connect with.
3. Copy the connection string. A connection string includes your role name, the compute hostname, and the database name.

   ```bash
   postgresql://[user]:[password]@[neon_hostname]/[dbname]
   ```

## What's different about schema-only branches?

Unlike other branches, schema-only branches do not have a parent branch. Both the `main` branch of the project and the schema-only branch have no parent, indicated by a dash (`-`) in the **Parent** column on the **Branches** page in your Neon project.

Schema-only branches are independent [root branches](https://neon.com/docs/reference/glossary#root-branch), just like the `production` branch in your Neon project. When you create a schema-only branch, you're creating a new **root branch**.

### Key points about schema-only branches

- **No parent branch**: Schema-only branches are root branches. They do not have a parent branch.
- **No shared history**: Data added to a schema-only branch is independent and adds to your storage. There is no shared history with a parent.
- **Reset from parent is not supported**: With no parent branch, [reset from parent](https://neon.com/docs/manage/branches#reset-a-branch-from-parent) operations are not supported.
- **Restore is supported, but...**: performing a [restore](https://neon.com/docs/guides/branch-restore) operation on a schema-only branch copies both schema and data from the source branch.
- **Branch protection is supported**: Like any other branch, you can enable [branch protection](https://neon.com/docs/guides/protected-branches) for schema-only branches.

## Schema-only branch allowances

There are certain allowances associated with schema-only branches:

- A schema-only branch is a [root branch](https://neon.com/docs/reference/glossary#root-branch), and only a certain number of root branches are permitted per Neon project, depending on your Neon plan.
- The `main` root branch created with each Neon project counts toward the _root branch allowance per project_, as do certain [backup branches](https://neon.com/docs/reference/glossary#backup-branch) created by restore operations.
- On the Free plan, all branches in a project share a total storage limit of 0.5 GB. Schema-only branches count toward this limit like any other branch. On paid plans, storage limits are higher, but each schema-only branch has a maximum storage allowance, as outlined in the following table.

| Plan   | Root branch allowance per project | Maximum storage allowance per schema-only branch |
| :----- | :-------------------------------- | :----------------------------------------------- |
| Free   | 3                                 | 0.5 GB                                           |
| Launch | 5                                 | 3 GB                                             |
| Scale  | 25                                | 5 GB                                             |

Once you use up your root branch allowance, you will not be able to create additional schema-only branches. You will be required to remove existing root branches first.
## Source

- [Branching — Testing queries HTML](https://neon.com/docs/guides/branching-test-queries): The original HTML version of this documentation

Complex queries that modify data or alter schemas have the potential to be destructive. It is advisable to test these types of queries before running them in production. On other database systems, testing potentially destructive queries can be time and resource intensive. For example, testing may involve setting up a separate database instance and replicating data. With Neon, you can instantly create a database branch with a full copy-on-write clone of your production data in just a few clicks. When you finish testing, you can remove the branch just as easily.

**Tip** Working with sensitive data?: Neon also supports schema-only branching. [Learn more](https://neon.com/docs/guides/branching-schema-only).

This guide walks you through creating a branch of your production data, testing a potentially destructive query, and deleting the branch when you are finished.

1. [Create a test branch](https://neon.com/docs/guides/branching-test-queries#create-a-test-branch)
2. [Test your query](https://neon.com/docs/guides/branching-test-queries#test-your-query)
3. [Delete the test branch](https://neon.com/docs/guides/branching-test-queries#delete-the-test-branch)

For the purpose of this guide, let's assume you have a database in Neon with the following table and data:

```sql
CREATE TABLE Post (
  id INT PRIMARY KEY,
  title VARCHAR(255),
  content TEXT,
  author_name VARCHAR(100),
  date_published DATE
);
```

```sql
INSERT INTO Post (id, title, content, author_name, date_published)
VALUES
  (1, 'My first post', 'This is the content of the first post.', 'Alice', '2023-01-01'),
  (2, 'My second post', 'This is the content of the second post.', 'Alice', '2023-02-01'),
  (3, 'Old post by Bob', 'This is an old post by Bob.', 'Bob', '2020-01-01'),
  (4, 'Recent post by Bob', 'This is a recent post by Bob.', 'Bob', '2023-06-01'),
  (5, 'Another old post', 'This is another old post.', 'Alice', '2019-06-01');
```

## Create a test branch

1. In the Neon Console, select your project.
2. Select **Branches**.
3. Click **Create branch** to open the branch creation dialog.
4. Enter a name for the branch. This guide uses the name `my_test_branch`.
5. Select a parent branch. Select the branch defined as your default branch.
6. Under **Include data up to**, select the **Current point in time** option to create a branch with the latest available data from the parent branch (the default).
7. Click **Create new branch** to create your branch.

You are directed to the **Branches** page where you are shown the details for your new branch.

You can also create a test branch using the [Neon CLI](https://neon.com/docs/reference/cli-branches#create) or [Neon API](https://neon.com/docs/manage/branches#create-a-branch-with-the-api).

Tab: CLI

```bash
neon branches create --project-id <project_id> --name my_test_branch
```

Tab: API

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "branch": {
    "name": "my_test_branch"
  }
}
' | jq
```

## Test your query

Navigate to the **SQL Editor**, select the test branch, and run your query. For example, perhaps you are deleting blog posts from your database for a certain author published before a certain date, and you want to make sure the query only removes the intended records.
```sql
DELETE FROM Post
WHERE author_name = 'Alice'
  AND date_published < '2020-01-01';
```

Next, inspect the data to ensure the intended records were deleted, while others remained unaffected. This query allows you to quickly see if the number of records matches your expectations:

```sql
SELECT COUNT(*) FROM Post;
```

Before the `DELETE` query, there were 5 records. If the query ran correctly, this should now show 4.

## Delete the test branch

When you finish testing your query, you can delete the test branch:

1. In the Neon Console, select a project.
2. Select **Branches**.
3. Select the test branch from the table.
4. From the **Actions** menu on the branch overview page, select **Delete**.

You can also delete a branch using the [Neon CLI](https://neon.com/docs/reference/cli-branches#delete) or [Neon API](https://neon.com/docs/manage/branches#delete-a-branch-with-the-api).

Tab: CLI

```bash
neon branches delete my_test_branch
```

Tab: API

```bash
curl --request DELETE \
  --url https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id} \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" | jq
```

---

# Source: https://neon.com/llms/guides-bun.txt

# Connect a Bun application to Neon

> This document guides users on connecting a Bun application to a Neon database, detailing the necessary steps and configurations for seamless integration.

## Source

- [Connect a Bun application to Neon HTML](https://neon.com/docs/guides/bun): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from a Bun application. Examples are provided for using [Bun's built-in SQL client](https://bun.sh/docs/api/sql) and the [@neondatabase/serverless](https://neon.com/docs/serverless/serverless-driver) driver. Use the client you prefer.

**Note**: The same configuration steps can be used for [Hono](https://hono.dev/docs/getting-started/bun), [Elysia](https://elysiajs.com), and other Bun-based web frameworks.

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a Bun project and add dependencies

Create a Bun project and change to the newly created directory:

```shell
mkdir bun-neon-example
cd bun-neon-example
bun init -y
```

Next, add project dependencies if you intend to use the Neon serverless driver. Otherwise, Bun's built-in `sql` client is readily available.

Tab: Bun.sql

```shell
# No dependencies needed for Bun's built-in SQL client
```

Tab: Neon serverless driver

```shell
bun add @neondatabase/serverless
```

## Store your Neon credentials

Add a `.env.local` file to your project directory and add your Neon connection details to it. Bun automatically loads variables from `.env`, `.env.local`, and other `.env.*` files.

You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select Bun from the **Connection string** dropdown. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).
```shell
POSTGRES_URL='postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require'
```

**Note**: `Bun.sql` reads the `POSTGRES_URL` environment variable by default as the primary connection URL for Postgres.

**Important**: To ensure the security of your data, never expose your Neon credentials directly in your code or commit them to version control.

## Configure the Postgres client

Add an `index.ts` file (or `index.js`) to your project directory and add the following code snippet to connect to your Neon database. Choose the configuration that matches your preferred client.

Tab: Bun.sql

```typescript
import { sql } from 'bun';

async function getPgVersion() {
  const result = await sql`SELECT version()`;
  console.log(result[0]);
}

getPgVersion();
```

Tab: Neon serverless driver

```typescript
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.POSTGRES_URL);

async function getPgVersion() {
  const result = await sql`SELECT version()`;
  console.log(result[0]);
}

getPgVersion();
```

## Run index.ts

Run `bun run index.ts` (or `bun index.js`) to view the result.

```shell
$ bun run index.ts
{
  version: "PostgreSQL 17.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit",
}
```

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Bun and Neon](https://github.com/neondatabase/examples/tree/main/with-bun)

## References

- [Bun SQL client](https://bun.sh/docs/api/sql)
- [@neondatabase/serverless driver](https://neon.com/docs/serverless/serverless-driver)

---

# Source: https://neon.com/llms/guides-cloudflare-hyperdrive.txt

# Use Neon with Cloudflare Hyperdrive

> The document guides users on integrating Neon with Cloudflare Hyperdrive, detailing the steps to configure and optimize the connection between Neon's serverless Postgres database and Cloudflare's edge network for enhanced performance and scalability.

## Source

- [Use Neon with Cloudflare Hyperdrive HTML](https://neon.com/docs/guides/cloudflare-hyperdrive): The original HTML version of this documentation

[Cloudflare Hyperdrive](https://developers.cloudflare.com/hyperdrive/) is a serverless application that proxies queries to your database and accelerates them. It works by maintaining a globally distributed pool of database connections and routing queries to the closest available connection. This is specifically useful for serverless applications that cannot maintain a persistent database connection and need to establish a new connection for each request. Hyperdrive can significantly reduce the latency of these queries for your application users.

This guide demonstrates how to configure a Hyperdrive service to connect to your Neon Postgres database. It demonstrates how to implement a regular `Workers` application that connects to Neon directly and then replace that connection with a `Hyperdrive` connection to achieve performance improvements.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A Cloudflare account. If you do not have one, sign up for [Cloudflare Workers](https://workers.cloudflare.com/) to get started.

  **Note**: You need to be on Cloudflare Workers' paid subscription plan to use Hyperdrive.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and deploy our Workers application. ## Setting up your Neon database ### Initialize a new project 1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section. 2. Click the **New Project** button to create a new project. 3. From your project dashboard, navigate to the **SQL Editor** from the sidebar, and run the following SQL command to create a new table in your database: ```sql CREATE TABLE books_to_read ( id SERIAL PRIMARY KEY, title TEXT, author TEXT ); ``` Next, we insert some sample data into the `books_to_read` table, so we can query it later: ```sql INSERT INTO books_to_read (title, author) VALUES ('The Way of Kings', 'Brandon Sanderson'), ('The Name of the Wind', 'Patrick Rothfuss'), ('Coders at Work', 'Peter Seibel'), ('1984', 'George Orwell'); ``` ### Retrieve your Neon database connection string Log in to your **Project Dashboard** in the Neon Console and open the **Connect to your database** modal to find your database connection string. It should look similar to this: ```bash postgresql://neondb_owner:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require ``` Keep your connection string handy for later use. ## Setting up your Cloudflare Workers application ### Create a new Worker project Run the following command in a terminal window to set up a new Cloudflare Workers project: ```bash npm create cloudflare@latest ``` This initiates an interactive CLI prompt to generate a new project. To follow along with this guide, you can use the following settings: ```bash ├ In which directory do you want to create your application? │ dir ./neon-hyperdrive-guide │ ├ What type of application do you want to create? │ type "Hello World" Worker │ ├ Do you want to use TypeScript? │ Yes typescript ``` When asked if you want to deploy your application, select `no`. We'll develop and test the application locally before deploying it to the Cloudflare Workers platform. The `create-cloudflare` CLI also installs the `Wrangler` tool to manage the full workflow of testing and managing your Worker applications. To emulate the Node environment in the Workers runtime, we need to add the following entry to the `wrangler.toml` file. 
```toml
#:schema node_modules/wrangler/config-schema.json
name = "with-hyperdrive"
main = "src/index.ts"
compatibility_date = "2024-12-05"
compatibility_flags = ["nodejs_compat"]
```

### Implement the Worker script

Navigate to the project directory and run the following command:

Tab: node-postgres

```bash
npm install pg
npm install -D @types/pg
```

Tab: postgres.js

```bash
npm install postgres
```

Now, you can update the `src/index.ts` file in the project directory with the following code:

Tab: node-postgres

```javascript
import pkg from 'pg';
const { Client } = pkg;

export default {
  async fetch(request, env, ctx) {
    const client = new Client({ connectionString: env.DATABASE_URL });
    await client.connect();
    const { rows } = await client.query('SELECT * FROM books_to_read');
    return new Response(JSON.stringify(rows));
  },
};
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

export default {
  async fetch(request, env, ctx) {
    const sql = postgres(env.DATABASE_URL);
    const rows = await sql`SELECT * FROM books_to_read`;
    return new Response(JSON.stringify(rows));
  },
};
```

The `fetch` handler defined above gets called when the worker receives an HTTP request. It queries the Neon database to fetch the full list of books in our to-read list.

### Test the worker application locally

First, you need to configure the `DATABASE_URL` environment variable to point to the Neon database. You can do this by creating a `.dev.vars` file at the root of the project directory with the following content:

```text
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

Now, to test the worker application locally, you can use the `wrangler` CLI, which comes with the Cloudflare project setup.

```bash
npx wrangler dev
```

This command starts a local server and simulates the Cloudflare Workers environment. You can visit the printed URL in your browser to test the worker application. It should return a JSON response with the list of books from the `books_to_read` table.

## Setting up Cloudflare Hyperdrive

With our Workers application able to query the Neon database, we will now set up Cloudflare Hyperdrive to connect to Neon and accelerate the database queries.

### Create a new Hyperdrive service

You can use the `Wrangler` CLI to create a new Hyperdrive service, using your Neon database connection string from earlier:

```bash
npx wrangler hyperdrive create neon-guide-drive --connection-string=$NEON_DATABASE_CONNECTION_STRING
```

This command creates a new Hyperdrive service named `neon-guide-drive` and outputs its configuration details. Copy the `id` field from the output, which we will use next.

### Bind the Worker project to Hyperdrive

Cloudflare Workers uses bindings to interact with other resources on the Cloudflare platform. We will update the `wrangler.toml` file in the project directory to bind our Worker project to the Hyperdrive service.

Add the following lines to the `wrangler.toml` file. This lets us access the Hyperdrive service from our Worker application using the `HYPERDRIVE` binding.

```toml
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<id-from-previous-step>"
```

### Update the Worker script to use Hyperdrive

Now, you can update the `src/index.ts` file in the project directory to query the Neon database through the Hyperdrive service.
Tab: node-postgres

```javascript
import pkg from 'pg';
const { Client } = pkg;

export default {
  async fetch(request, env, ctx) {
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    await client.connect();
    const { rows } = await client.query('SELECT * FROM books_to_read');
    return new Response(JSON.stringify(rows));
  },
};
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

export default {
  async fetch(request, env, ctx) {
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT * FROM books_to_read`;
    return new Response(JSON.stringify(rows));
  },
};
```

### Deploy the updated Worker

Now that we have updated the Worker script to use the Hyperdrive service, we can deploy the updated Worker to the Cloudflare Workers platform:

```bash
npx wrangler deploy
```

This command uploads the updated Worker script to the Cloudflare Workers platform and makes it available at a public URL. You can visit the URL in your browser to test that the application works.

## Removing the example application and Neon project

To delete your Worker project, you can use the Cloudflare dashboard or run `wrangler delete` from your project directory, specifying your project name. Refer to the [Wrangler documentation](https://developers.cloudflare.com/workers/wrangler/commands/#delete-3) for more details.

To delete your Neon project, follow the steps outlined in the Neon documentation under [Delete a project](https://neon.com/docs/manage/projects#delete-a-project).

## Example application

- [Neon + Cloudflare Hyperdrive](https://github.com/neondatabase/examples/tree/main/with-hyperdrive): Demonstrates using Cloudflare's Hyperdrive to access your Neon database from Cloudflare Workers

## Why sslmode=disable appears in Hyperdrive URLs

If you're using [Postgres.js](https://github.com/porsager/postgres) (or another library that requires SSL) with Neon and Hyperdrive, you might see an error like:

```text
PostgresError: connection is insecure (try using sslmode=require)
```

This happens because the local connection string generated by Hyperdrive includes `sslmode=disable`. While this may look insecure, it's by design, and your database connection is still secure:

- Hyperdrive terminates SSL inside Cloudflare's infrastructure.
- Your Worker connects to Hyperdrive over an internal `.hyperdrive.local` address, so no SSL is needed on that hop.
- Hyperdrive then connects to your Neon database using the original connection string with `sslmode=require`, maintaining full SSL encryption upstream.

**Connection path:**

```text
Worker → Hyperdrive (.hyperdrive.local, no SSL)
Hyperdrive → Neon Database (SSL enabled)
```

This setup works in production. But for local development, libraries like Postgres.js may still reject the connection due to the local `sslmode=disable`.

**To avoid this issue locally**, use `wrangler dev --remote`. This runs your Worker in Cloudflare's infrastructure, where the connection string works as expected.

## Resources

- [Cloudflare Workers](https://workers.cloudflare.com/)
- [Cloudflare Hyperdrive](https://developers.cloudflare.com/hyperdrive/)
- [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/)
- [Neon](https://neon.tech)

---

# Source: https://neon.com/llms/guides-cloudflare-pages.txt

# Use Neon with Cloudflare Pages

> The document outlines the steps for integrating Neon with Cloudflare Pages, detailing the configuration process to connect a Neon database to a Cloudflare Pages application.
## Source

- [Use Neon with Cloudflare Pages HTML](https://neon.com/docs/guides/cloudflare-pages): The original HTML version of this documentation

`Cloudflare Pages` is a modern web application hosting platform that allows you to build, deploy, and scale your web applications. While it is typically used to host static websites, you can also use it to host interactive web applications by leveraging `functions` to run server-side code. Internally, Cloudflare functions are powered by `Cloudflare Workers`, a serverless platform that allows you to run JavaScript code on Cloudflare's edge network.

This guide demonstrates how to connect to a Neon Postgres database from your Cloudflare Pages application. We'll create a simple web application using `React` that tracks our reading list using the database and provides a form to add new books to it. We'll use the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) to connect to the database and make queries.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A Cloudflare account. If you do not have one, sign up for [Cloudflare Pages](https://pages.cloudflare.com/) to get started.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and deploy our `Pages` application.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Click the **New Project** button to create a new project.
3. From your project dashboard, navigate to the **SQL Editor** from the sidebar, and run the following SQL command to create a new table in your database:

   ```sql
   CREATE TABLE books_to_read (
     id SERIAL PRIMARY KEY,
     title TEXT,
     author TEXT
   );
   ```

   Next, we insert some sample data into the `books_to_read` table, so we can query it later:

   ```sql
   INSERT INTO books_to_read (title, author)
   VALUES
     ('The Way of Kings', 'Brandon Sanderson'),
     ('The Name of the Wind', 'Patrick Rothfuss'),
     ('Coders at Work', 'Peter Seibel'),
     ('1984', 'George Orwell');
   ```

### Retrieve your Neon database connection string

Navigate to your **Project Dashboard** in the Neon Console and click **Connect** to open the **Connect to your database** modal, where you can find your database connection string. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

## Setting up your Cloudflare Pages project

### Create a new project

We will create a simple React application using the Vite bundler framework. Run the following command in a terminal window to set up a new Vite project:

```bash
npm create vite@latest
```

This initiates an interactive CLI prompt to generate a new project. To follow along with this guide, you can use the following settings:

```bash
✔ Project name: … my-neon-page
✔ Select a framework: › React
✔ Select a variant: › JavaScript

Scaffolding project in /Users/ishananand/repos/javascript/my-neon-page...

Done. Now run:

  cd my-neon-page
  npm install
  npm run dev
```

This scaffolds a React template application configured to be built with Vite.
### Implement the application frontend

Navigate to the `my-neon-page` directory and open the `src/App.jsx` file. Replace the contents of this file with the following code:

```jsx
// src/App.jsx
import React, { useState, useEffect } from 'react';

function App() {
  const [books, setBooks] = useState([]);
  const [bookName, setBookName] = useState('');
  const [authorName, setAuthorName] = useState('');

  // Function to fetch books
  const fetchBooks = async () => {
    try {
      const response = await fetch('/books');
      const data = await response.json();
      setBooks(data);
    } catch (error) {
      console.error('Error fetching books:', error);
    }
  };

  useEffect(() => {
    fetchBooks();
  }, []);

  const handleSubmit = async (event) => {
    event.preventDefault();
    try {
      const response = await fetch('/books/add', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ title: bookName, author: authorName }),
      });
      const data = await response.json();
      if (data.success) {
        console.log('Success:', data);
        setBooks([...books, { title: bookName, author: authorName }]);
      } else {
        console.error('Error adding book:', data.error);
      }
    } catch (error) {
      console.error('Error:', error);
    }
    // Reset form fields
    setBookName('');
    setAuthorName('');
  };

  return (
    <div>
      <h1>Book List</h1>
      <ul>
        {books.map((book, index) => (
          <li key={index}>
            {book.title} by {book.author}
          </li>
        ))}
      </ul>
      <h2>Add a Book</h2>
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          placeholder="Book title"
          value={bookName}
          onChange={(e) => setBookName(e.target.value)}
          required
        />
        <input
          type="text"
          placeholder="Author name"
          value={authorName}
          onChange={(e) => setAuthorName(e.target.value)}
          required
        />
        <button type="submit">Add Book</button>
      </form>
    </div>
  );
}

export default App;
```

The `App` component fetches the list of books from the server and displays them. It also provides a form to add new books to the list. `Cloudflare Pages` allows us to define the API endpoints as serverless functions, which we'll implement next.

### Implement the serverless functions

We'll use the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) to connect to the Neon database, so we first need to install it as a dependency:

```bash
npm install @neondatabase/serverless
```

Next, we'll create two serverless functions for the application. In a `Cloudflare Pages` project, these must be defined in the `functions` directory at the root of the project. For further details, refer to the [Cloudflare Pages - Functions documentation](https://developers.cloudflare.com/pages/functions/).

#### Function to fetch the list of books from the database

Create a new file named `functions/books/index.js` in the project directory with the following content:

```js
import { Client } from '@neondatabase/serverless';

export async function onRequestGet(context) {
  const client = new Client(context.env.DATABASE_URL);
  await client.connect();

  // Fetch the list of books from the database
  const { rows } = await client.query('SELECT * FROM books_to_read;');
  return new Response(JSON.stringify(rows));
}
```

This function fetches the list of books from the `books_to_read` table in the database and returns it as a JSON response.

#### Function to add a new book to the database

Create another file named `functions/books/add.js` in the project directory with the following content:

```js
import { Client } from '@neondatabase/serverless';

export async function onRequestPost(context) {
  const client = new Client(context.env.DATABASE_URL);
  await client.connect();

  // Extract the book details from the request body
  const book = await context.request.json();

  // Insert the new book into the database
  const resp = await client.query('INSERT INTO books_to_read (title, author) VALUES ($1, $2);', [
    book.title,
    book.author,
  ]);

  // Check if the insert query was successful
  if (resp.rowCount === 1) {
    return new Response(JSON.stringify({ success: true, error: null, data: book }), {
      headers: { 'Content-Type': 'application/json' },
    });
  } else {
    return new Response(
      JSON.stringify({
        success: false,
        error: 'Failed to insert book',
        data: book,
      }),
      {
        headers: { 'Content-Type': 'application/json' },
        status: 500,
      }
    );
  }
}
```

This function extracts the book details from the request body and inserts them into the `books_to_read` table in the database. It returns a JSON response indicating the success or failure of the operation.

### Test the application locally

Our application is now ready to be tested locally. However, we first need to configure the `DATABASE_URL` environment variable to point to our Neon database. We can do this by creating a `.dev.vars` file at the root of the project directory with the following content:

```text
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

Now, to test the `Pages` application locally, we can use the `wrangler` CLI tool that manages Cloudflare projects, invoking it with `npx`:

```bash
npx wrangler pages dev -- npm run dev
```

This command starts a local server simulating the Cloudflare environment. The function endpoints are run by the Wrangler tool, while requests to the root URL are proxied to the Vite development server.

```bash
❯ npx wrangler pages dev -- npm run dev
Running npm run dev...
.
.
.
.
-------------------
Using vars defined in .dev.vars
Your worker has access to the following bindings:
- Vars:
  - DATABASE_URL: "(hidden)"
⎔ Starting local server...
[wrangler:inf] Ready on http://localhost:8788
```

Visit the printed localhost URL in your browser to interact with the application. You should see the list of books fetched from the database and a form to add new books.

## Deploying your application with Cloudflare Pages

### Authenticate Wrangler with your Cloudflare account

Run the following command to link the Wrangler tool to your Cloudflare account:

```bash
npx wrangler login
```

This command will open a browser window and prompt you to log into your Cloudflare account. After logging in and approving the access request for `Wrangler`, you can close the browser window and return to your terminal.

### Publish your Pages application and verify the deployment

Now, you can deploy your application to `Cloudflare Pages` by running the following command:

```bash
npm run build
npx wrangler pages deploy dist --project-name <project-name>
```

Replace `<project-name>` with a unique name for your `Cloudflare Pages` project. The Wrangler CLI will output the URL of your application hosted on the Cloudflare platform. Visit this URL in your browser to interact with it.

```bash
✨ Compiled Worker successfully
🌍  Uploading... (4/4)
✨ Success! Uploaded 0 files (4 already uploaded) (0.72 sec)
✨ Uploading Functions bundle
✨ Deployment complete! Take a peek over at https://21ea2a57.my-neon-page.pages.dev
```

### Add your Neon connection string as an environment variable

The Cloudflare production deployment doesn't have access to the `DATABASE_URL` environment variable yet, so we need to navigate to the Cloudflare dashboard and add it manually.

Navigate to the dashboard and select the `Settings` section in your project. Go to the **Environment Variables** tab and add a new environment variable named `DATABASE_URL` with the value of your Neon database connection string.

To make sure the environment variable is available to the serverless functions, go back to the terminal and redeploy the project using the `wrangler` CLI:

```bash
npx wrangler pages deploy dist --project-name <project-name>
```

Now, visit the URL of your `Cloudflare Pages` application to interact with it. You should see the list of books fetched from the Neon database and a form to add new books.

## Removing the example application and Neon project

To delete your `Cloudflare Pages` application, you can use the Cloudflare dashboard. Refer to the [Pages documentation](https://developers.cloudflare.com/pages) for more details.

To delete your Neon project, follow the steps outlined in the Neon documentation under [Delete a project](https://neon.com/docs/manage/projects#delete-a-project).

## Source code

You can find the source code for the application described in this guide on GitHub.
- [Use Neon with Cloudflare Pages](https://github.com/neondatabase/examples/tree/main/deploy-with-cloudflare-pages): Connect a Neon Postgres database to your Cloudflare Pages web application ## Resources - [Cloudflare Pages](https://pages.cloudflare.com/) - [Cloudflare Pages - Documentation](https://developers.cloudflare.com/pages/) - [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/) - [Neon](https://neon.tech) --- # Source: https://neon.com/llms/guides-cloudflare-r2.txt # File storage with Cloudflare R2 > The document outlines the process for integrating Neon with Cloudflare R2 for file storage, detailing configuration steps and necessary settings to enable seamless data management and storage solutions within the Neon environment. ## Source - [File storage with Cloudflare R2 HTML](https://neon.com/docs/guides/cloudflare-r2): The original HTML version of this documentation [Cloudflare R2](https://www.cloudflare.com/en-in/developer-platform/products/r2/) is S3-compatible object storage offering zero egress fees, designed for storing and serving large amounts of unstructured data like images, videos, and documents globally. This guide demonstrates how to integrate Cloudflare R2 with Neon by storing file metadata in your Neon database, while using R2 for file storage. ## Setup steps ## Create a Neon project 1. Navigate to [pg.new](https://pg.new) to create a new Neon project. 2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ## Create a Cloudflare account and R2 bucket 1. Sign up for or log in to your [Cloudflare account](https://dash.cloudflare.com/sign-up/r2). 2. Navigate to **R2** in the Cloudflare dashboard sidebar. 3. Click **Create bucket**, provide a unique bucket name (e.g., `my-neon-app-files`), and click **Create bucket**. 4. Generate R2 API credentials (**Access Key ID** and **Secret Access Key**) by following [Create an R2 API Token](https://developers.cloudflare.com/r2/api/tokens/). Select **Object Read & Write** permissions. Copy these credentials securely. 5. Obtain your Cloudflare **Account ID** by following [Find your Account ID](https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/#find-your-account-id). 6. For this example, enable public access to your bucket URL by following [Allow public access to your bucket](https://developers.cloudflare.com/r2/buckets/public-buckets/#enable-managed-public-access). Note your bucket's public URL (e.g., `https://pub-xxxxxxxx.r2.dev`). **Note** Public access: Public access makes all objects readable via URL; consider private buckets and signed URLs for sensitive data in production. ## Configure CORS for client-side uploads If your application involves uploading files **directly from a web browser** using the generated presigned URLs, you must configure Cross-Origin Resource Sharing (CORS) on your R2 bucket. CORS rules tell R2 which web domains are allowed to make requests (like `PUT` requests for uploads) to your bucket. Without proper CORS rules, browser security restrictions will block these direct uploads. Follow Cloudflare's guide to [Configure CORS](https://developers.cloudflare.com/r2/buckets/cors/) for your bucket. You can add rules via R2 Bucket settings in the Cloudflare dashboard. 
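To see what these rules permit in practice, here's a sketch of a direct browser upload. It assumes the `/presign-upload` and `/save-metadata` backend endpoints implemented later in this guide; the `PUT` to the presigned URL is the cross-origin request your CORS rules must allow:

```javascript
// Sketch: a direct browser upload to R2 via a presigned URL.
// Assumes the `/presign-upload` and `/save-metadata` endpoints
// implemented later in this guide.
async function uploadToR2(file) {
  // 1. Ask the backend for a presigned upload URL
  const presignRes = await fetch('/presign-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ fileName: file.name, contentType: file.type }),
  });
  const { presignedUrl, objectKey, publicFileUrl } = await presignRes.json();

  // 2. PUT the file directly to R2. This is the cross-origin request
  //    that the bucket's CORS rules must allow.
  await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });

  // 3. Ask the backend to record the file's metadata in Neon
  await fetch('/save-metadata', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ objectKey, publicFileUrl }),
  });
}
```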
Here's an example CORS configuration allowing `PUT` uploads and `GET` requests from your deployed frontend application and your local development environment: ```json [ { "AllowedOrigins": [ "https://your-production-app.com", // Replace with your actual frontend domain "http://localhost:3000" // For local development ], "AllowedMethods": ["PUT", "GET"] } ] ``` ## Create a table in Neon for file metadata We need a table in Neon to store metadata about the objects uploaded to R2. 1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor). Here is an example SQL statement to create a simple table including the object key, URL, user ID, and timestamp: ```sql CREATE TABLE IF NOT EXISTS r2_files ( id SERIAL PRIMARY KEY, object_key TEXT NOT NULL UNIQUE, -- Key (path/filename) in R2 file_url TEXT NOT NULL, -- Publicly accessible URL user_id TEXT NOT NULL, -- User associated with the file upload_timestamp TIMESTAMPTZ DEFAULT NOW() ); ``` 2. Run the SQL statement. You can add other relevant columns (file size, content type, etc.) depending on your application needs. **Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), remember to apply appropriate access policies to the `r2_files` table. This controls who can view or modify the object references stored in Neon based on your RLS rules. Note that these policies apply _only_ to the metadata in Neon. Access control for the objects within the R2 bucket itself is managed via R2 permissions, API tokens, and presigned URL settings if used. ## Upload files to R2 and store metadata in Neon A common pattern with S3-compatible storage like R2 involves **presigned upload URLs**. Your backend generates a temporary, secure URL that the client uses to upload the file directly to R2. Afterwards, your backend saves the file's metadata to Neon. This requires two backend endpoints: 1. `/presign-upload`: Generates the temporary presigned URL for the client to upload a file directly to R2. 2. `/save-metadata`: Records the metadata in Neon after the client confirms a successful upload to R2. Tab: JavaScript We'll use [Hono](https://hono.dev/) for the server, [`@aws-sdk/client-s3`](https://www.npmjs.com/package/@aws-sdk/client-s3) and [`@aws-sdk/s3-request-presigner`](https://www.npmjs.com/package/@aws-sdk/s3-request-presigner) for R2 interaction, and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) for Neon. 
First, install the necessary dependencies: ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @neondatabase/serverless @hono/node-server hono dotenv ``` Create a `.env` file: ```env # R2 Credentials & Config R2_ACCOUNT_ID=your_cloudflare_account_id R2_ACCESS_KEY_ID=your_r2_api_token_access_key_id R2_SECRET_ACCESS_KEY=your_r2_api_token_secret_access_key R2_BUCKET_NAME=your_r2_bucket_name # my-neon-app-files if following the example R2_PUBLIC_BASE_URL=https://your-bucket-public-url.r2.dev # Your R2 bucket public URL # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```javascript import { serve } from '@hono/node-server'; import { Hono } from 'hono'; import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'; import { getSignedUrl } from '@aws-sdk/s3-request-presigner'; import { neon } from '@neondatabase/serverless'; import 'dotenv/config'; import { randomUUID } from 'crypto'; const R2_ENDPOINT = `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`; const R2_BUCKET = process.env.R2_BUCKET_NAME; const R2_PUBLIC_BASE_URL = process.env.R2_PUBLIC_BASE_URL; // Ensure no trailing '/' const s3 = new S3Client({ region: 'auto', endpoint: R2_ENDPOINT, credentials: { accessKeyId: process.env.R2_ACCESS_KEY_ID, secretAccessKey: process.env.R2_SECRET_ACCESS_KEY, }, }); const sql = neon(process.env.DATABASE_URL); const app = new Hono(); // Replace this with your actual user authentication logic, by validating JWTs/Headers, etc. const authMiddleware = async (c, next) => { c.set('userId', 'user_123'); // Example: Get user ID after validation await next(); }; // 1. Generate Presigned URL for Upload app.post('/presign-upload', authMiddleware, async (c) => { try { const { fileName, contentType } = await c.req.json(); if (!fileName || !contentType) throw new Error('fileName and contentType required'); const objectKey = `${randomUUID()}-${fileName}`; const publicFileUrl = R2_PUBLIC_BASE_URL ? `${R2_PUBLIC_BASE_URL}/${objectKey}` : null; const command = new PutObjectCommand({ Bucket: R2_BUCKET, Key: objectKey, ContentType: contentType, }); const presignedUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); return c.json({ success: true, presignedUrl, objectKey, publicFileUrl }); } catch (error) { console.error('Presign Error:', error.message); return c.json({ success: false, error: 'Failed to prepare upload' }, 500); } }); // 2. Save Metadata after Client Upload Confirmation app.post('/save-metadata', authMiddleware, async (c) => { try { const { objectKey, publicFileUrl } = await c.req.json(); const userId = c.get('userId'); if (!objectKey) throw new Error('objectKey required'); const finalFileUrl = publicFileUrl || (R2_PUBLIC_BASE_URL ? `${R2_PUBLIC_BASE_URL}/${objectKey}` : 'URL not available'); await sql` INSERT INTO r2_files (object_key, file_url, user_id) VALUES (${objectKey}, ${finalFileUrl}, ${userId}) `; console.log(`Metadata saved for R2 object: ${objectKey}`); return c.json({ success: true }); } catch (error) { console.error('Metadata Save Error:', error.message); return c.json({ success: false, error: 'Failed to save metadata' }, 500); } }); const port = 3000; serve({ fetch: app.fetch, port }, (info) => { console.log(`Server running at http://localhost:${info.port}`); }); ``` **Explanation** 1. **Setup:** Initializes the Neon database client (`sql`), the Hono web framework (`app`), and the AWS S3 client (`s3`) configured for R2 using environment variables. 2. 
**Authentication:** A placeholder `authMiddleware` is included. **Crucially**, this needs to be replaced with real authentication logic. It currently just sets a static `userId` for demonstration. 3. **Upload endpoints**: - **`/presign-upload`:** Generates a temporary secure URL (`presignedUrl`) that allows uploading a file with a specific `objectKey` and `contentType` directly to R2 using `@aws-sdk/client-s3`. It returns the URL, key, and public URL. - **`/save-metadata`:** Called by the client _after_ it successfully uploads the file to R2. It saves the `objectKey`, the final `file_url`, and the `userId` into the `r2_files` table in Neon using `@neondatabase/serverless`. Tab: Python We'll use [Flask](https://flask.palletsprojects.com/en/stable/), [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) (AWS SDK for Python), and [`psycopg2`](https://pypi.org/project/psycopg2/). First, install the necessary dependencies: ```bash pip install Flask boto3 psycopg2-binary python-dotenv ``` Create a `.env` file: ```env # R2 Credentials & Config R2_ACCOUNT_ID=your_cloudflare_account_id R2_ACCESS_KEY_ID=your_r2_api_token_access_key_id R2_SECRET_ACCESS_KEY=your_r2_api_token_secret_access_key R2_BUCKET_NAME=your_r2_bucket_name # my-neon-app-files if following the example R2_PUBLIC_BASE_URL=https://your-bucket-public-url.r2.dev # Your R2 bucket public URL # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```python import os import uuid import boto3 import psycopg2 from botocore.exceptions import ClientError from dotenv import load_dotenv from flask import Flask, jsonify, request load_dotenv() R2_ACCOUNT_ID = os.getenv("R2_ACCOUNT_ID") R2_BUCKET_NAME = os.getenv("R2_BUCKET_NAME") R2_PUBLIC_BASE_URL = os.getenv("R2_PUBLIC_BASE_URL") DATABASE_URL = os.getenv("DATABASE_URL") R2_ENDPOINT_URL = f"https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com" s3_client = boto3.client( service_name='s3', endpoint_url=R2_ENDPOINT_URL, aws_access_key_id=os.getenv("R2_ACCESS_KEY_ID"), aws_secret_access_key=os.getenv("R2_SECRET_ACCESS_KEY"), region_name='auto' ) app = Flask(__name__) # Use a global PostgreSQL connection instead of creating a new one for each request in production def get_db_connection(): return psycopg2.connect(DATABASE_URL) # Replace this with your actual user authentication logic def get_authenticated_user_id(request): # Example: Validate Authorization header, session cookie, etc. return "user_123" # Static ID for demonstration # 1. 
Generate Presigned URL for Upload @app.route("/presign-upload", methods=["POST"]) def presign_upload_route(): try: user_id = get_authenticated_user_id(request) if not user_id: return jsonify({"success": False, "error": "Unauthorized"}), 401 data = request.get_json() file_name = data.get('fileName') content_type = data.get('contentType') if not file_name or not content_type: raise ValueError("fileName and contentType required") object_key = f"{uuid.uuid4()}-{file_name}" public_file_url = f"{R2_PUBLIC_BASE_URL}/{object_key}" if R2_PUBLIC_BASE_URL else None presigned_url = s3_client.generate_presigned_url( 'put_object', Params={'Bucket': R2_BUCKET_NAME, 'Key': object_key, 'ContentType': content_type}, ExpiresIn=300 ) return jsonify({ "success": True, "presignedUrl": presigned_url, "objectKey": object_key, "publicFileUrl": public_file_url }), 200 except (ClientError, ValueError) as e: print(f"Presign Error: {e}") return jsonify({"success": False, "error": f"Failed to prepare upload: {e}"}), 500 except Exception as e: print(f"Unexpected Presign Error: {e}") return jsonify({"success": False, "error": "Server error"}), 500 # 2. Save Metadata after Client Upload Confirmation @app.route("/save-metadata", methods=["POST"]) def save_metadata_route(): conn = None cursor = None try: user_id = get_authenticated_user_id(request) data = request.get_json() object_key = data.get('objectKey') public_file_url = data.get('publicFileUrl') if not object_key: raise ValueError("objectKey required") final_file_url = public_file_url or (f"{R2_PUBLIC_BASE_URL}/{object_key}" if R2_PUBLIC_BASE_URL else 'URL not available') conn = get_db_connection() cursor = conn.cursor() cursor.execute( """ INSERT INTO r2_files (object_key, file_url, user_id) VALUES (%s, %s, %s) """, (object_key, final_file_url, user_id), ) conn.commit() print(f"Metadata saved for R2 object: {object_key}") return jsonify({"success": True}), 201 except (psycopg2.Error, ValueError) as e: print(f"Metadata Save Error: {e}") return jsonify({"success": False, "error": "Failed to save metadata"}), 500 except Exception as e: print(f"Unexpected Metadata Save Error: {e}") return jsonify({"success": False, "error": "Server error"}), 500 finally: if cursor: cursor.close() if conn: conn.close() if __name__ == "__main__": app.run(port=3000, debug=True) ``` **Explanation** 1. **Setup:** Initializes the Flask web framework, the R2 client (`s3_client` using `boto3`), and the PostgreSQL client (`psycopg2`) using environment variables. 2. **Authentication:** A placeholder `get_authenticated_user_id` function is included. **Replace this with real authentication logic.** 3. **Upload endpoints**: - **`/presign-upload`:** Generates a temporary secure URL (`presignedUrl`) that allows uploading a file with a specific `objectKey` and `contentType` directly to R2 using `boto3`. It returns the URL, key, and public URL. - **`/save-metadata`:** Called by the client _after_ it successfully uploads the file to R2. It saves the `objectKey`, the final `file_url`, and the `userId` into the `r2_files` table in Neon using `psycopg2`. 4. In production, you should use a global PostgreSQL connection instead of creating a new one for each request. This is important for performance and resource management. ## Testing the upload workflow Testing the presigned URL flow involves multiple steps: 1. **Get presigned URL:** Send a `POST` request to your `/presign-upload` endpoint with a JSON body containing `fileName` and `contentType`. 
```bash
curl -X POST http://localhost:3000/presign-upload \
  -H "Content-Type: application/json" \
  -d '{"fileName": "test-image.png", "contentType": "image/png"}'
```

You should receive a JSON response with a `presignedUrl`, `objectKey`, and `publicFileUrl`:

```json
{
  "success": true,
  "presignedUrl": "https://<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET_NAME>/<OBJECT_KEY>?X-Amz-Algorithm=...",
  "objectKey": "<OBJECT_KEY>",
  "publicFileUrl": "https://pub-xxxxxxxx.r2.dev/<OBJECT_KEY>"
}
```

Note the `presignedUrl`, `objectKey`, and `publicFileUrl` from the response. You will use these in the next steps.

2. **Upload file to R2:** Use the received `presignedUrl` to upload the actual file using an HTTP `PUT` request.

```bash
curl -X PUT "<PRESIGNED_URL>" \
  --upload-file /path/to/your/test-image.png \
  -H "Content-Type: image/png"
```

A successful upload typically returns HTTP `200 OK` with no body.

3. **Save metadata:** Send a `POST` request to your `/save-metadata` endpoint with the `objectKey` and `publicFileUrl` obtained in step 1.

```bash
curl -X POST http://localhost:3000/save-metadata \
  -H "Content-Type: application/json" \
  -d '{"objectKey": "<OBJECT_KEY>", "publicFileUrl": "<PUBLIC_FILE_URL>"}'
```

You should receive a JSON response indicating success:

```json
{ "success": true }
```

**Expected outcome:**

- The file is uploaded to your R2 bucket. You can verify this in the Cloudflare dashboard or by accessing the `publicFileUrl` if your bucket is public.
- A new row appears in your `r2_files` table in Neon containing the `object_key` and `file_url`.

You can now integrate API calls to these endpoints from various parts of your application (e.g., web clients using JavaScript's `fetch` API, mobile apps, backend services) to handle file uploads.

## Accessing file metadata and files

Storing metadata in Neon allows your application to easily retrieve references to the files hosted on R2. Query the `r2_files` table from your application's backend when needed.

**Example SQL query:**

Retrieve files for user 'user_123':

```sql
SELECT
  id,               -- Your database primary key
  object_key,       -- Key (path/filename) in the R2 bucket
  file_url,         -- Publicly accessible URL
  user_id,          -- User associated with the file
  upload_timestamp
FROM
  r2_files
WHERE
  user_id = 'user_123'; -- Use actual authenticated user ID
```

**Using the data:**

- The query returns rows containing the file metadata stored in Neon.
- The `file_url` column contains the direct link to access the file.
- Use this `file_url` in your application (e.g., `<img>` tags, API responses, download links) wherever you need to display or provide access to the file.

**Note** Private buckets: For private R2 buckets, store only the `object_key` and generate presigned *read* URLs on demand using a similar backend process.

This pattern separates file storage and delivery (handled by R2) from structured metadata management (handled by Neon).

## Resources

- [Cloudflare R2 documentation](https://developers.cloudflare.com/r2/)
- [Cloudflare presigned URLs](https://developers.cloudflare.com/r2/api/s3/presigned-urls/)
- [Neon RLS](https://neon.com/docs/guides/neon-rls)

---

# Source: https://neon.com/llms/guides-cloudflare-workers.txt

# Use Neon with Cloudflare Workers

> The document explains how to integrate Neon with Cloudflare Workers, detailing the steps to set up a serverless PostgreSQL database connection using Neon's platform within Cloudflare's serverless environment.
## Source

- [Use Neon with Cloudflare Workers HTML](https://neon.com/docs/guides/cloudflare-workers): The original HTML version of this documentation

[Cloudflare Workers](https://workers.cloudflare.com/) is a serverless platform allowing you to deploy your applications globally across Cloudflare's network. It supports running JavaScript, TypeScript, and WebAssembly, making it a great choice for high-performance, low-latency web applications.

This guide demonstrates how to connect to a Neon Postgres database from your Cloudflare Workers application. We'll use the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) to connect to the database and make queries.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A Cloudflare account. If you do not have one, sign up for [Cloudflare Workers](https://workers.cloudflare.com/) to get started.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and deploy the Workers application.

## Setting up your Neon database

### Initialize a new project

Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.

1. Click the **New Project** button to create a new project.
2. From the Neon **Dashboard**, navigate to the **SQL Editor** from the sidebar, and run the following SQL command to create a new table in your database:

```sql
CREATE TABLE books_to_read (
  id SERIAL PRIMARY KEY,
  title TEXT,
  author TEXT
);
```

Next, insert some sample data into the `books_to_read` table so that you can query it later:

```sql
INSERT INTO books_to_read (title, author)
VALUES
  ('The Way of Kings', 'Brandon Sanderson'),
  ('The Name of the Wind', 'Patrick Rothfuss'),
  ('Coders at Work', 'Peter Seibel'),
  ('1984', 'George Orwell');
```

### Retrieve your Neon database connection string

Navigate to your **Project Dashboard** in the Neon Console and click **Connect** to open the **Connect to your database** modal to find your database connection string. Enable the **Connection pooling** toggle to add the `-pooler` option to your connection string. A pooled connection is recommended for serverless environments. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

Your pooled connection string should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

## Setting up your Cloudflare Workers project

### Create a new Worker project

Run the following command in a terminal window to set up a new Cloudflare Workers project:

```bash
npm create cloudflare@latest
```

This initiates an interactive CLI prompt to generate a new project. To follow along with this guide, you can use the following settings:

```bash
├ In which directory do you want to create your application?
│ dir ./my-neon-worker
│
├ What type of application do you want to create?
│ type "Hello World" Worker
│
├ Do you want to use TypeScript?
│ no typescript
```

When asked if you want to deploy your application, select `no`. We'll develop and test the application locally before deploying it to the Cloudflare Workers platform.
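The scaffolded project contains a minimal "Hello World" Worker in `src/index.js`, roughly like the following sketch (the exact template output may differ across `create-cloudflare` versions); we'll replace it with our own handler shortly:

```js
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!');
  },
};
```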
The `create-cloudflare` CLI also installs the `Wrangler` tool, which manages the full workflow of testing, managing, and deploying your Worker applications.

### Implement the Worker script

We'll use the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) to connect to the Neon database, so you need to install it as a dependency:

```bash
npm install @neondatabase/serverless
```

Now, you can update the `src/index.js` file in the project directory with the following code:

```js
import { Client } from '@neondatabase/serverless';

export default {
  async fetch(request, env, ctx) {
    const client = new Client(env.DATABASE_URL);
    await client.connect();

    const { rows } = await client.query('SELECT * FROM books_to_read;');
    return new Response(JSON.stringify(rows));
  },
};
```

The `fetch` handler defined above gets called when the worker receives an HTTP request. It queries the Neon database to fetch the full list of books in our to-read list.

### Test the worker application locally

You first need to configure the `DATABASE_URL` environment variable to point to your Neon database. You can do this by creating a `.dev.vars` file at the root of the project directory with the following content:

```text
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

Now, to test the worker application locally, you can use the `wrangler` CLI that comes with the Cloudflare project setup:

```bash
npx wrangler dev
```

This command starts a local server and simulates the Cloudflare Workers environment.

```bash
❯ npx wrangler dev
⛅️ wrangler 3.28.1
-------------------
Using vars defined in .dev.vars
Your worker has access to the following bindings:
- Vars:
  - DATABASE_URL: "(hidden)"
⎔ Starting local server...
[wrangler:inf] Ready on http://localhost:8787
```

You can visit `http://localhost:8787` in your browser to test the worker application. It should return a JSON response with the list of books from the `books_to_read` table.

```
[{"id":1,"title":"The Way of Kings","author":"Brandon Sanderson"},{"id":2,"title":"The Name of the Wind","author":"Patrick Rothfuss"},{"id":3,"title":"Coders at Work","author":"Peter Seibel"},{"id":4,"title":"1984","author":"George Orwell"}]
```

## Deploying your application with Cloudflare Workers

### Authenticate Wrangler with your Cloudflare account

Run the following command to link the Wrangler tool to your Cloudflare account:

```bash
npx wrangler login
```

This command will open a browser window and prompt you to log into your Cloudflare account. After logging in and approving the access request for `Wrangler`, you can close the browser window and return to your terminal.

### Add your Neon connection string as a secret

Use Wrangler to add your Neon database connection string as a secret to your Worker:

```bash
npx wrangler secret put DATABASE_URL
```

When prompted, paste your Neon connection string.

### Publish your Worker application and verify the deployment

Now, you can deploy your application to Cloudflare Workers by running the following command:

```bash
npx wrangler deploy
```

The Wrangler CLI will output the URL of your Worker hosted on the Cloudflare platform. Visit this URL in your browser or use `curl` to verify the deployment works as expected.
```text ❯ npx wrangler deploy ⛅️ wrangler 3.28.1 ------------------- Total Upload: 189.98 KiB / gzip: 49.94 KiB Uploaded my-neon-worker (4.03 sec) Published my-neon-worker (5.99 sec) https://my-neon-worker.anandishan2.workers.dev Current Deployment ID: de8841dd-46e4-436d-b2c4-569e91f54c72 ``` ## Removing the example application and Neon project To delete your Worker, you can use the Cloudflare dashboard or run `wrangler delete` from your project directory, specifying your project name. Refer to the [Wrangler documentation](https://developers.cloudflare.com/workers/wrangler/commands/#delete-3) for more details. To delete your Neon project, follow the steps outlined in the Neon documentation under [Delete a project](https://neon.com/docs/manage/projects#delete-a-project). ## Source code You can find the source code for the application described in this guide on GitHub. - [Use Neon with Cloudflare Workers](https://github.com/neondatabase/examples/tree/main/deploy-with-cloudflare-workers): Connect a Neon Postgres database to your Cloudflare Workers application ## Resources - [Cloudflare Workers](https://workers.cloudflare.com/) - [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/) - [Neon](https://neon.tech) --- # Source: https://neon.com/llms/guides-cloudinary.txt # Media storage with Cloudinary > The document outlines how to integrate Cloudinary for media storage within Neon, detailing configuration steps and API usage to manage and deliver media assets efficiently. ## Source - [Media storage with Cloudinary HTML](https://neon.com/docs/guides/cloudinary): The original HTML version of this documentation [Cloudinary](https://cloudinary.com/) is a cloud-based platform for image and video management, offering upload, storage, real-time manipulation, optimization, and delivery via CDN. This guide demonstrates how to integrate Cloudinary with Neon. You'll learn how to securely upload files directly from the client-side to Cloudinary using signatures generated by your backend, and then store the resulting asset metadata (like the Cloudinary Public ID and secure URL) in your Neon database. ## Setup steps ## Create a Neon project 1. Navigate to [pg.new](https://pg.new) to create a new Neon project. 2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ## Create a Cloudinary account and get credentials 1. Sign up for a free or paid account at [Cloudinary.com](https://cloudinary.com/users/register/free). 2. Once logged in, navigate to your **Account settings**. 3. Find your **Product Environment Credentials** which include: - **Cloud Name** - **API Key** - **API Secret** Create a new **API Key** if you do not have one. This key is used to authenticate your application with Cloudinary. ## Create a table in Neon for file metadata We need a table in Neon to store metadata about the assets uploaded to Cloudinary. 1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor). 
Create a table to store relevant details: ```sql CREATE TABLE IF NOT EXISTS cloudinary_files ( id SERIAL PRIMARY KEY, public_id TEXT NOT NULL UNIQUE, -- Cloudinary's unique identifier for the asset media_url TEXT NOT NULL, -- Media URL for the asset on Cloudinary's CDN resource_type TEXT NOT NULL, -- Type of asset (e.g., 'image', 'video', 'raw') user_id TEXT NOT NULL, -- User associated with the file upload_timestamp TIMESTAMPTZ DEFAULT NOW() ); ``` 2. Run the SQL statement. You can customize this table by adding other useful columns returned by Cloudinary (e.g., `version`, `format`, `width`, `height`, `tags`). **Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), apply appropriate policies to the `cloudinary_files` table to control access to the metadata stored in Neon based on your rules. Note that these policies apply _only_ to the metadata in Neon. Access control for the assets themselves is managed within Cloudinary (e.g., via asset types, delivery types). By default, uploaded assets are typically accessible via their CDN URL. ## Upload files to Cloudinary and store metadata in Neon The recommended secure approach for client-side uploads to Cloudinary involves **signed uploads**. Your backend generates a unique signature using your API Secret and specific upload parameters (like timestamp). The client uses this signature, along with your API Key and the parameters, to authenticate the direct upload request to Cloudinary. After a successful upload, the client sends the returned asset metadata back to your backend to save in Neon. This requires two backend endpoints: 1. `/generate-signature`: Generates a signature, timestamp, and provides the API key for the client upload. 2. `/save-metadata`: Receives asset metadata from the client after a successful Cloudinary upload and saves it to the Neon database. Tab: JavaScript We'll use [Hono](https://hono.dev/) for the server, the official [`cloudinary`](https://www.npmjs.com/package/cloudinary) Node.js SDK for signature generation, and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) for Neon. First, install the necessary dependencies: ```bash npm install cloudinary @neondatabase/serverless @hono/node-server hono dotenv ``` Create a `.env` file with your credentials: ```env # Cloudinary Credentials CLOUDINARY_CLOUD_NAME=your_cloudinary_cloud_name CLOUDINARY_API_KEY=your_cloudinary_api_key CLOUDINARY_API_SECRET=your_cloudinary_api_secret # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```javascript import { serve } from '@hono/node-server'; import { Hono } from 'hono'; import { v2 as cloudinary } from 'cloudinary'; import { neon } from '@neondatabase/serverless'; import 'dotenv/config'; cloudinary.config({ cloud_name: process.env.CLOUDINARY_CLOUD_NAME, api_key: process.env.CLOUDINARY_API_KEY, api_secret: process.env.CLOUDINARY_API_SECRET, secure: true, }); const sql = neon(process.env.DATABASE_URL); const app = new Hono(); // Replace this with your actual user authentication logic const authMiddleware = async (c, next) => { // Example: Validate JWT, session, etc. and set user ID c.set('userId', 'user_123'); // Static ID for demonstration await next(); }; // 1. 
Generate signature for client-side upload app.get('/generate-signature', authMiddleware, (c) => { try { const timestamp = Math.round(new Date().getTime() / 1000); const paramsToSign = { timestamp: timestamp }; const signature = cloudinary.utils.api_sign_request( paramsToSign, process.env.CLOUDINARY_API_SECRET ); return c.json({ success: true, signature: signature, timestamp: timestamp, api_key: process.env.CLOUDINARY_API_KEY, }); } catch (error) { console.error('Signature Generation Error:', error); return c.json({ success: false, error: 'Failed to generate signature' }, 500); } }); // 2. Save metadata after client confirms successful upload to Cloudinary app.post('/save-metadata', authMiddleware, async (c) => { try { const userId = c.get('userId'); // Client sends metadata received from Cloudinary after upload const { public_id, secure_url, resource_type } = await c.req.json(); if (!public_id || !secure_url || !resource_type) { throw new Error('public_id, secure_url, and resource_type are required'); } // Insert metadata into Neon database await sql` INSERT INTO cloudinary_files (public_id, media_url, resource_type, user_id) VALUES (${public_id}, ${secure_url}, ${resource_type}, ${userId}) `; console.log(`Metadata saved for Cloudinary asset: ${public_id}`); return c.json({ success: true }); } catch (error) { console.error('Metadata Save Error:', error.message); return c.json({ success: false, error: 'Failed to save metadata' }, 500); } }); const port = 3000; serve({ fetch: app.fetch, port }, (info) => { console.log(`Server running at http://localhost:${info.port}`); }); ``` **Explanation** 1. **Setup:** Initializes the Neon client (`sql`), Hono (`app`), and configures the Cloudinary Node.js SDK using environment variables. 2. **Authentication:** Includes a placeholder `authMiddleware`. **Replace this with your actual user authentication logic.** 3. **API endpoints:** - **`/generate-signature` (GET):** Creates a current `timestamp`. Uses `cloudinary.utils.api_sign_request` with the parameters to sign (at minimum, the timestamp) and your `API Secret` to generate a `signature`. It returns the `signature`, `timestamp`, and your `API Key` to the client. These are needed for the client's direct upload request to Cloudinary. - **`/save-metadata` (POST):** Called by the client _after_ a successful direct upload to Cloudinary. The client sends the relevant asset metadata received from Cloudinary (`public_id`, `secure_url`, `resource_type`). The endpoint saves this information, along with the `userId`, into the `cloudinary_files` table in Neon. Tab: Python We'll use [Flask](https://flask.palletsprojects.com/en/stable/), the official [`cloudinary`](https://pypi.org/project/cloudinary/) Python SDK, and [`psycopg2`](https://pypi.org/project/psycopg2/). 
First, install the necessary dependencies: ```bash pip install Flask cloudinary psycopg2-binary python-dotenv ``` Create a `.env` file with your credentials: ```env # Cloudinary Credentials CLOUDINARY_CLOUD_NAME=your_cloud_name CLOUDINARY_API_KEY=your_api_key CLOUDINARY_API_SECRET=your_api_secret # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```python import os import time import cloudinary import cloudinary.utils import psycopg2 from dotenv import load_dotenv from flask import Flask, jsonify, request load_dotenv() cloudinary.config( cloud_name=os.getenv("CLOUDINARY_CLOUD_NAME"), api_key=os.getenv("CLOUDINARY_API_KEY"), api_secret=os.getenv("CLOUDINARY_API_SECRET"), secure=True, ) app = Flask(__name__) # Use a global PostgreSQL connection pool in production instead of connecting per request def get_db_connection(): return psycopg2.connect(os.getenv("DATABASE_URL")) # Replace this with your actual user authentication logic def get_authenticated_user_id(request): # Example: Validate Authorization header, session cookie, etc. return "user_123" # Static ID for demonstration # 1. Generate signature for client-side upload @app.route("/generate-signature", methods=["GET"]) def generate_signature_route(): try: user_id = get_authenticated_user_id(request) if not user_id: return jsonify({"success": False, "error": "Unauthorized"}), 401 timestamp = int(time.time()) params_to_sign = {"timestamp": timestamp} signature = cloudinary.utils.api_sign_request( params_to_sign, os.getenv("CLOUDINARY_API_SECRET") ) return ( jsonify( { "success": True, "signature": signature, "timestamp": timestamp, "api_key": os.getenv("CLOUDINARY_API_KEY"), } ), 200, ) except Exception as e: print(f"Signature Generation Error: {e}") return jsonify({"success": False, "error": "Failed to generate signature"}), 500 # 2. Save metadata after client confirms successful upload to Cloudinary @app.route("/save-metadata", methods=["POST"]) def save_metadata_route(): conn = None cursor = None try: user_id = get_authenticated_user_id(request) if not user_id: return jsonify({"success": False, "error": "Unauthorized"}), 401 # Client sends metadata received from Cloudinary after upload data = request.get_json() public_id = data.get("public_id") secure_url = data.get("secure_url") resource_type = data.get("resource_type") if not public_id or not secure_url or not resource_type: raise ValueError("public_id, secure_url, and resource_type are required") # Insert metadata into Neon database conn = get_db_connection() cursor = conn.cursor() cursor.execute( """ INSERT INTO cloudinary_files (public_id, media_url, resource_type, user_id) VALUES (%s, %s, %s, %s) """, (public_id, secure_url, resource_type, user_id), ) conn.commit() print(f"Metadata saved for Cloudinary asset: {public_id}") return jsonify({"success": True}), 201 except (psycopg2.Error, ValueError) as e: print(f"Metadata Save Error: {e}") return ( jsonify({"success": False, "error": "Failed to save metadata"}), 500, ) except Exception as e: print(f"Unexpected Metadata Save Error: {e}") return jsonify({"success": False, "error": "Server error"}), 500 finally: if cursor: cursor.close() if conn: conn.close() if __name__ == "__main__": app.run(port=3000, debug=True) ``` **Explanation** 1. **Setup:** Initializes Flask (`app`), the database connection function, and configures the Cloudinary Python SDK using environment variables. 2. 
**Authentication:** Includes a placeholder `get_authenticated_user_id` function. **Replace this with your actual user authentication logic.**
3. **API endpoints:**
   - **`/generate-signature` (GET):** Gets the current `timestamp`. Uses `cloudinary.utils.api_sign_request` with the parameters to sign and your `API Secret` to generate the `signature`. Returns the `signature`, `timestamp`, and `API Key` to the client.
   - **`/save-metadata` (POST):** Called by the client _after_ a successful direct upload to Cloudinary. It receives asset metadata from the client, validates required fields (`public_id`, `secure_url`, `resource_type`), and saves this along with the `userId` into the `cloudinary_files` table using `psycopg2`.
4. **Database Connection:** The example shows creating a new connection per request. In production, use a global connection pool for better performance.

## Testing the upload workflow

This workflow involves getting a signature from your backend, using it to upload directly to Cloudinary, and then notifying your backend.

1. **Get signature and parameters:** Send a `GET` request to your backend's `/generate-signature` endpoint.

```bash
curl -X GET http://localhost:3000/generate-signature
```

**Expected response:** A JSON object with the `signature`, `timestamp`, and `api_key`.

```json
{
  "success": true,
  "signature": "a1b2c3d4e5f6...",
  "timestamp": 1713999600,
  "api_key": "YOUR_CLOUDINARY_API_KEY"
}
```

2. **Upload file directly to Cloudinary:** Use the obtained `signature`, `timestamp`, `api_key`, and the file path to send a `POST` request with `multipart/form-data` directly to the Cloudinary Upload API. The URL includes your **Cloud Name**.

```bash
curl -X POST https://api.cloudinary.com/v1_1/<CLOUD_NAME>/image/upload \
  -F "file=@/path/to/your/test-image.jpg" \
  -F "api_key=<API_KEY>" \
  -F "timestamp=<TIMESTAMP>" \
  -F "signature=<SIGNATURE>"
```

> If uploading a video, change the endpoint in the URL from `/image/upload` to `/video/upload`.

**Expected response (from Cloudinary):** A successful upload returns a JSON object with metadata about the uploaded asset.

```json
{
  "asset_id": "...",
  "public_id": "<PUBLIC_ID>",
  "version": 1713999601,
  "version_id": "...",
  "signature": "...",
  "width": 800,
  "height": 600,
  "format": "jpg",
  "resource_type": "image",
  "created_at": "2025-04-24T05:37:06Z",
  "tags": [],
  "bytes": 123456,
  "type": "upload",
  "etag": "...",
  "placeholder": false,
  "url": "http://res.cloudinary.com/<CLOUD_NAME>/image/upload/v1713999601/sample_image_123.jpg",
  "secure_url": "https://res.cloudinary.com/<CLOUD_NAME>/image/upload/v1713999601/sample_image_123.jpg",
  "folder": "",
  "original_filename": "test-image",
  "api_key": "YOUR_CLOUDINARY_API_KEY"
}
```

> Note the `public_id`, `secure_url`, and `resource_type` in the response. These are needed for the next step.

3. **Save metadata:** Send a `POST` request to your backend's `/save-metadata` endpoint with the key details received from Cloudinary in Step 2.

```bash
curl -X POST http://localhost:3000/save-metadata \
  -H "Content-Type: application/json" \
  -d '{
    "public_id": "<PUBLIC_ID>",
    "secure_url": "<SECURE_URL>",
    "resource_type": "<RESOURCE_TYPE>"
  }'
```

**Expected response (from your backend):**

```json
{ "success": true }
```

**Expected outcome:**

- The file is successfully uploaded to your Cloudinary account (visible in the Media Library).
- A new row corresponding to the uploaded asset exists in your `cloudinary_files` table in Neon.

## Accessing file metadata and files

With metadata stored in Neon, your application can retrieve references to the media hosted on Cloudinary. Query the `cloudinary_files` table from your application's backend whenever you need to display or link to uploaded files.

**Example SQL query:**

Retrieve media files associated with a specific user:

```sql
SELECT
  id,
  public_id,       -- Cloudinary Public ID
  media_url,       -- HTTPS URL for the asset
  resource_type,
  user_id,
  upload_timestamp
FROM
  cloudinary_files
WHERE
  user_id = 'user_123' AND resource_type = 'image'; -- Use actual user ID & desired type
```

**Using the data:**

- The query returns metadata stored in Neon.
- The `media_url` is the direct CDN link to the asset.
- **Cloudinary transformations:** Cloudinary excels at on-the-fly transformations. You can manipulate the asset by modifying the `media_url`. Parameters are inserted between the `/upload/` part and the version/public_id part of the URL. For example, to get a 300px wide, cropped version: `https://res.cloudinary.com/<CLOUD_NAME>/image/upload/w_300,c_fill/v<VERSION>/<PUBLIC_ID>.<FORMAT>`. Explore the extensive [Cloudinary transformation documentation](https://cloudinary.com/documentation/image_transformations).

This pattern separates media storage, processing, and delivery (handled by Cloudinary) from structured metadata management (handled by Neon).

## Resources

- [Cloudinary documentation](https://cloudinary.com/documentation)
- [Cloudinary Upload API reference](https://cloudinary.com/documentation/image_upload_api_reference)
- [Neon Documentation](https://neon.com/docs/introduction)
- [Neon RLS](https://neon.com/docs/guides/neon-rls)

---

# Source: https://neon.com/llms/guides-consumption-limits.txt

# Configure consumption limits

> The document outlines the steps for configuring consumption limits in Neon, enabling users to manage and control resource usage effectively within their database environments.

## Source

- [Configure consumption limits HTML](https://neon.com/docs/guides/consumption-limits): The original HTML version of this documentation

When setting up your integration's billing solution with Neon, you may want to impose some hard limits on how much storage or compute resources a given project can consume. For example, you may want to cap how much usage your free plan users can consume versus pro or enterprise users. With the Neon API, you can use the `quota` key to set usage limits for a variety of consumption metrics. These limits act as thresholds after which all active computes for a project are [suspended](https://neon.com/docs/guides/consumption-limits#suspending-active-computes).

## Metrics and quotas

By default, Neon tracks a variety of consumption metrics at the project level. If you want to set quotas (max limits) for these metrics, you need to explicitly [configure](https://neon.com/docs/guides/consumption-limits#configuring-quotas) them.

### Available metrics

Here are the relevant metrics that you can track in order to understand your users' current consumption levels.

#### Project-level metrics

- `active_time_seconds`
- `compute_time_seconds`
- `written_data_bytes`
- `data_transfer_bytes`

These consumption metrics represent total cumulative usage across all branches and computes in a given project, accrued so far in a given monthly billing period. Metrics are refreshed on the first day of the following month, when the new billing period starts.

#### Branch-level metric

There is an additional value that you also might want to track: `logical_size`, which gives you the current size of a particular branch. Neon updates all metrics every 15 minutes, but it could take up to 1 hour before they are reportable.
To find the current usage level for any of these metrics, see [querying metrics](https://neon.com/docs/guides/consumption-limits#querying-metrics-and-quotas).

### Corresponding quotas

You can set quotas for these consumption metrics per project using the `quota` settings object in the [Create project](https://api-docs.neon.tech/reference/createproject) or [Update project](https://api-docs.neon.tech/reference/updateproject) API.

The `quota` object includes an array of parameters used to set threshold limits. Their names generally match their corresponding metric:

- `active_time_seconds` — Sets the maximum amount of time your project's computes are allowed to be active during the current billing period. It excludes time when computes are in an idle state due to [scale to zero](https://neon.com/docs/reference/glossary#scale-to-zero).
- `compute_time_seconds` — Sets the maximum amount of CPU seconds allowed in total across all of a project's computes. This includes any computes deleted during the current billing period. Note that the larger the compute size per endpoint, the faster the project consumes `compute_time_seconds`. For example, 1 second at .25 vCPU costs .25 compute seconds, while 1 second at 4 vCPU costs 4 compute seconds.

  | vCPUs | active_time_seconds | compute_time_seconds |
  | :---- | :------------------ | :------------------- |
  | 0.25  | 1                   | 0.25                 |
  | 4     | 1                   | 4                    |

- `written_data_bytes` — Sets the maximum amount of data in total, measured in bytes, that can be written across all of a project's branches for the month.
- `data_transfer_bytes` — Sets the maximum amount of egress data, measured in bytes, that can be transferred out of Neon from across all of a project's branches using the proxy.

There is one additional `quota` parameter, `logical_size_bytes`, which applies to individual branches, not to the overall project. You can use `logical_size_bytes` to set the maximum size (measured in bytes) that any one individual branch is allowed to reach. Once this threshold is met, the compute for that particular branch (and _only_ that particular branch) is suspended. Note that this limit is _not_ refreshed once per month: it is a strict size limit that applies for the life of the branch.

### Sample quotas

Let's say you want to set limits for an application with two tiers, Trial and Pro. You might set limits like the following:

| Parameter (project)  | Trial (.25 vCPU)                     | Pro (max 4 vCPU)                                  |
| -------------------- | ------------------------------------ | ------------------------------------------------- |
| active_time_seconds  | 633,600 (22 business days × 8 hours) | 2,592,000 (30 days)                               |
| compute_time_seconds | 158,400 (approx 44 hours)            | 10,368,000 (4 times the active hours for 4 vCPUs) |
| written_data_bytes   | 1,000,000,000 (approx. 1 GB)         | 50,000,000,000 (approx. 50 GB)                    |
| data_transfer_bytes  | 500,000,000 (approx. 500 MB)         | 10,000,000,000 (approx. 10 GB)                    |

| Parameter (branch) | Trial                        | Pro                            |
| ------------------ | ---------------------------- | ------------------------------ |
| logical_size_bytes | 100,000,000 (approx. 100 MB) | 10,000,000,000 (approx. 10 GB) |

### Guidelines

Generally, the most effective quotas for controlling spend per project are those controlling maximum compute (`active_time_seconds` and `compute_time_seconds`) and maximum written storage (`written_data_bytes`). In practice, it is possible that `data_transfer_bytes` could introduce unintended logical constraints against your usage. For example, let's say you want to run a cleanup operation to reduce your storage. If part of this cleanup operation involves moving data across the network (for instance, to create an offsite backup before deletion), the `data_transfer_bytes` limit could prevent you from completing the operation — an undesirable situation where two measures meant to control cost interfere with one another.

### Neon default limits

In addition to the configurable limits that you can set, Neon also sets certain branch size limits by default. You might notice these limits in a [Get Project](https://neon.com/docs/guides/consumption-limits#retrieving-details-about-a-project) response:

- `branch_logical_size_limit` (MiB)
- `branch_logical_size_limit_bytes` (Bytes)

These limits are not directly configurable. You can query the limits by running the [Get project details](https://api-docs.neon.tech/reference/getproject) or [Get project list](https://api-docs.neon.tech/reference/listprojects) endpoints.

## Suspending active computes

_**What happens when a quota is met?**_

When any configured metric reaches its quota limit, all active computes for that project are automatically suspended. It is important to understand that this suspension is persistent. It works differently than the inactivity-based [scale to zero](https://neon.com/docs/guides/scale-to-zero-guide), where computes restart at the next interaction: this suspension will _not_ lift at the next API call or incoming connection. If you don't take explicit action otherwise, the suspension remains in place until the next billing period starts (`consumption_period_end`). See [Querying metrics and quotas](https://neon.com/docs/guides/consumption-limits#querying-metrics-and-quotas) to find the reset date, billing period, and other values related to a project's consumption.

**Note**: Neon tracks these consumption metrics on a monthly cycle. If you want to track metrics on a different cycle, you need to take snapshots of your metrics at the desired interval and store the data externally. You can also use the [Consumption API](https://neon.com/docs/guides/consumption-limits#retrieving-metrics-for-all-projects) to collect metrics from across a range of billing periods.

## Configuring quotas

You can set quotas using the Neon API either in a `POST` when you create a project or a `PATCH` to update an existing project:

- [Set quotas when you create the project](https://neon.com/docs/guides/consumption-limits#set-quotas-when-you-create-the-project)
- [Update an existing project](https://neon.com/docs/guides/consumption-limits#update-an-existing-project)

### Set quotas when you create the project

For performance reasons, you might want to configure these quotas at the same time that you create a new project for your user using the [Create a project](https://api-docs.neon.tech/reference/createproject) API, reducing the number of API calls you need to make.

Here is a sample `POST` in `curl` that creates a new project called `UserProject` and sets the `active_time_seconds` quota to a total allowed time of 10 hours (36,000 seconds) for the month, and the `compute_time_seconds` quota to a total of 2.5 hours (9,000 seconds) for the month. This 4:1 ratio between active and compute time is suitable for a fixed compute size of 0.25 vCPU.
```bash {11,12}
curl --request POST \
     --url https://console.neon.tech/api/v2/projects \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "project": {
    "settings": {
      "quota": {
        "active_time_seconds": 36000,
        "compute_time_seconds": 9000
      }
    },
    "pg_version": 15,
    "name": "UserProject"
  }
}
' | jq
```

### Update an existing project

If you need to change the quota limits for an existing project — for example, if a user switches their plan to a higher usage tier — you can reset those limits via `PATCH` request. See [Update a project](https://api-docs.neon.tech/reference/updateproject) in the Neon API.

Here is a sample `PATCH` that updates both the `active_time_seconds` and `compute_time_seconds` quotas to 30 hours (108,000 seconds):

```bash {11,12}
curl --request PATCH \
     --url https://console.neon.tech/api/v2/projects/[project_ID] \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "project": {
    "settings": {
      "quota": {
        "active_time_seconds": 108000,
        "compute_time_seconds": 108000
      }
    }
  }
}
' | jq
```

## Querying metrics and quotas

You can use the Neon API to retrieve consumption metrics for your organization and projects using these endpoints:

| Endpoint | Description | Plan Availability | Docs |
| -------- | ----------- | ----------------- | ---- |
| [Aggregated account metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount) | Aggregates the metrics from all projects in an account into a single cumulative number for each metric | Scale plan only | [Get account-level aggregated metrics](https://neon.com/docs/guides/consumption-metrics#get-account-level-aggregated-metrics) |
| [Granular metrics per project](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject) | Provides detailed metrics for each project in an account at a specified granularity level (e.g., hourly, daily, monthly) | Scale plan only | [Get granular project-level metrics for the account](https://neon.com/docs/guides/consumption-metrics#get-granular-project-level-metrics-for-your-account) |

## Resetting a project after suspend

Projects remain suspended until the next billing period. It is good practice to notify your users when they are close to reaching a limit; if the user is then suspended and loses access to their database, it will not be unexpected.

If you have configured no further actions, the user will have to wait until the next billing period starts to resume usage. Alternatively, you can actively reset a suspended compute by changing the impacted quota to `0`: this effectively removes the limit entirely. You will need to reset this quota at some point if you want to maintain limits.

### Using quotas to actively suspend a user

If you want to suspend a user for any reason — for example, suspicious activity or payment issues — you can use these quotas to actively suspend a given user. For example, setting `active_time_seconds` to a very low threshold (e.g., `1`) will force a suspension if the user has 1 second of active compute for that month.
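As a sketch, the same project-update endpoint shown above can apply such a threshold programmatically, here via JavaScript's `fetch` (the project ID is a placeholder):

```javascript
// Sketch: force-suspend a project's computes by setting a 1-second quota.
// `projectId` is a placeholder for the target project's ID.
const projectId = '<project_id>';

const res = await fetch(`https://console.neon.tech/api/v2/projects/${projectId}`, {
  method: 'PATCH',
  headers: {
    Accept: 'application/json',
    Authorization: `Bearer ${process.env.NEON_API_KEY}`,
    'Content-Type': 'application/json',
  },
  // Setting active_time_seconds to 1 suspends computes almost immediately;
  // set it back to 0 (no limit) or a higher value to lift the suspension.
  body: JSON.stringify({
    project: { settings: { quota: { active_time_seconds: 1 } } },
  }),
});
console.log(res.status);
```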
To remove this suspension, you can set the threshold temporarily to `0` (infinite) or some value larger than their currently consumed usage.

## Other consumption related settings

In addition to setting quota limits against the project as a whole, there are other sizing-related settings you might want to use to control the amount of resources any particular endpoint is able to consume:

- `autoscaling_limit_min_cu` — Sets the minimum compute size for the endpoint. The default minimum is .25 vCPU but can be increased if your user's project could benefit from a larger compute start size.
- `autoscaling_limit_max_cu` — Sets a hard limit on how much compute an endpoint can consume in response to increased demand. For more info on min and max CPU limits, see [Autoscaling](https://neon.com/docs/guides/autoscaling-guide).
- `suspend_timeout_seconds` — Sets how long an endpoint's allotted compute will remain active with no current demand. After the timeout period, the endpoint is suspended until demand picks up. For more info, see [Scale to Zero](https://neon.com/docs/guides/scale-to-zero-guide).

There are several ways you can set these endpoint settings using the Neon API: you can set project-level defaults that apply for any new computes created in the project, you can define the endpoint settings when creating a new branch, or you can adjust these settings when creating or updating an endpoint for an existing branch. See these sample `curl` requests for each method.

Tab: Project

In this sample, we are setting defaults for all new endpoints created in the project as a whole. The minimum compute size is set at **1 vCPU**, the max size at **3 vCPU**, with a 10-minute (**600 seconds**) inactivity period before the endpoint is suspended. These default values are set in the `default_endpoint_settings` object.

```bash {9-12}
curl --request POST \
     --url https://console.neon.tech/api/v2/projects \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "project": {
    "default_endpoint_settings": {
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 3,
      "suspend_timeout_seconds": 600
    },
    "pg_version": 15
  }
}
' | jq
```

Tab: Branch

In this POST request, we are creating a new endpoint at the same time that we create our new branch called `Development`. We've sized the endpoint at **1 vCPU** min, **3 vCPU** max, and with a timeout period of 10 minutes (**600 seconds**).

```bash {14-16}
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/noisy-pond-28482075/branches \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "branch": {
    "name": "Development"
  },
  "endpoints": [
    {
      "type": "read_write",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 3,
      "suspend_timeout_seconds": 600
    }
  ]
}
' | jq
```

Tab: Endpoint

In this example, we are creating a new endpoint for an already existing branch with ID `br-wandering-field-12345678`, with a min compute of **2 vCPU**, a max of **6 vCPU**, and a suspend timeout of 5 minutes (**300** seconds).
```bash {10-13} curl --request POST \ --url https://console.neon.tech/api/v2/projects/noisy-pond-28482075/endpoints \ --header 'Accept: application/json' \ --header "Authorization: Bearer $NEON_API_KEY" \ --header 'Content-Type: application/json' \ --data ' { "endpoint": { "type": "read_write", "autoscaling_limit_min_cu": 2, "autoscaling_limit_max_cu": 6, "suspend_timeout_seconds": 300, "branch_id": "br-wandering-field-12345678" } } ' | jq ``` --- # Source: https://neon.com/llms/guides-consumption-metrics.txt # Querying consumption metrics > The document "Querying consumption metrics" guides Neon users on how to access and interpret consumption metrics using the Neon API to monitor resource usage and optimize database performance. ## Source - [Querying consumption metrics HTML](https://neon.com/docs/guides/consumption-metrics): The original HTML version of this documentation **Note**: These consumption metrics apply to Neon's [legacy pricing plans](https://neon.com/docs/introduction/legacy-plans). Metrics for Neon's current [usage-based pricing plans](https://neon.com/docs/introduction/about-billing) will be added in a future update. Using the Neon API, you can query a range of account and project metrics to help gauge your resource consumption. Here are the different ways to retrieve these metrics, depending on how you want them aggregated or broken down: | Endpoint | Description | Plan availability | | --- | --- | --- | | [Get account consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount) | Aggregates all metrics from all projects in an account into a single cumulative number for each metric | Scale plan only | | [Get consumption metrics for each project](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject) | Provides detailed metrics for each project in an account at a specified granularity level (e.g., hourly, daily, monthly) | Scale plan only | ## Get account-level aggregated metrics Using the [Get account consumption metrics API](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount), you can find total usage across all projects in your organization. This provides a comprehensive view of consumption metrics accumulated for the billing period. Here is the URL in the Neon API where you can get account-level metrics: ```bash https://console.neon.tech/api/v2/consumption_history/account ``` This API endpoint accepts the following query parameters: `from`, `to`, `granularity`, `org_id`, and `include_v1_metrics`. ### Choosing your account Include the unique `org_id` for your organization to retrieve account metrics for that specific organization. If not specified, metrics for your personal account will be returned. For more information, see [Organizations](https://neon.com/docs/manage/organizations). ### Set a date range for granular results You can set `from` and `to` query parameters, plus a level of granularity, to define a time range that can span multiple billing periods. - `from` — Sets the start date and time of the period for which you are seeking metrics. - `to` — Sets the end date and time of the period for which you want metrics. - `granularity` — Sets the level of granularity for the metrics, such as `hourly`, `daily`, or `monthly`.
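For example, the following request (a sketch reusing the placeholder org ID from the examples below) pulls daily account-level metrics for June 2024:

```bash
# Daily account-level consumption for one month (timestamps are RFC 3339, URL-encoded)
curl --request GET \
  --url 'https://console.neon.tech/api/v2/consumption_history/account?from=2024-06-01T00%3A00%3A00Z&to=2024-07-01T00%3A00%3A00Z&granularity=daily&org_id=org-ocean-art-12345678' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $NEON_API_KEY"
```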
The response is organized by periods and consumption data within the specified time range. See [Details on setting a date range](https://neon.com/docs/guides/consumption-metrics#details-on-setting-a-date-range) for more info. ## Get granular project-level metrics for your account You can also get similar daily, hourly, or monthly metrics across a selected time period, but broken out for each individual project that belongs to your organization. Using the [Retrieve project consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject) endpoint, let's use the same start date, end date, and level of granularity as our account-level request: hourly metrics between June 30th and July 2nd, 2024. ```shouldWrap curl --request GET \ --url 'https://console.neon.tech/api/v2/consumption_history/projects?limit=10&from=2024-06-30T00%3A00%3A00Z&to=2024-07-02T00%3A00%3A00Z&granularity=hourly&org_id=org-ocean-art-12345678' \ --header 'accept: application/json' \ --header "authorization: Bearer $NEON_API_KEY" ``` Details: Response body For attribute definitions, find the [Retrieve project consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```shouldWrap { "projects": [ { "project_id": "random-project-123456", "periods": [ { "period_id": "random-period-abcdef", "consumption": [ { "timeframe_start": "2024-06-30T00:00:00Z", "timeframe_end": "2024-06-30T01:00:00Z", "active_time_seconds": 147472, "compute_time_seconds": 43222, "written_data_bytes": 112730864, "synthetic_storage_size_bytes": 37000959232 }, { "timeframe_start": "2024-07-01T00:00:00Z", "timeframe_end": "2024-07-01T01:00:00Z", "active_time_seconds": 1792, "compute_time_seconds": 533, "written_data_bytes": 0, "synthetic_storage_size_bytes": 0 } // ... More consumption data ] }, { "period_id": "random-period-ghijkl", "consumption": [ { "timeframe_start": "2024-07-01T09:00:00Z", "timeframe_end": "2024-07-01T10:00:00Z", "active_time_seconds": 150924, "compute_time_seconds": 44108, "written_data_bytes": 114912552, "synthetic_storage_size_bytes": 36593552376 } // ... More consumption data ] } // ... More periods ] } // ... More projects ] } ``` The response is organized by periods and consumption data within the specified time range. See [Details on setting a date range](https://neon.com/docs/guides/consumption-metrics#details-on-setting-a-date-range) for more info. ### Pagination To control pagination (number of results per response), you can include these query parameters: - `limit` — sets the number of project objects to be included in the response. - `cursor` — by default, the response uses the project `id` from the last project in the list as the `cursor` value (included in the `pagination` object at the end of the response). Generally, it is up to the application to collect and use this cursor value when setting up the next request. See [Details on pagination](https://neon.com/docs/guides/consumption-metrics#details-on-pagination) for more info. ## Details on setting a date range This section applies to the following metrics output types: [Account-level aggregated metrics](https://neon.com/docs/guides/consumption-metrics#get-account-level-aggregated-metrics) and [Granular project-level metrics for your account](https://neon.com/docs/guides/consumption-metrics#get-granular-project-level-metrics-for-your-account).
You can set `from` and `to` query parameters, plus a level of granularity, to define a time range that can span multiple billing periods. - `from` — Sets the start date and time of the period for which you are seeking metrics. - `to` — Sets the end date and time of the period for which you want metrics. - `granularity` — Sets the level of granularity for the metrics, such as `hourly`, `daily`, or `monthly`. The response is organized by periods and consumption data within the specified time range. Here is an example query that returns metrics from June 30th to July 2nd, 2024. Time values must be provided in RFC 3339 format. You can use this [timestamp converter](https://it-tools.tech/date-converter). ```bash curl --request GET \ --url 'https://console.neon.tech/api/v2/consumption_history/account?from=2024-06-30T15%3A30%3A00Z&to=2024-07-02T15%3A30%3A00Z&granularity=hourly&org_id=org-ocean-art-12345678' \ --header 'accept: application/json' \ --header "authorization: Bearer $NEON_API_KEY" ``` And here is a sample response: Details: Response body For attribute definitions, find the [Retrieve account consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "periods": [ { "period_id": "random-period-abcdef", "consumption": [ { "timeframe_start": "2024-06-30T15:00:00Z", "timeframe_end": "2024-06-30T16:00:00Z", "active_time_seconds": 147452, "compute_time_seconds": 43215, "written_data_bytes": 111777920, "synthetic_storage_size_bytes": 41371988928 }, { "timeframe_start": "2024-06-30T16:00:00Z", "timeframe_end": "2024-06-30T17:00:00Z", "active_time_seconds": 147468, "compute_time_seconds": 43223, "written_data_bytes": 110483584, "synthetic_storage_size_bytes": 41467955616 } // ... More consumption data ] }, { "period_id": "random-period-ghijkl", "consumption": [ { "timeframe_start": "2024-07-01T00:00:00Z", "timeframe_end": "2024-07-01T01:00:00Z", "active_time_seconds": 145672, "compute_time_seconds": 42691, "written_data_bytes": 115110912, "synthetic_storage_size_bytes": 42194712672 }, { "timeframe_start": "2024-07-01T01:00:00Z", "timeframe_end": "2024-07-01T02:00:00Z", "active_time_seconds": 147464, "compute_time_seconds": 43193, "written_data_bytes": 110078200, "synthetic_storage_size_bytes": 42291858520 } // ... More consumption data ] } // ... More periods ] } ``` ## Metric definitions - **active_time_seconds** — The number of seconds the project's computes have been active during the period. - **compute_time_seconds** — The number of CPU seconds used by the project's computes, including computes that have been deleted; for example: - A compute that uses 1 CPU for 1 second is equal to `compute_time=1`. - A compute that uses 2 CPUs simultaneously for 1 second is equal to `compute_time=2`. - **written_data_bytes** — The total amount of data written to all of a project's branches. - **synthetic_storage_size_bytes** — The total space occupied in storage. Synthetic storage size combines the logical data size and Write-Ahead Log (WAL) size for all branches. ## Details on pagination This section applies to the following metrics output: [Granular project-level metrics for your account](https://neon.com/docs/guides/consumption-metrics#get-granular-project-level-metrics-for-your-account).
To control pagination (number of results per response), you can include these query parameters: - `limit` — sets the number of project objects to be included in the response - `cursor` — by default, the response uses the project `id` from the last project in the list as the `cursor` value (included in the `pagination` object at the end of the response). Generally, it is up to the application to collect and use this cursor value when setting up the next request. Here is an example `GET` request asking for the next 100 projects, starting with project id `divine-tree-77657175`: ```bash curl --request GET \ --url 'https://console.neon.tech/api/v2/consumption_history/projects?cursor=divine-tree-77657175&limit=100&granularity=daily' \ --header 'accept: application/json' \ --header "authorization: Bearer $NEON_API_KEY" | jq ``` **Note**: To learn more about using pagination to control large response sizes, the [Keyset pagination](https://learn.microsoft.com/en-us/ef/core/querying/pagination#keyset-pagination) page in the Microsoft docs gives a helpful overview. ## Consumption polling FAQ As an integrator of Neon or a paid plan customer, you may have questions related to polling Neon's consumption APIs. We've provided answers to frequently asked questions here. ### How often can you poll consumption data for usage reporting and billing? Neon's consumption data is updated approximately every 15 minutes, so a minimum interval of 15 minutes between calls to our consumption APIs is recommended. ### What is the rate limit for Neon's consumption APIs? Neon's consumption APIs, [Get account consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount) and [Get consumption metrics for each project](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject), are rate-limited to about 30 requests per minute per account. Both APIs share the same rate limiter, so requests to either endpoint count toward the limit. Neon's consumption APIs use a **token bucket** rate-limiting approach, which refills at a steady rate while allowing short bursts within the bucket size. This behaves more like a sliding window than a fixed reset every minute. For more details about this approach, see [Token bucket](https://en.wikipedia.org/wiki/Token_bucket). ### How often should consumption data be polled to report usage to customers? As mentioned above, usage data can be pulled every 15 minutes, but integrators of Neon are free to choose their own reporting interval based on their requirements. ### How often should consumption data be polled to invoice end users? Neon does not dictate how integrators of Neon bill their users. Integrators of Neon can use the data retrieved from the consumption API to generate invoices according to their own billing cycles and preferences. ### Does consumption polling wake up computes? Neon's consumption polling APIs do not wake computes that have been suspended due to inactivity. Therefore, calls to Neon's consumption APIs will not increase your users' consumption. --- # Source: https://neon.com/llms/guides-database-per-user.txt # Neon for Database-per-user > The document outlines how to implement a database-per-user architecture using Neon, detailing steps for creating isolated databases for individual users to enhance data security and management.
## Source - [Neon for Database-per-user HTML](https://neon.com/docs/guides/database-per-user): The original HTML version of this documentation With its serverless and API-first nature, Neon is an excellent choice for building database-per-user applications (or apps where each user/customer has their own Postgres database). Neon is particularly well-suited for architectures that prioritize maximum database isolation, achieving the equivalent of instance-level isolation. This guide will help you get started with implementing this architecture. ## Multi-tenant architectures in Postgres In a multi-tenant architecture, a single system supports multiple users (tenants), each with access to manage their own data. In a database like Postgres, this setup requires careful structuring to keep each tenant's data private, secure, and isolated—all while remaining efficient to manage and scale. Following these principles, there are three primary routes you can follow to implement multi-tenant architectures in Postgres: - Creating one separate database per user (the focus of this guide) - Creating one schema per user within the same database - Keeping your tenants separate within a shared schema To better situate our use case, let's briefly outline the differences between these architectures: ### Database-per-user In a database-per-user design, each user's data is fully isolated in its own database, eliminating any risk of data overlap. This setup is straightforward to design and highly secure. However, implementing this in managed Postgres databases has traditionally been challenging. For users of AWS RDS or similar services, two primary options have existed for achieving a database-per-user design: 1. **Using one large instance to host multiple user databases.** This option can be tempting due to the reduced number of instances to manage and (probably) lower infrastructure costs. But the trade-off is a higher demand for DBA expertise—this is a design that requires careful planning, especially at scale. Hosting all users on shared resources can impact performance, particularly if users have varying workload patterns, and if the instance fails, all customers are affected. Migrations and upgrades also become complex. 2. **Handling multiple instances, each hosting a single production database.** In this scenario, each instance scales independently, preventing resource competition between users and minimizing the risk of widespread failures. This is a much simpler design from the perspective of the database layer, but managing hundreds of instances in AWS can get very costly and complex. As the number of instances grows into the thousands, management becomes nearly impossible. As we'll see throughout this guide, Neon offers a third alternative by providing a logical equivalent to the instance-per-customer model with near-infinite scalability, without the heavy DevOps overhead. This solution involves creating one Neon project per customer. ### Schema-per-user But before focusing on database-per-user, let's briefly cover another multi-tenancy approach in Postgres: the schema-per-user model. Instead of isolating data by database, this design places all users in a single database, with a unique schema for each. In Neon, we generally don't recommend this approach for SaaS applications, unless this is a design you're already experienced with.
This approach doesn't reduce operational complexity or costs compared to the many-databases approach, but it does introduce additional risks; it also limits the potential of Neon features like instant Point-in-Time Recovery (PITR), which in a project-per-customer model allows you to restore customer databases independently without impacting the entire fleet's operations. More about this later. ### Shared schema Lastly, Postgres's robustness makes it possible to ensure tenant isolation within a shared schema. In this model, all users' data resides within the same tables, with isolation enforced through foreign keys and row-level security. While this is a common choice—and can be a good starting point if you're just beginning to build your app—we still recommend the project-per-user route if possible. Over time, as your app scales, meeting requirements within a shared schema setup becomes increasingly challenging. Enforcing compliance and managing access restrictions at the schema level grows more complex as you add more users. You'll also need to manage very large Postgres tables, as all customer data is stored in the same tables. As these tables grow, additional Postgres fine-tuning will be required to maintain performance. ## Setting up Neon for Database-per-user Now that we've reviewed your options, let's focus on the design choice we recommend for multi-tenancy in Neon: creating isolated databases for each user, with each database hosted on its own project. ### Database-per-user = Project-per-user We recommend setting up one project per user, rather than, for example, using a branch per customer. A Neon [project](https://neon.com/docs/manage/overview) serves as the logical equivalent of an "instance" but without the management overhead. Here's why we suggest this design: - **Straightforward scalability** Instead of learning how to handle large Postgres databases, this model allows you to simply create a new project when a user joins—something that can be handled automatically via the Neon API. This approach is very cost-effective, as we'll see below. Databases remain small, keeping management at the database level simple. - **Better performance with lower costs** This design is also highly efficient in terms of compute usage. Each project has its own dedicated compute, which scales up and down independently per customer; a spike in usage for one tenant doesn't affect others, and inactive projects remain practically free. - **Complete data isolation** By creating a dedicated project for each customer, their data remains completely separate from others, ensuring the highest level of security and privacy. - **Easier regional compliance** Each Neon project can be deployed in a specific region, making it easy to host customer data closer to their location. - **Per-customer PITR** Setting up a project per customer allows you to run [PITR on individual customers](https://neon.com/docs/guides/branch-restore) instantly, without risking disruption to your entire fleet. ## Managing many projects As you scale, following a project-per-user design means eventually managing thousands of Neon projects. This might sound overwhelming, but it's much simpler in practice than it seems—some Neon users [manage hundreds of thousands of projects](https://neon.com/blog/how-retool-uses-retool-and-the-neon-api-to-manage-300k-postgres-databases) with just one engineer.
Here's why that's possible: - **You can manage everything with the Neon API** The API allows you to automate every step of project management, including setting resource limits per customer and configuring resources. - **No infrastructure provisioning** New Neon projects are ready in milliseconds. You can set things up to create new projects instantly when new customers join, without the need to manually pre-provision instances. - **You only pay for active projects** Empty projects are virtually free thanks to Neon's [scale-to-zero](https://neon.com/docs/guides/auto-suspend-guide) feature. If, on a given day, you have a few hundred projects that were only active for a few minutes, that's fine—your bill won't suffer. - **Subscription plans** To support this usage pattern, our paid plans include a generous number of projects. ### Dev/test environments In Neon, [database branching](https://neon.com/docs/introduction/branching) is a powerful feature that enables you to create fast, isolated copies of your data for development and testing. You can use child branches as ephemeral environments that mirror your main testing database but operate independently, without adding to storage costs. This feature is a game-changer for dev/test workflows, as it reduces the complexity of managing multiple test databases while lowering non-prod costs significantly. To handle [dev/test](https://neon.com/use-cases/dev-test) in a project-per-user design, consider creating a dedicated Neon project as your non-prod environment. This Neon project can serve as a substitute for the numerous non-prod instances you might maintain in RDS. The methodology: - **Within the non-prod project, load your testing data into the production branch.** This production branch will serve as the primary source for all dev/test environments. - **Create ephemeral environments via child branches.** For each ephemeral environment, create a child branch from the production branch. These branches are fully isolated in terms of resources and come with an up-to-date copy of your testing dataset. - **Automate the process.** Use CI/CD and automations to streamline your workflow. You can reset child branches with one click to keep them in sync with the production branch as needed, maintaining data consistency across your dev/test environments. ## Designing a Control Plane Once you have everything set up, as your number of projects grows, you might want to create a control plane to stay on top of everything in a centralized manner. ### The catalog database The catalog database is a centralized repository that tracks and manages all Neon projects and databases. It holds records for every Neon project your system creates. You can also use it to keep track of tenant-specific configurations, such as database names, regions, schema versions, and so on. You can set up your catalog database as a separate Neon project. When it's time to design its schema, consider these tips: - Use foreign keys to link tables like `project` and `payment` to `customer`. - Choose data types carefully: `citext` for case-insensitive text, `uuid` for unique identifiers to obscure sequence data, and `timestamptz` for tracking real-world time. - Track key operational data, like `schema_version`, in the `project` table. - Index wisely! While the catalog will likely remain smaller than user databases, it will grow—especially with recurring events like payments—so indexing is crucial for control plane performance at scale. 
- Start with essential data fields and plan for future extensions as needs evolve. - Standard Neon metadata (e.g., compute size, branch info) is accessible via the console. Avoid duplicating it in the catalog database unless separate access adds significant complexity. ### Automations To effectively scale a multi-tenant architecture, leveraging automation tools is essential. The Neon API will allow you to automate various tasks, such as creating and managing projects, setting usage limits, and configuring resources. Beyond the API, Neon offers several integrations to streamline your workflows: - **GitHub Actions** Neon's [GitHub integration](https://neon.com/docs/guides/neon-github-integration) allows you to automate database branching workflows directly from your repositories. By connecting a Neon project to a GitHub repository, you can set up actions that create or delete database branches in response to pull request events, facilitating isolated testing environments for each feature or bug fix. - **Vercel Integration** You can [connect your Vercel projects to Neon](https://neon.com/docs/guides/neon-github-integration), creating database branches for each preview deployment. - **CI/CD pipelines** By combining Neon branching into your CI/CD, you can simplify your dev/test workflows by creating and deleting ephemeral environments automatically as child branches. - **Automated backups to your own S3** If you must keep your own data copy, you can [schedule regular backups](https://neon.com/docs/manage/backups-aws-s3-backup-part-2) using tools like `pg_dump` in conjunction with GitHub Actions. ## The Application Layer Although the application layer isn't our main focus, a common question developers ask us when approaching a multi-tenant architecture is: _Do I deploy one application environment per database, or connect all databases to a single application environment?_ Both approaches are viable, each with its own pros and cons. ### Shared application environments #### Pros of shared environments - Managing a single application instance minimizes operational complexity. - Updates and new features are easy to implement since changes apply universally. - Operating one environment reduces infrastructure and maintenance costs. #### Cons of shared environments - A single application environment makes it difficult to offer tailored experiences for individual customers. - Compliance becomes challenging when users' databases span multiple regions. - Updates apply to all users simultaneously, which can be problematic for those needing specific software versions. - A single environment heightens the risk of data breaches, as vulnerabilities can impact all users. #### Advice - **Implement robust authorization** Ensure secure access as all users share the same application environment. - **Define user authentication and data routing** - Users provide their organization details during login. - Users access the application via an organization-specific subdomain. - The system identifies the user's organization based on their credentials. - **Monitor usage and performance** Regularly track application usage to prevent performance bottlenecks. - **Plan maintenance windows carefully** Minimize disruptions for all users by scheduling maintenance during low-usage periods. ### Isolated application environments In this architecture, each customer has instead a dedicated application environment alongside their own database. 
Similar to the shared environment option, this design has pros and cons: #### Pros of isolated environments - Since each customer can now have a unique application environment, it's easier to implement personalized features and configurations, to keep separate versions for particular customers, and so on. - Compliance is also simpler if you're handling multiple regions. Deploying the application in multiple regions can also help with latency. - This design also opens the door for customers to control their own upgrade schedules, e.g., by defining their own maintenance windows. #### Cons of isolated environments - This design has an obvious tradeoff: it comes with higher complexity of deployment, monitoring, and maintenance. - You'll need to think about how to ensure optimal resource utilization across multiple environments, and how to keep observability on-point to diagnose issues. - Operating separate environments for each customer might also lead to higher costs. #### Advice If you decide to implement isolated environments, here's some advice to consider: - Design your architecture to accommodate growth, even if your setup is small today. - Just as with your Neon projects, take advantage of automation tools to streamline the creation and management of your application environments. - Set up proper monitoring to track key metrics across all environments. ## Migrating Schemas In a database-per-user design, it is common to have the same schema for all users/databases. Any changes to the user schema will most likely be rolled out to all individual databases simultaneously. In this section, we teach you how to use Drizzle ORM, GitHub Actions, the Neon API, and a couple of custom template scripts to manage many databases using the same database schema. ### Example app To walk you through it, we've created example code [in this repository](https://github.com/PaulieScanlon/neon-database-per-tenant-drizzle). The example includes 4 Neon databases, all using Postgres 16 and all deployed to AWS us-east-1. The schema consists of three tables: `users`, `projects`, and `tasks`. You can see the schema here: [schema.ts](https://github.com/PaulieScanlon/neon-database-per-tenant-drizzle/blob/main/src/db/schema.ts), and for good measure, here's the raw SQL equivalent: [schema.sql](https://github.com/PaulieScanlon/neon-database-per-tenant-drizzle/blob/main/schema.sql). This default schema is referenced by each of the `drizzle.config.ts` files that have been created for each customer.
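The linked `schema.ts` is the source of truth; as a rough sketch of the shape such a Drizzle schema takes (the column definitions here are hypothetical, not copied from the repository):

```typescript
import { pgTable, serial, text, integer, timestamp } from 'drizzle-orm/pg-core';

// Hypothetical columns; the real definitions live in src/db/schema.ts
export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull(),
});

export const projects = pgTable('projects', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  ownerId: integer('owner_id').references(() => users.id),
});

export const tasks = pgTable('tasks', {
  id: serial('id').primaryKey(),
  projectId: integer('project_id').references(() => projects.id),
  title: text('title').notNull(),
  createdAt: timestamp('created_at').defaultNow(),
});
```

Because every tenant database shares this one schema file, a single migration generated from it can be applied across the whole fleet, which is exactly what the workflow below automates.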
### Workflow using Drizzle ORM and GitHub Actions #### Creating Neon projects via a CLI script Our example creates new Neon projects via the command line, using the following script: ```javascript // src/scripts/create.js import { Command } from 'commander'; import { createApiClient } from '@neondatabase/api-client'; import 'dotenv/config'; const program = new Command(); const neonApi = createApiClient({ apiKey: process.env.NEON_API_KEY, }); program.option('-n, --name <name>', 'Name of the company').parse(process.argv); const options = program.opts(); if (options.name) { console.log(`Company Name: ${options.name}`); (async () => { try { const response = await neonApi.createProject({ project: { name: options.name, pg_version: 16, region_id: 'aws-us-east-1', }, }); const { data } = response; console.log(data); } catch (error) { console.error('Error creating project:', error); } })(); } else { console.log('No company name provided'); } ``` This script utilizes the `commander` library to create a simple command-line interface (CLI) and the Neon API's `createProject` method to set up a new project. Ensure that your Neon API key is stored in an environment variable named `NEON_API_KEY`. To execute the script and create a new Neon project named "ACME Corp" with Postgres version 16 in the aws-us-east-1 region, run: ```bash npm run create -- --name="ACME Corp" ``` In this example, the same approach was used to create the following projects: - ACME Corp - Payroll Inc - Finance Co - Talent Biz To interact with the Neon API, you'll need to generate an API key. For more information, refer to the Neon documentation on [creating an API key](https://api-docs.neon.tech/reference/createapikey). #### Generating a workflow to prepare for migrations Next, this script creates a Drizzle config for each Neon project, stores each connection string as an encrypted GitHub secret, and writes the migration workflow: ```javascript // src/scripts/generate.js import { existsSync, mkdirSync, writeFileSync } from 'fs'; import { execSync } from 'child_process'; import { createApiClient } from '@neondatabase/api-client'; import { Octokit } from 'octokit'; import 'dotenv/config'; import { encryptSecret } from '../utils/encrypt-secret.js'; import { drizzleConfig } from '../templates/drizzle-config.js'; import { githubWorkflow } from '../templates/github-workflow.js'; const octokit = new Octokit({ auth: process.env.PERSONAL_ACCESS_TOKEN }); const neonApi = createApiClient({ apiKey: process.env.NEON_API_KEY }); const repoOwner = 'neondatabase-labs'; const repoName = 'neon-database-per-tenant-drizzle'; let secrets = []; (async () => { // Ensure configs directory exists if (!existsSync('configs')) mkdirSync('configs'); // Fetch GitHub public key for encrypting secrets const { data: publicKeyData } = await octokit.request( `GET /repos/${repoOwner}/${repoName}/actions/secrets/public-key`, { headers: { 'X-GitHub-Api-Version': '2022-11-28' } } ); // List all Neon projects const { data: { projects }, } = await neonApi.listProjects(); await Promise.all( projects.map(async (project) => { const { id, name } = project; // Fetch database connection URI const { data: { uri }, } = await neonApi.getConnectionUri({ projectId: id, database_name: 'neondb', role_name: 'neondb_owner', }); // Prepare variables const safeName = name.replace(/\s+/g, '-').toLowerCase(); const path = `configs/${safeName}`; const file = 'drizzle.config.ts'; const envVarName = `${safeName.replace(/-/g, '_').toUpperCase()}_DATABASE_URL`; const encryptedValue = await encryptSecret(publicKeyData.key, uri); // Store environment variable name for later use secrets.push(envVarName); // Create project directory and config file if not present
if (!existsSync(path)) mkdirSync(path); if (!existsSync(`${path}/${file}`)) { writeFileSync(`${path}/${file}`, drizzleConfig(safeName, envVarName)); console.log(`Created drizzle.config for: ${safeName}`); } // Add encrypted secret to GitHub await octokit.request(`PUT /repos/${repoOwner}/${repoName}/actions/secrets/${envVarName}`, { owner: repoOwner, repo: repoName, secret_name: envVarName, encrypted_value: encryptedValue, key_id: publicKeyData.key_id, headers: { 'X-GitHub-Api-Version': '2022-11-28' }, }); // Generate migrations using drizzle-kit execSync(`drizzle-kit generate --config=${path}/${file}`, { encoding: 'utf-8' }); console.log(`Ran drizzle-kit generate for: ${safeName}`); }) ); // Ensure GitHub Actions workflow directories exist if (!existsSync('.github')) mkdirSync('.github'); if (!existsSync('.github/workflows')) mkdirSync('.github/workflows'); // Generate GitHub workflow file const workflow = githubWorkflow(secrets); writeFileSync(`.github/workflows/run-migrations.yml`, workflow); console.log('GitHub Actions workflow created.'); })(); ``` The script above goes through these steps: 1. Ensures the `configs` directory exists, creating it if necessary. 2. Retrieves the GitHub public key for encrypting secrets. 3. Lists all projects in your Neon account. 4. For each project: - Retrieves the connection URI from Neon. - Sanitizes project names for safe usage in directory names and environment variables. - Creates Drizzle ORM config files. - Encrypts secrets and adds them to the GitHub repository. - Generates migrations using `drizzle-kit`. 5. Finally, it generates a GitHub Actions workflow that includes all generated environment variables for running migrations. To run the script, use the following command: ```bash npm run generate ``` Ensure the following environment variables are set: - `NEON_API_KEY`: Your Neon API key. - `PERSONAL_ACCESS_TOKEN`: Your GitHub personal access token. And update `repoOwner` and `repoName` to match your repository details.
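The `encryptSecret` helper imported by the script isn't shown in this excerpt. GitHub requires Actions secrets to be encrypted with the repository public key before upload; a minimal sketch using `libsodium-wrappers` (the library GitHub's own documentation suggests, and assuming the helper's signature matches its usage above) might look like this:

```javascript
// src/utils/encrypt-secret.js (sketch)
import sodium from 'libsodium-wrappers';

// Seal `secretValue` with the repository's base64-encoded public key,
// returning the base64 ciphertext that GitHub's secrets API expects.
export async function encryptSecret(publicKey, secretValue) {
  await sodium.ready;
  const binKey = sodium.from_base64(publicKey, sodium.base64_variants.ORIGINAL);
  const binSecret = sodium.from_string(secretValue);
  const encrypted = sodium.crypto_box_seal(binSecret, binKey);
  return sodium.to_base64(encrypted, sodium.base64_variants.ORIGINAL);
}
```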
Here's an example output for the Drizzle configuration: ```javascript // configs/acme-corp/drizzle.config.ts import 'dotenv/config'; import { defineConfig } from 'drizzle-kit'; export default defineConfig({ out: './drizzle/acme-corp', schema: './src/db/schema.ts', dialect: 'postgresql', dbCredentials: { url: process.env.ACME_CORP_DATABASE_URL!, }, }); ``` And for the GitHub workflow: ```yaml # .github/workflows/run-migrations.yml name: Migrate changes on: pull_request: types: [closed] branches: - main workflow_dispatch: env: TALENT_BIZ_DATABASE_URL: ${{ secrets.TALENT_BIZ_DATABASE_URL }} PAYROLL_INC_DATABASE_URL: ${{ secrets.PAYROLL_INC_DATABASE_URL }} ACME_CORP_DATABASE_URL: ${{ secrets.ACME_CORP_DATABASE_URL }} FINANCE_CO_DATABASE_URL: ${{ secrets.FINANCE_CO_DATABASE_URL }} jobs: migrate: runs-on: ubuntu-latest if: github.event.pull_request.merged == true steps: - name: Checkout repository uses: actions/checkout@v4 - name: Set up Node.js uses: actions/setup-node@v4 with: node-version: '20' - name: Install dependencies run: npm ci - name: Run migration script run: node src/scripts/migrate.js ``` #### Running migrations Now, we're ready to run migrations: ```javascript // src/scripts/migrate.js import { readdirSync, existsSync } from 'fs'; import path from 'path'; import { execSync } from 'child_process'; import { fileURLToPath } from 'url'; (async () => { const configDir = path.resolve(path.dirname(fileURLToPath(import.meta.url)), '../../configs'); if (existsSync(configDir)) { const customers = readdirSync(configDir); const configPaths = customers .map((customer) => path.join(configDir, customer, 'drizzle.config.ts')) .filter((filePath) => existsSync(filePath)); console.log('Found drizzle.config.ts files:', configPaths); configPaths.forEach((configPath) => { console.log(`Running drizzle-kit for: ${configPath}`); execSync(`npx drizzle-kit migrate --config=${configPath}`, { encoding: 'utf-8' }); }); } else { console.log('The configs directory does not exist.'); } })(); ``` This script: - Runs only after a pull request (PR) has been merged (the workflow's `if` condition enforces this). It reads through the `configs` directory and applies the migrations defined in each `drizzle.config.ts` file for every project or customer, ensuring that all databases use the same schema. - Uses `npx` to run the `drizzle-kit migrate` command against each `drizzle.config.ts` file, ensuring that the schema is applied to all databases. The source code for this migration script is located at: `src/scripts/migrate.js`. This approach automatically includes any new projects or customers added to the system, as well as schema changes that need to be applied across all databases. ### Summary Here's an overview of the workflow: - We used a script to automate the creation of Drizzle ORM configuration files (`drizzle.config.ts`) and securely store database connection strings as GitHub secrets. - We used a migration script to iterate through the `configs` directory and apply schema changes to all databases via `drizzle-kit migrate`. - The GitHub Actions workflow triggers migrations automatically when a PR is merged. Environment variables for each project are explicitly injected into the workflow, giving Drizzle ORM access to the connection strings needed for schema updates. ## Backing up Projects to Your Own S3 As a managed database, Neon already takes care of securing your data, always keeping a full copy of your dataset in object storage.
But if your use case or company demands that you also keep a copy of your data in your own S3, this section covers how to automate the process via a scheduled GitHub Action. A more extensive explanation can be found in this two-part series: [Part 1](https://neon.com/docs/manage/backups-aws-s3-backup-part-1), [Part 2](https://neon.com/docs/manage/backups-aws-s3-backup-part-2). ### AWS IAM configuration First, GitHub must be added as an identity provider to allow the Action to use your AWS credentials. To create a new Identity Provider, navigate to **IAM > Access Management > Identity Providers** and click **Add provider**. On the next screen, select **OpenID Connect** and add the following to the **Provider URL** and **Audience** fields: 1. Provider URL: `https://token.actions.githubusercontent.com` 2. Audience: `sts.amazonaws.com` When you're done, click **Add Provider**. You should now see the provider listed under **IAM > Access Management > Identity Providers**. Now, you must create a role, which is an identity that you can assume to obtain temporary security credentials for specific tasks or actions within AWS. Navigate to **IAM > Access Management > Roles**, and click **Create role**. On the next screen you can create a Trusted Identity for the Role. Select **Trusted Identity**. On the next screen, select **Web Identity**, then select `token.actions.githubusercontent.com` from the **Identity Provider** dropdown menu. Once you select the Identity Provider, you'll be shown a number of fields to fill out. Select `sts.amazonaws.com` from the **Audience** dropdown menu, then fill out the GitHub repository details as per your requirements. When you're ready, click **Next**. You can skip selecting anything from the Add Permissions screen and click **Next** to continue. On this screen give the **Role** a name and description. You'll use the Role name in the code for the GitHub Action. When you're ready click **Create role**. ### S3 bucket policy This section assumes you already have an S3 bucket. If you need instructions on how to create a bucket, refer to the [create an S3 bucket](https://neon.com/docs/manage/backups-aws-s3-backup-part-1) documentation. To ensure the Role being used in the GitHub Action can perform actions on the S3 bucket, you'll need to update the bucket policy. Select your bucket, then select the **Permissions** tab and click **Edit**. You can now add the following policy, which grants the Role you created earlier access to perform S3 List, Get, Put, and Delete actions. Replace the Role name (`neon-multiple-db-s3-backups-github-action`) with your Role name and replace the S3 bucket name (`neon-multiple-db-s3-backups`) with your S3 bucket name. ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::627917386332:role/neon-multiple-db-s3-backups-github-action" }, "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": [ "arn:aws:s3:::neon-multiple-db-s3-backups", "arn:aws:s3:::neon-multiple-db-s3-backups/*" ] } ] } ``` When you're ready, click **Save changes**. ### GitHub secrets Create the following GitHub Secrets to hold various values that you likely won't want to expose or repeat in code: - `AWS_ACCOUNT_ID`: This can be found by clicking on your user name in the AWS console.
- `S3_BUCKET_NAME`: In my case, this would be `neon-multiple-db-s3-backups` - `IAM_ROLE`: In my case, this would be `neon-multiple-db-s3-backups-github-action` ### Scheduled pg_dump/restore GitHub Action Before diving into the code, here's a quick look at the example setup. There are three databases set up for three fictional customers, all running Postgres 16 and all deployed to us-east-1. We will be backing up each database into its own folder within an S3 bucket, with different schedules and retention periods. All the code in this example lives [in this repository](https://github.com/neondatabase-labs/neon-multiple-db-s3-backups). Using the same naming conventions, there are three new files in the `.github/workflows` folder in the repository, one per customer database: 1. `acme-analytics-prod.yml` 2. `paycorp-payments-prod.yml` 3. a third workflow following the same naming pattern for the remaining customer All the Actions are technically the same (besides the name of the file), but there are several areas where they differ. These are: 1. The workflow name 2. The `DATABASE_URL` 3. The `RETENTION` period For example, in the first `.yml` file, the workflow name is `acme-analytics-prod`, the `DATABASE_URL` points to `secrets.ACME_ANALYTICS_PROD`, and the `RETENTION` period is 7 days. Here's the full Action, and below the code snippet, we'll explain how it all works. ```yaml # .github/workflows/acme-analytics-prod.yml name: acme-analytics-prod on: schedule: - cron: '0 0 * * *' # Runs at midnight UTC workflow_dispatch: jobs: db-backup: runs-on: ubuntu-latest permissions: id-token: write env: RETENTION: 7 DATABASE_URL: ${{ secrets.ACME_ANALYTICS_PROD }} IAM_ROLE: ${{ secrets.IAM_ROLE }} AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }} S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }} AWS_REGION: 'us-east-1' PG_VERSION: '16' steps: - name: Install PostgreSQL run: | sudo apt install -y postgresql-common yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh sudo apt install -y postgresql-${{ env.PG_VERSION }} - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }} aws-region: ${{ env.AWS_REGION }} - name: Set file, folder and path variables run: | GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').gz" FOLDER_NAME="${{ github.workflow }}" UPLOAD_PATH="s3://${{ env.S3_BUCKET_NAME }}/${FOLDER_NAME}/${GZIP_NAME}" echo "GZIP_NAME=${GZIP_NAME}" >> $GITHUB_ENV echo "FOLDER_NAME=${FOLDER_NAME}" >> $GITHUB_ENV echo "UPLOAD_PATH=${UPLOAD_PATH}" >> $GITHUB_ENV - name: Create folder if it doesn't exist run: |
if ! aws s3api head-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" 2>/dev/null; then aws s3api put-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/"; fi - name: Run pg_dump run: | /usr/lib/postgresql/${{ env.PG_VERSION }}/bin/pg_dump ${{ env.DATABASE_URL }} | gzip > "${{ env.GZIP_NAME }}" - name: Empty bucket of old files run: | THRESHOLD_DATE=$(date -d "-${{ env.RETENTION }} days" +%Y-%m-%dT%H:%M:%SZ) aws s3api list-objects --bucket ${{ env.S3_BUCKET_NAME }} --prefix "${{ env.FOLDER_NAME }}/" --query "Contents[?LastModified<'${THRESHOLD_DATE}'] | [?ends_with(Key, '.gz')].{Key: Key}" --output text | while read -r file; do aws s3 rm "s3://${{ env.S3_BUCKET_NAME }}/${file}"; done - name: Upload to bucket run: | aws s3 cp "${{ env.GZIP_NAME }}" "${{ env.UPLOAD_PATH }}" --region ${{ env.AWS_REGION }} ``` Starting from the top, there are a few configuration options: #### Action configuration ```yaml name: acme-analytics-prod on: schedule: - cron: '0 0 * * *' # Runs at midnight UTC workflow_dispatch: ``` - `name`: This is the workflow name and will also be used when creating the folder in the S3 bucket. - `cron`: This determines how often the Action will run; take a look at the GitHub docs where the [POSIX cron syntax](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule) is explained. #### Environment variables ```yaml env: RETENTION: 7 DATABASE_URL: ${{ secrets.ACME_ANALYTICS_PROD }} IAM_ROLE: ${{ secrets.IAM_ROLE }} AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }} S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }} AWS_REGION: 'us-east-1' PG_VERSION: '16' ``` - `RETENTION`: This determines how long a backup file should remain in the S3 bucket before it's deleted. - `DATABASE_URL`: This is the Neon Postgres connection string for the database you're backing up. - `IAM_ROLE`: This is the name of the AWS IAM Role. - `AWS_ACCOUNT_ID`: This is your AWS Account ID. - `S3_BUCKET_NAME`: This is the name of the S3 bucket where all backups are being stored. - `AWS_REGION`: This is the region where the S3 bucket is deployed. - `PG_VERSION`: This is the version of Postgres to install. #### GitHub Secrets As we mentioned above, several of the above environment variables are defined using secrets. These variables can be added to **Settings > Secrets and variables > Actions**; for example, the connection string for the fictional ACME Analytics Prod database is stored as the `ACME_ANALYTICS_PROD` secret. #### Action steps **Install Postgres** This step installs Postgres into the GitHub Action's virtual environment. The version to install is defined by the `PG_VERSION` environment variable. ```yaml - name: Install PostgreSQL run: | sudo apt install -y postgresql-common yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh sudo apt install -y postgresql-${{ env.PG_VERSION }} ``` **Configure AWS credentials** This step configures AWS credentials within the GitHub Action virtual environment, allowing the workflow to interact with AWS services securely. ```yaml - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }} aws-region: ${{ env.AWS_REGION }} ``` **Set file, folder and path variables** In this step I've created three variables that are all output to `GITHUB_ENV`. This allows me to access the values from other steps in the Action.
```yaml - name: Set file, folder and path variables run: | GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').gz" FOLDER_NAME="${{ github.workflow }}" UPLOAD_PATH="s3://${{ env.S3_BUCKET_NAME }}/${FOLDER_NAME}/${GZIP_NAME}" echo "GZIP_NAME=${GZIP_NAME}" >> $GITHUB_ENV echo "FOLDER_NAME=${FOLDER_NAME}" >> $GITHUB_ENV echo "UPLOAD_PATH=${UPLOAD_PATH}" >> $GITHUB_ENV ``` The three variables are as follows: 1. `GZIP_NAME`: The name of the `.gz` file, derived from the date, producing a file name similar to `October-21-2024@07:53:02.gz` 2. `FOLDER_NAME`: The folder where the `.gz` files are to be uploaded 3. `UPLOAD_PATH`: This is the full path that includes the S3 bucket name, folder name, and `.gz` file name **Create folder if it doesn't exist** This step creates a new folder (if one doesn't already exist) inside the S3 bucket using the `FOLDER_NAME` as defined in the previous step. ## Final remarks You can create as many of these Actions as you need. Just be careful to double-check the `DATABASE_URL` to avoid backing up a database to the wrong folder. **Important**: GitHub Actions will time out after ~6 hours. The size of your database and how you've configured it will determine how long the `pg_dump` step takes. If you do experience timeout issues, you can self-host [GitHub Action runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners). --- # Source: https://neon.com/llms/guides-datadog.txt # Datadog integration > This document details the integration of Neon with Datadog, outlining the steps to configure and monitor Neon's database performance metrics within the Datadog platform. ## Source - [Datadog integration HTML](https://neon.com/docs/guides/datadog): The original HTML version of this documentation What you will learn: - How to set up the integration - How to configure log forwarding - The full list of externally-available metrics External docs: - [Datadog API and Application Keys](https://docs.datadoghq.com/account_management/api-app-keys/) - [Identify Datadog site](https://docs.datadoghq.com/getting_started/site/#access-the-datadog-site/) Available for Scale plan users, the Neon Datadog integration lets you monitor Neon database performance, resource utilization, and system health directly from Datadog's observability platform. ## How it works The integration enables secure, reliable export of Neon metrics and Postgres logs to Datadog. By configuring the integration with your Datadog API key, Neon automatically sends data from your project to your selected Datadog site. **Note**: Data is sent for all computes in your Neon project. For example, if you have multiple branches, each with an attached compute, both metrics and logs will be collected and sent for each compute. ### Neon metrics The integration exports [a comprehensive set of metrics](https://neon.com/docs/guides/datadog#available-metrics) including: - **Connection counts** — Tracks active and idle database connections. - **Database size** — Monitors total size of all databases in bytes. - **Replication delay** — Measures replication lag in bytes and seconds. - **Compute metrics** — Includes CPU and memory usage statistics for your compute. ### Postgres logs **Note** Beta: **Postgres logs export** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).
The Neon Datadog integration can forward Postgres logs to your Datadog account. These logs provide visibility into database activity, errors, and performance. See [Export Postgres logs to Datadog](https://neon.com/docs/guides/datadog#export-postgres-logs-to-datadog) for details. ## Prerequisites Before getting started, ensure the following: - You have a Neon account and project. If not, see [Sign up for a Neon account](https://neon.com/docs/get-started/signing-up). - You have a Datadog account and API key. - You know the region you selected for your Datadog account. Here's how to check: [Find your Datadog region](https://docs.datadoghq.com/getting_started/site/#access-the-datadog-site) ## Steps to integrate Datadog with Neon 1. In the Neon Console, navigate to the **Integrations** page in your Neon project. 1. Locate the **Datadog** card and click **Add**. 1. Enter your **Datadog API key**. You can generate or retrieve [Datadog API Keys](https://app.datadoghq.com/organization-settings/api-keys) from your Datadog organization. For instructions, see [Datadog API and Application Keys](https://docs.datadoghq.com/account_management/api-app-keys/). 1. Select the Datadog **site** that you used when setting up your Datadog account. 1. Select what you want to export. You can enable either or both: - **Metrics**: System metrics and database statistics (CPU, memory, connections, etc.) - **Postgres logs**: Error messages, warnings, connection events, and system notifications 1. Click **Confirm** to complete the integration. **Note**: You can change these settings later by editing your integration configuration. Optionally, you can import the Neon-provided JSON configuration file into Datadog, which creates a pre-built dashboard from **Neon metrics**, similar to the charts available on our Monitoring page. See [Import Neon dashboard](https://neon.com/docs/guides/datadog#import-neon-dashboard). > We do not yet provide a pre-built dashboard for **Postgres logs**, but it's coming soon. Once the integration is set up, Neon will start sending Neon metrics to Datadog, and you can use these metrics to create custom dashboards and alerts in Datadog. **Note**: Neon computes only send logs and metrics when they are active. If the [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature is enabled and a compute is suspended due to inactivity, no logs or metrics will be sent during the suspension. This may result in gaps in your Neon logs and metrics in Datadog. If you notice missing data in Datadog, check if your compute is suspended. You can verify a compute's status as `Idle` or `Active` on the **Branches** page in the Neon console, and review **Suspend compute** events on the **System operations** tab of the **Monitoring** page. Additionally, if you are setting up Neon's Datadog integration for a project with an inactive compute, you'll need to activate the compute before it can send metrics and logs to Datadog. To activate it, simply run a query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any connected client on the branch associated with the compute. ## Example usage in Datadog Once integrated, you can create custom dashboards in Datadog by querying the metrics sent from Neon. Use Datadog's **Metrics Explorer** to search for metrics like `neon_connection_counts`, `neon_db_total_size`, and `host_cpu_seconds_total`. You can also set alerts based on threshold values for critical metrics. 
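As a sketch of what such an alert can look like via Datadog's monitor API (the metric name comes from Neon's exported set, but the threshold, monitor name, and the `DD_API_KEY`/`DD_APP_KEY` variables are placeholders, and the URL assumes the `datadoghq.com` site):

```bash
# Alert when total Neon connections, averaged over 5 minutes, exceed 900
curl -X POST "https://api.datadoghq.com/api/v1/monitor" \
  --header "Content-Type: application/json" \
  --header "DD-API-KEY: $DD_API_KEY" \
  --header "DD-APPLICATION-KEY: $DD_APP_KEY" \
  --data '
{
  "name": "Neon connection count high",
  "type": "metric alert",
  "query": "avg(last_5m):sum:neon_connection_counts{*} > 900",
  "message": "Neon connection count is approaching the configured limit."
}
'
```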
## Import the Neon dashboard As part of the integration, Neon provides a JSON configuration file that you can import into Datadog to start with a pre-built dashboard based on a subset of Neon metrics. Here's how you can import the dashboard: 1. In the Neon Console, open your Datadog integration from the **Integrations** page. 1. Scroll to the bottom of the panel and copy the JSON from there. OR You can copy the [JSON below](https://neon.com/docs/guides/datadog#dashboard-json) instead. 1. Next, create a new dashboard in Datadog. 1. Open **Configure**, select **Import dashboard JSON**, then paste the Neon-provided configuration JSON. If any of the computes in your project are active, you should start seeing data in the resulting charts right away. By default, the charts show metrics for all active endpoints in your project. You can filter results to one or more selected endpoints using the **endpoint_id** variable dropdown selector. ### Dashboard JSON Details: Copy JSON configuration ```json { "title": "Single Neon Compute metrics (with dropdown)", "description": "", "widgets": [ { "id": 3831219857468963, "definition": { "title": "RAM", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "time": {}, "type": "timeseries", "requests": [ { "formulas": [ { "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "alias": "Cached", "formula": "query3" }, { "alias": "Used", "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "formula": "query1 - query2" } ], "queries": [ { "name": "query3", "data_source": "metrics", "query": "max:host_memory_cached_bytes{$endpoint_id}" }, { "name": "query1", "data_source": "metrics", "query": "max:host_memory_total_bytes{$endpoint_id}" }, { "name": "query2", "data_source": "metrics", "query": "max:host_memory_available_bytes{$endpoint_id}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 0, "y": 0, "width": 6, "height": 2 } }, { "id": 7296782684811837, "definition": { "title": "CPU", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "time": {}, "type": "timeseries", "requests": [ { "formulas": [ { "alias": "Used", "formula": "per_minute(query1)" } ], "queries": [ { "name": "query1", "data_source": "metrics", "query": "max:host_cpu_seconds_total{!mode:idle,$endpoint_id}.as_rate()" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 6, "y": 0, "width": 6, "height": 2 } }, { "id": 7513607855022102, "definition": { "title": "Connections", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "type": "timeseries", "requests": [ { "formulas": [ { "alias": "Total", "formula": "query1" }, { "alias": "Active", "formula": "query2" }, { "alias": "Idle", "formula": "query3" } ], "queries": [ { "name": "query1", "data_source": "metrics", "query": "sum:neon_connection_counts{!datname:postgres,$endpoint_id}" }, { "name": "query2", "data_source": "metrics", "query": "sum:neon_connection_counts{!datname:postgres,state:active ,$endpoint_id}" }, { "name": "query3", 
"data_source": "metrics", "query": "sum:neon_connection_counts{!datname:postgres,!state:active,$endpoint_id}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 0, "y": 2, "width": 6, "height": 3 } }, { "id": 5523349536895199, "definition": { "title": "Database size", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "type": "timeseries", "requests": [ { "formulas": [ { "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "formula": "query2" }, { "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "alias": "Size of all databases", "formula": "query3" }, { "alias": "Max size", "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "formula": "query1 * 1024 * 1024" } ], "queries": [ { "name": "query2", "data_source": "metrics", "query": "max:neon_pg_stats_userdb{kind:db_size,$endpoint_id} by {datname}" }, { "name": "query3", "data_source": "metrics", "query": "max:neon_db_total_size{$endpoint_id}" }, { "name": "query1", "data_source": "metrics", "query": "max:neon_max_cluster_size{$endpoint_id}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ], "yaxis": { "include_zero": false, "scale": "log" } }, "layout": { "x": 6, "y": 2, "width": 6, "height": 3 } }, { "id": 1608572645458648, "definition": { "title": "Deadlocks", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "type": "timeseries", "requests": [ { "formulas": [ { "alias": "Deadlocks", "formula": "query1" } ], "queries": [ { "name": "query1", "data_source": "metrics", "query": "max:neon_pg_stats_userdb{kind:deadlocks,$endpoint_id} by {datname}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 0, "y": 5, "width": 6, "height": 2 } }, { "id": 5728659221127513, "definition": { "title": "Changed rows", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "type": "timeseries", "requests": [ { "formulas": [ { "alias": "Rows inserted", "formula": "diff(query1)" }, { "alias": "Rows deleted", "formula": "diff(query2)" }, { "alias": "Rows updated", "formula": "diff(query3)" } ], "queries": [ { "name": "query1", "data_source": "metrics", "query": "max:neon_pg_stats_userdb{kind:inserted,$endpoint_id}" }, { "name": "query2", "data_source": "metrics", "query": "max:neon_pg_stats_userdb{kind:deleted,$endpoint_id}" }, { "name": "query3", "data_source": "metrics", "query": "max:neon_pg_stats_userdb{kind:updated,$endpoint_id}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 6, "y": 5, "width": 6, "height": 2 } }, { "id": 630770240665422, "definition": { "title": "Local file cache hit rate", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "time": {}, 
"type": "timeseries", "requests": [ { "formulas": [ { "alias": "Cache hit rate", "formula": "query1 / (query1 + query2)", "number_format": { "unit": { "type": "canonical_unit", "unit_name": "fraction" } } } ], "queries": [ { "name": "query1", "data_source": "metrics", "query": "max:neon_lfc_hits{$endpoint_id}" }, { "name": "query2", "data_source": "metrics", "query": "max:neon_lfc_misses{$endpoint_id}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 0, "y": 7, "width": 6, "height": 3 } }, { "id": 2040733022455075, "definition": { "title": "Working set size", "title_size": "16", "title_align": "left", "show_legend": true, "legend_layout": "auto", "legend_columns": [ "avg", "min", "max", "value", "sum" ], "time": {}, "type": "timeseries", "requests": [ { "formulas": [ { "alias": "Local file cache size", "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "formula": "query2" }, { "number_format": { "unit": { "type": "canonical_unit", "unit_name": "byte" } }, "formula": "8192 * query1" } ], "queries": [ { "name": "query2", "data_source": "metrics", "query": "max:neon_lfc_cache_size_limit{$endpoint_id}" }, { "name": "query1", "data_source": "metrics", "query": "max:neon_lfc_approximate_working_set_size_windows{$endpoint_id} by {duration}" } ], "response_format": "timeseries", "style": { "palette": "dog_classic", "order_by": "values", "line_type": "solid", "line_width": "normal" }, "display_type": "line" } ] }, "layout": { "x": 6, "y": 7, "width": 6, "height": 3 } } ], "template_variables": [ { "name": "endpoint_id", "prefix": "endpoint_id", "available_values": [], "default": "*" }, { "name": "project_id", "prefix": "project_id", "available_values": [], "default": "*" }, { "name": "state", "prefix": "state", "available_values": [], "default": "*" } ], "layout_type": "ordered", "notify_list": [], "reflow_type": "fixed" } ``` ## Available metrics Neon exports a comprehensive set of metrics including connection counts, database size, replication delay, and compute metrics (CPU and memory usage). For a complete list of all available metrics with detailed descriptions, see the [Metrics and logs reference](https://neon.com/docs/reference/metrics-logs). ## Export Postgres logs to Datadog You can export your Postgres logs from your Neon compute to your Datadog account. These logs provide visibility into database activity, errors, and performance. For detailed information about log fields and technical considerations, see the [Metrics and logs reference](https://neon.com/docs/reference/metrics-logs). ### Performance impact Enabling this feature may result in: - An increase in compute resource usage for log processing - Additional network egress for log transmission, which is billed after 100 GB on paid plans - Associated costs based on log volume in Datadog ## Feedback and future improvements We're always looking to improve! If you have feature requests or feedback, please let us know via the [Feedback form](https://console.neon.tech/app/projects?modal=feedback) in the Neon Console or on our [Discord channel](https://discord.com/channels/1176467419317940276/1176788564890112042). --- # Source: https://neon.com/llms/guides-deno.txt # Use Neon with Deno Deploy > The document outlines the steps for integrating Neon with Deno Deploy, detailing how to configure and deploy a Neon database within a Deno application environment. 
## Source - [Use Neon with Deno Deploy HTML](https://neon.com/docs/guides/deno): The original HTML version of this documentation [Deno Deploy](https://deno.com/deploy) is a scalable serverless platform for running JavaScript, TypeScript, and WebAssembly at the edge, designed by the creators of Deno. It simplifies the deployment process and offers automatic scaling, zero-downtime deployments, and global distribution. This guide demonstrates how to connect to a Neon Postgres database from a simple Deno application using the [Neon serverless driver](https://jsr.io/@neon/serverless) on [JSR](https://jsr.io/). The guide covers two deployment options: - [Deploying your application locally with Deno Runtime](https://neon.com/docs/guides/deno#deploy-your-application-locally-with-deno-runtime) - [Deploying your application with the Deno Deploy serverless platform](https://neon.com/docs/guides/deno#deploy-your-application-with-deno-deploy) ## Prerequisites To follow the instructions in this guide, you will need: - A Neon project. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples. - To use the Deno Deploy serverless platform, you require a Deno Deploy account. Visit [Deno Deploy](https://deno.com/deploy) to sign up or log in. ## Retrieve your Neon database connection string Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Your connection string should look something like this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require ``` You'll need the connection string a little later in the setup. ## Deploy your application locally with Deno Runtime Deno Runtime is an open-source runtime for TypeScript and JavaScript. The following instructions describe how to deploy an example application locally using Deno Runtime. ### Install the Deno Runtime and deployctl Follow the [Install Deno and deployctl](https://docs.deno.com/deploy/manual/#install-deno-and-deployctl) instructions in the Deno documentation to install the Deno runtime and `deployctl` command-line utility on your local machine. ### Set up the Neon serverless driver First, install the Neon serverless driver using the `deno add` command: ```bash deno add jsr:@neon/serverless ``` **Note**: You can also use npm to install the Neon serverless driver ```bash npx jsr add @neon/serverless ``` This will create or update your `deno.json` file with the necessary dependency: ```json { "imports": { "@neon/serverless": "jsr:@neon/serverless@^0.10.1" } } ``` ### Create the example application Next, create the `server.ts` script on your local machine. ```ts // server.ts import { neon } from '@neon/serverless'; const databaseUrl = Deno.env.get('DATABASE_URL')!; const sql = neon(databaseUrl); // Create the books table and insert initial data if it doesn't exist await sql` CREATE TABLE IF NOT EXISTS books ( id SERIAL PRIMARY KEY, title TEXT NOT NULL, author TEXT NOT NULL ) `; // Check if the table is empty const { count } = await sql`SELECT COUNT(*)::INT as count FROM books`.then((rows) => rows[0]); if (count === 0) { // The table is empty, insert the book records await sql` INSERT INTO books (title, author) VALUES ('The Hobbit', 'J. R. R. Tolkien'), ('Harry Potter and the Philosopher''s Stone', 'J. K. 
Rowling'), ('The Little Prince', 'Antoine de Saint-Exupéry') `; } // Start the server Deno.serve(async (req) => { const url = new URL(req.url); if (url.pathname !== '/books') { return new Response('Not Found', { status: 404 }); } try { switch (req.method) { case 'GET': { const books = await sql`SELECT * FROM books`; return new Response(JSON.stringify(books, null, 2), { headers: { 'content-type': 'application/json' }, }); } default: return new Response('Method Not Allowed', { status: 405 }); } } catch (err) { console.error(err); return new Response(`Internal Server Error\n\n${err.message}`, { status: 500, }); } }); ``` The script creates a table named `books` in the `neondb` database if it does not exist and inserts some data into it. It then starts a server that listens for requests on the `/books` endpoint. When a request is received, the script returns data from the `books` table. ### Run the script locally To run the script locally, set the `DATABASE_URL` environment variable to the Neon connection string you copied earlier. ```bash export DATABASE_URL=YOUR_NEON_CONNECTION_STRING ``` Then, run the command below to start the app server. The `--allow-env` flag allows the script to access the environment variables, and the `--allow-net` flag allows the script to make network requests. If the Deno runtime prompts you to allow these permissions, enter `y` to continue. ```bash deno run --allow-env --allow-net server.ts ``` ### Query the endpoint You can request the `/books` endpoint with a `cURL` command to view the data returned by the script: ```bash curl http://localhost:8000/books ``` The `cURL` command should return the following data: ```json [ { "id": 1, "title": "The Hobbit", "author": "J. R. R. Tolkien" }, { "id": 2, "title": "Harry Potter and the Philosopher's Stone", "author": "J. K. Rowling" }, { "id": 3, "title": "The Little Prince", "author": "Antoine de Saint-Exupéry" } ] ``` ## Deploy your application with Deno Deploy Deno Deploy is a globally distributed platform for serverless JavaScript applications. Your code runs on managed servers geographically close to your users, enabling low latency and faster response times. Deno Deploy applications run on light-weight V8 isolates powered by the Deno runtime. ### Set up the project 1. If you have not done so already, install the `deployctl` command-line utility, as described [above](https://neon.com/docs/guides/deno#install-the-deno-runtime-and-deployctl). 1. If you have not done so already, create the example `server.ts` application on your local machine, as described [above](https://neon.com/docs/guides/deno#create-the-example-application). 1. Register or log in to [Deno](https://deno.com/) and navigate to the [Create a project](https://dash.deno.com/new) page, where you can select a project template for your preferred framework, link a code repo, or create an empty project. 1. The example application in this guide is a simple Deno script you've created locally, so let's select the **Create an empty project** option. Note the name of your Deno Deploy project. You will need it in a later step. Projects are given a generated Heroku-style name, which looks something like this: `cloudy-otter-57`. 1. Click the `Settings` button and add a `DATABASE_URL` environment variable. Set the value to your Neon connection string and click **Save**. 1. To authenticate `deployctl` from the terminal, you will need an access token for your Deno Deploy account. 
Navigate back to your [Deno dashboard](https://dash.deno.com/account#access-tokens) and create a new access token. Copy the token value and set the `DENO_DEPLOY_TOKEN` environment variable on your local machine by running this command from your terminal: ```bash export DENO_DEPLOY_TOKEN=YOUR_ACCESS_TOKEN ``` ### Deploy using deployctl To deploy the application, navigate to the directory of your `server.ts` application, and run the following command: ```bash deployctl deploy --project=YOUR_DENO_DEPLOY_PROJECT_NAME --prod server.ts ``` The `--prod` flag specifies that the application should be deployed to the production environment. The `deployctl` command deploys the application to the Deno Deploy serverless platform. Once the deployment is complete, you'll see a message similar to the following: ```bash $ deployctl deploy --project=cloudy-otter-57 --prod server.ts ✔ Deploying to project cloudy-otter-57. ℹ The project does not have a deployment yet. Automatically pushing initial deployment to production (use --prod for further updates). ✔ Entrypoint: /home/ubuntu/neon-deno/server.ts ℹ Uploading all files from the current dir (/home/ubuntu/neon-deno) ✔ Found 1 asset. ✔ Uploaded 1 new asset. ✔ Production deployment complete. ✔ Created config file 'deno.json'. View at: - https://cloudy-otter-57-8csne31fymac.deno.dev - https://cloudy-otter-57.deno.dev ``` ### Verifying the deployment You can now access the application at the URL specified in the output. You can verify its connection to your Neon database by visiting the `/books` endpoint in your browser or using `cURL` to see if the data is returned as expected. ```bash $ curl https://cloudy-otter-57.deno.dev/books [ { "id": 1, "title": "The Hobbit", "author": "J. R. R. Tolkien" }, { "id": 2, "title": "Harry Potter and the Philosopher's Stone", "author": "J. K. Rowling" }, { "id": 3, "title": "The Little Prince", "author": "Antoine de Saint-Exupéry" } ] ``` To check the health of the deployment or modify settings, navigate to the [Project Overview](https://dash.deno.com/account/projects) page and select your project from the **Projects** list. ### Deploying using GitHub When deploying a more complex Deno application, with custom build steps, you can use Deno's GitHub integration. The integration lets you link a Deno Deploy project to a GitHub repository. For more information, see [Deploying with GitHub](https://docs.deno.com/deploy/manual/how-to-deploy). ## Removing the example application and Neon project To delete the example application on Deno Deploy, follow these steps: 1. From the Deno Deploy [dashboard](https://dash.deno.com/account/projects), select your **Project**. 1. Select the **Settings** tab. 1. In the **Danger Zone** section, click **Delete** and follow the instructions. To delete your Neon project, refer to [Delete a project](https://neon.com/docs/manage/projects#delete-a-project). ## Source code You can find the source code for the application described in this guide on GitHub. 
- [Use Neon with Deno Deploy](https://github.com/neondatabase/examples/tree/main/deploy-with-deno): Connect a Neon Postgres database to your Deno Deploy application

## Resources

- [Deno Deploy](https://deno.com/deploy)
- [Deno Runtime Quickstart](https://docs.deno.com/runtime/manual)
- [Deno Deploy Quickstart](https://docs.deno.com/deploy/manual/)
- [Neon Serverless Driver](https://jsr.io/@neon/serverless)
- [JSR](https://jsr.io/)

---

# Source: https://neon.com/llms/guides-django-migrations.txt

# Schema migration with Neon Postgres and Django

> This document guides Neon users through performing schema migrations in a PostgreSQL database using Django, detailing the steps necessary to integrate and manage database changes effectively within the Neon environment.

## Source

- [Schema migration with Neon Postgres and Django HTML](https://neon.com/docs/guides/django-migrations): The original HTML version of this documentation

[Django](https://www.djangoproject.com/) is a high-level Python web framework for building database-driven applications. It provides an ORM (Object-Relational Mapping) layer that abstracts database operations, making it easy to interact with databases using Python code. Django also includes a powerful migration system that allows you to define and manage database schema changes over time.

This guide demonstrates how to use Django with a Neon Postgres database. We'll create a simple Django application and walk through the process of setting up the database, defining models, and generating and running migrations to manage schema changes.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- [Python](https://www.python.org/) installed on your local machine. We recommend Python 3.8 or higher.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select a project or click the `New Project` button to create a new one.

### Retrieve your Neon database connection string

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

**Note**: Neon supports both direct and pooled database connection strings, which you can find by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can lead to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations. For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

## Setting up the Django project

### Set up the Python environment

To manage our Django project dependencies, we create a new Python virtual environment. Run the following commands in your terminal to set it up.
```bash
python -m venv myenv
```

Activate the virtual environment by running the following command:

```bash
# On macOS and Linux
source myenv/bin/activate

# On Windows
myenv\Scripts\activate
```

With the virtual environment activated, we can create a new directory for our Django project and install the required packages:

```bash
mkdir guide-neon-django && cd guide-neon-django

pip install Django "psycopg2-binary"
pip install python-dotenv dj-database-url
pip freeze > requirements.txt
```

We installed Django and the `psycopg2-binary` package to connect to the Neon Postgres database. We also added the `python-dotenv` package to easily read environment variables, and the `dj-database-url` package to parse the Neon connection string into Django settings. Finally, we saved the installed packages to a `requirements.txt` file so the project can be easily recreated in another environment.

### Create a new Django project

Run the following command to create a new Django project in the current directory:

```bash
django-admin startproject guide_neon_django .
```

This command creates a new Django project named `guide_neon_django` in the current directory.

### Set up the database configuration

Create a `.env` file in the project root directory and add the `DATABASE_URL` environment variable to it. Use the connection string that you obtained from the Neon Console earlier.

```bash
# .env
DATABASE_URL=NEON_POSTGRES_CONNECTION_STRING
```

For Django to read the environment variables from the `.env` file, open the `settings.py` file located in the `guide_neon_django` directory and add the following code, updating the `DATABASES` setting:

```python
# settings.py
import os

import dj_database_url
import dotenv

# Load the .env file; with no path argument, python-dotenv searches upward
# from this file's directory and finds the .env file in the project root
dotenv.load_dotenv()

DATABASES = {
    "default": dj_database_url.parse(
        url=os.getenv("DATABASE_URL", ""),
        conn_max_age=600,
        conn_health_checks=True
    )
}
```

### Create a new Django app

Inside your project directory, run the following command to create a new Django app:

```bash
python manage.py startapp catalog
```

This command creates a new app named `catalog` inside the Django project.

## Defining data models and running migrations

### Specify the data model

Now, open the `models.py` file in your `catalog` app directory and define the database models for your application:

```python
# catalog/models.py
from django.db import models


class Author(models.Model):
    name = models.CharField(max_length=100)
    bio = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name


class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title
```

This code defines two models: `Author` and `Book`. The `Author` model represents an author with fields for `name`, `bio`, and a `created_at` timestamp. The `Book` model represents a book with fields for `title`, `author` (as a foreign key to the `Author` model), and a `created_at` timestamp. Django automatically creates an `id` field for each model as the primary key.

### Generate migration files

We first add the new application `catalog` to the list of installed apps for the Django project.
Open the `settings.py` file in the `guide_neon_django` directory and add the `catalog` app to the `INSTALLED_APPS` setting: ```python # settings.py INSTALLED_APPS = [ "django.contrib.admin", "django.contrib.auth", "django.contrib.contenttypes", "django.contrib.sessions", "django.contrib.messages", "django.contrib.staticfiles", "catalog" ] ``` To generate migration files based on the defined models, run the following command: ```bash python manage.py makemigrations ``` This command detects the new `Author` and `Book` models that were added and generates migration files in the `catalog/migrations` directory. ### Apply the migration Now, to apply the migration and create the corresponding tables in the Neon Postgres database, run the following command: ```bash python manage.py migrate ``` This command executes the migration files and creates the necessary tables in the database. Note that Django creates multiple other tables, such as `django_migrations` and `auth_user` for its internal usage. ### Seed the database To populate the database with some initial data, we can create a custom management command for our app. Create a new file named `populate.py` in the `catalog/management/commands` directory. ```bash mkdir -p catalog/management/commands touch catalog/management/commands/populate.py ``` Now, add the following code to the `populate.py` file to create some authors and books: ```python from django.core.management.base import BaseCommand from catalog.models import Author, Book class Command(BaseCommand): help = 'Seeds the database with sample authors and books' def handle(self, *args, **options): # Create authors authors = [ Author( name="J.R.R. Tolkien", bio="The creator of Middle-earth and author of The Lord of the Rings." ), Author( name="George R.R. Martin", bio="The author of the epic fantasy series A Song of Ice and Fire." ), Author( name="J.K. Rowling", bio="The creator of the Harry Potter series." ), ] Author.objects.bulk_create(authors) # Create books books = [ Book(title="The Fellowship of the Ring", author=authors[0]), Book(title="The Two Towers", author=authors[0]), Book(title="The Return of the King", author=authors[0]), Book(title="A Game of Thrones", author=authors[1]), Book(title="A Clash of Kings", author=authors[1]), Book(title="Harry Potter and the Philosopher's Stone", author=authors[2]), Book(title="Harry Potter and the Chamber of Secrets", author=authors[2]), ] Book.objects.bulk_create(books) self.stdout.write(self.style.SUCCESS('Successfully seeded the database.')) ``` Now, run the custom management command in your terminal and seed the database: ```bash python manage.py populate ``` ## Implement the application ### Create views to display data We can now create views to display the authors and books in our application. Create a file `views.py` in the `catalog` app directory and add the following code: ```python # catalog/views.py from django.http import JsonResponse from django.core import serializers from .models import Author, Book def list_authors(request): authors = Author.objects.all() data = [serializers.serialize('json', authors)] return JsonResponse(data, safe=False) def list_books_by_author(request, author_id): books = Book.objects.filter(author_id=author_id) data = [serializers.serialize('json', books)] return JsonResponse(data, safe=False) ``` We defined two views: `list_authors` to list all authors and `list_books_by_author` to list books by a specific author. The views return JSON responses with the serialized data. 
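The views serialize whole querysets with Django's built-in serializer. If you want to preview that payload before wiring up any URLs, a quick check from the Django shell (a sketch; run it inside `python manage.py shell`) looks like this:

```python
# Run inside `python manage.py shell` to preview the serialized authors payload
from django.core import serializers

from catalog.models import Author

# serialize() returns a JSON string describing every Author row
json_payload = serializers.serialize('json', Author.objects.all())
print(json_payload[:200])  # print the first 200 characters
```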
### Define URLs for the views

Next, create a file `urls.py` in the `catalog` app directory and add the following code:

```python
# catalog/urls.py
from django.urls import path

from . import views

urlpatterns = [
    path('authors/', views.list_authors, name='list_authors'),
    path('books/<int:author_id>/', views.list_books_by_author, name='list_books_by_author'),
]
```

The URLs are mapped to the views defined previously using the Django URL dispatcher.

### Include the app URLs in the project

Finally, include the `catalog` app URLs in the project's main `urls.py` file by updating the `urlpatterns` list:

```python
# guide_neon_django/urls.py
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('catalog/', include('catalog.urls')),
]
```

### Run the Django development server

To start the Django development server and test the application, run the following command:

```bash
python manage.py runserver
```

Navigate to the URL `http://localhost:8000/catalog/authors/` in your browser to view the list of authors. You can also view the books by a specific author by visiting `http://localhost:8000/catalog/books/<author_id>/`, replacing `<author_id>` with an author's ID (for example, `http://localhost:8000/catalog/books/1/`).

## Applying schema changes

We will demonstrate how to handle schema changes by adding a new field, `country`, to the `Author` model to store the author's country of origin.

### Update the data model

Open the `models.py` file in your `catalog` app directory and add a new field to the `Author` model:

```python
# catalog/models.py
class Author(models.Model):
    name = models.CharField(max_length=100)
    bio = models.TextField(blank=True)
    country = models.CharField(max_length=100, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name
```

### Generate and run the migration

To generate a new migration file for the schema change, run the following command:

```bash
python manage.py makemigrations
```

This command detects the updated `Author` model and generates a new migration file that adds the new field to the corresponding table in the database. Now, to apply the migration, run the following command:

```bash
python manage.py migrate
```

### Test the schema change

Restart the Django development server:

```bash
python manage.py runserver
```

Navigate to `http://localhost:8000/catalog/authors/` to view the list of authors. You should see the new `country` field included and set to empty for each author entry, reflecting the schema change.

## Conclusion

In this guide, we demonstrated how to set up a Django project with Neon Postgres, define database models, and generate and run migrations. Django's ORM and migration system make it easy to interact with the database and manage schema evolution over time.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Migrations with Neon and Django](https://github.com/neondatabase/guide-neon-django): Run migrations in a Neon-Django project

## Resources

For more information on the tools and concepts used in this guide, refer to the following resources:

- [Django Documentation](https://docs.djangoproject.com/)
- [Neon Postgres](https://neon.com/docs/introduction)

---

# Source: https://neon.com/llms/guides-django.txt

# Connect a Django application to Neon

> This document outlines the steps to connect a Django application to a Neon database, detailing configuration settings and necessary code modifications for seamless integration.
## Source

- [Connect a Django application to Neon HTML](https://neon.com/docs/guides/django): The original HTML version of this documentation

To connect to Neon from a Django application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details, including your password; they are required when defining connection settings.

To create a Neon project:

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Configure Django connection settings

Connecting to Neon requires configuring database connection settings in your Django project's `settings.py` file.

**Note**: To avoid the `endpoint ID is not specified` connection issue described [here](https://neon.com/docs/guides/django#connection-issues), be sure that you are using an up-to-date driver.

In your Django project, navigate to the `DATABASES` section of your `settings.py` file and modify the connection details as shown:

```python
# Add these at the top of your settings.py
from os import getenv
from dotenv import load_dotenv

load_dotenv()

# Replace the DATABASES section of your settings.py with this
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': getenv('PGDATABASE'),
        'USER': getenv('PGUSER'),
        'PASSWORD': getenv('PGPASSWORD'),
        'HOST': getenv('PGHOST'),
        'PORT': getenv('PGPORT', 5432),
        'OPTIONS': {
            'sslmode': 'require',
        },
        'DISABLE_SERVER_SIDE_CURSORS': True,
    }
}
```

**Note**: Neon places computes into an idle state and closes connections after 5 minutes of inactivity (see [Compute lifecycle](https://neon.com/docs/introduction/compute-lifecycle/)). To avoid connection errors, you can set the Django [CONN_MAX_AGE](https://docs.djangoproject.com/en/4.1/ref/settings/#std-setting-CONN_MAX_AGE) setting to 0 to close database connections at the end of each request so that your application does not attempt to reuse connections that were closed by Neon. From Django 4.1, you can use a higher `CONN_MAX_AGE` setting in combination with the [CONN_HEALTH_CHECKS](https://docs.djangoproject.com/en/4.1/ref/settings/#conn-health-checks) setting to enable connection reuse while preventing errors that might occur due to closed connections. For more information about these configuration options, see [Connection management](https://docs.djangoproject.com/en/4.1/ref/databases#connection-management) in the _Django documentation_.

You can find all of the connection details listed above by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). For additional information about Django project settings, see [Django Settings: Databases](https://docs.djangoproject.com/en/4.0/ref/settings#databases) in the Django documentation.
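The settings above read the standard `PG*` environment variables via `python-dotenv`. As a sketch, a matching `.env` file, using the placeholder values from the example connection strings in this documentation, would look like this:

```bash
# .env — replace these placeholder values with your own connection details
PGHOST=ep-cool-darkness-123456.us-east-2.aws.neon.tech
PGDATABASE=neondb
PGUSER=alex
PGPASSWORD=AbC123dEf
PGPORT=5432
```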
## Connection issues

- Django uses the `psycopg2` driver as the default adapter for Postgres. If you have an older version of that driver, you may encounter an `endpoint ID is not specified` error when connecting to Neon. This error occurs if the client library used by your driver does not support the Server Name Indication (SNI) mechanism in TLS, which Neon uses to route incoming connections. The `psycopg2` driver uses the `libpq` client library, which supports SNI as of v14.

  You can check your `psycopg2` and `libpq` versions by starting a Django shell in your Django project and running the following commands:

  ```bash
  # Start a Django shell
  python3 manage.py shell

  # Check versions
  import psycopg2
  print("psycopg2 version:", psycopg2.__version__)
  print("libpq version:", psycopg2._psycopg.libpq_version())
  ```

  The version number for `libpq` is presented in a different format; for example, version 14.1 is shown as 140001. If your `libpq` version is less than version 14, you can either upgrade your `psycopg2` driver to get a newer `libpq` version or use one of the workarounds described in our [Connection errors](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified) documentation. Upgrading your `psycopg2` driver may introduce compatibility issues with your Django or Python version, so you should test your application thoroughly.

- If you encounter an `SSL SYSCALL error: EOF detected` when connecting to the database, this typically occurs because the application is trying to reuse a connection after the Neon compute has been suspended due to inactivity. To resolve this issue, try one of the following options:
  - Set your Django [`CONN_MAX_AGE`](https://docs.djangoproject.com/en/5.1/ref/settings/#conn-max-age) setting to a value less than or equal to the scale to zero setting configured for your compute. The default is 5 minutes (300 seconds).
  - Enable [`CONN_HEALTH_CHECKS`](https://docs.djangoproject.com/en/5.1/ref/settings/#conn-health-checks) by setting it to `true`. This forces a health check to verify that the connection is alive before executing a query.

For information about configuring Neon's Scale to zero setting, see [Configuring Scale to zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide).

## Schema migration with Django

For schema migration with Django, see our guide:

- [Django Migrations](https://neon.com/docs/guides/django-migrations): Schema migration with Neon Postgres and Django

## Django application blog post and sample application

Learn how to use Django with Neon Postgres with this blog post and the accompanying sample application.

- [Blog Post: Using Django with Neon](https://neon.com/blog/python-django-and-neons-serverless-postgres): Learn how to build a Django application with Neon Postgres
- [Django sample application](https://github.com/evanshortiss/django-neon-quickstart): Django with Neon Postgres

## Community resources

- [Django Project: Build a Micro eCommerce with Python, Django, Neon Postgres, Stripe, & TailwindCSS](https://youtu.be/qx9nshX9CQQ?start=1569)

---

# Source: https://neon.com/llms/guides-dotnet-entity-framework.txt

# Connect an Entity Framework application to Neon

> This document guides users on connecting an Entity Framework application to Neon by detailing the necessary steps and configurations required for seamless integration within a .NET environment.

## Source

- [Connect an Entity Framework application to Neon HTML](https://neon.com/docs/guides/dotnet-entity-framework): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from an Entity Framework Core application. The example demonstrates how to set up a basic ASP.NET Core Web API project with Entity Framework Core using Npgsql as the database provider.

**Note**: The same configuration steps can be used for any .NET application using Entity Framework Core, including ASP.NET Core MVC, Blazor, or console applications.
To connect to Neon from an Entity Framework application:

1. [Create a Neon Project](https://neon.com/docs/guides/dotnet-entity-framework#create-a-neon-project)
2. [Create a .NET project and add dependencies](https://neon.com/docs/guides/dotnet-entity-framework#create-a-net-project-and-add-dependencies)
3. [Configure Entity Framework](https://neon.com/docs/guides/dotnet-entity-framework#configure-entity-framework)
4. [Run the application](https://neon.com/docs/guides/dotnet-entity-framework#run-the-application)

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a .NET project and add dependencies

1. Create a new ASP.NET Core Web API project and change to the newly created directory:

```bash
dotnet new webapi -n NeonEfExample
cd NeonEfExample
```

2. Delete the files `WeatherForecast.cs` and `Controllers/WeatherForecastController.cs`, as we won't be using them:

```bash
rm WeatherForecast.cs Controllers/WeatherForecastController.cs
```

3. Install the required packages.

**Important**: Ensure you install package versions that match your .NET version. You can verify your .NET version at any time by running `dotnet --version`.

```bash
dotnet tool install --global dotnet-ef --version YOUR_DOTNET_VERSION
dotnet add package Microsoft.EntityFrameworkCore.Design --version YOUR_DOTNET_VERSION
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL --version YOUR_DOTNET_VERSION
```

## Configure Entity Framework

1. Create a model class in `Models/Todo.cs`:

```csharp
namespace NeonEfExample.Models
{
    public class Todo
    {
        public int Id { get; set; }
        public string? Title { get; set; }
        public bool IsComplete { get; set; }
    }
}
```

2. Create a database context in `Data/ApplicationDbContext.cs`:

```csharp
using Microsoft.EntityFrameworkCore;
using NeonEfExample.Models;

namespace NeonEfExample.Data
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options)
        {
        }

        public DbSet<Todo> Todos => Set<Todo>();
    }
}
```

3. Update `appsettings.json` / `appsettings.Development.json` to add the connection string:

```json
{
  "ConnectionStrings": {
    "TodoDbConnection": "Host=your-neon-host;Database=your-db;Username=your-username;Password=your-password;SSL Mode=Require"
  }
}
```

4. Create a Todo controller in `Controllers/TodoController.cs`:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using NeonEfExample.Data;
using NeonEfExample.Models;

namespace NeonEfExample.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class TodoController : ControllerBase
    {
        private readonly ApplicationDbContext _context;

        public TodoController(ApplicationDbContext context)
        {
            _context = context;
        }

        [HttpGet]
        public async Task<ActionResult<IEnumerable<Todo>>> GetTodos()
        {
            return await _context.Todos.ToListAsync();
        }

        [HttpPost]
        public async Task<ActionResult<Todo>> PostTodo(Todo todo)
        {
            _context.Todos.Add(todo);
            await _context.SaveChangesAsync();
            return CreatedAtAction(nameof(GetTodos), new { id = todo.Id }, todo);
        }
    }
}
```
5. Update `Program.cs`. Note that the name passed to `GetConnectionString` must match the connection string name defined in `appsettings.json`:

```csharp
using Microsoft.EntityFrameworkCore;
using NeonEfExample.Data;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("TodoDbConnection")));
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

app.UseSwagger();
app.UseSwaggerUI();
app.UseAuthorization();
app.MapControllers();

if (app.Environment.IsDevelopment())
{
    app.Run("http://localhost:5001");
}
else
{
    app.UseHttpsRedirection();
    app.Run();
}
```

6. Create and apply the initial migration:

```bash
dotnet ef migrations add InitialCreate
dotnet ef database update
```

## Run the application

1. Start the application:

```bash
dotnet run
```

2. Test the connection by navigating to [`http://localhost:5001/swagger`](http://localhost:5001/swagger) in your browser. You can use the Swagger UI to create and retrieve Todo items.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Entity Framework and Neon](https://github.com/neondatabase/examples/tree/main/with-dotnet-entity-framework)

## Resources

- [.NET Documentation](https://learn.microsoft.com/en-us/dotnet/)
- [Entity Framework Core](https://learn.microsoft.com/en-us/ef/)

---

# Source: https://neon.com/llms/guides-dotnet-npgsql.txt

# Connect a .NET (C#) application to Neon Postgres

> The document outlines the steps for connecting a .NET (C#) application to a Neon database using the Npgsql library, detailing configuration and connection string setup specific to Neon's environment.

## Source

- [Connect a .NET (C#) application to Neon Postgres HTML](https://neon.com/docs/guides/dotnet-npgsql): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from a .NET (C#) application using [Npgsql](https://www.npgsql.org/), a .NET data provider for PostgreSQL. You'll build a console application that demonstrates how to connect to your Neon database and perform basic Create, Read, Update, and Delete (CRUD) operations.

**Note**: The same configuration steps can be used for any .NET application type, including ASP.NET Core Web API, MVC, Blazor, or Windows Forms applications.

## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup).
- The [.NET 8 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) or later.

> _Other versions of .NET may work, but this guide is primarily tested with .NET 8._

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech).
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your .NET application.

## Create a .NET project

For your .NET project, you will create a project directory and add the required packages using the `dotnet` CLI.

1. Create a new console application and change into the newly created directory.

```bash
dotnet new console -o NeonLibraryExample
cd NeonLibraryExample
```

> Open this directory in your preferred code editor (e.g., VS Code, Visual Studio).

2. Add the required NuGet packages using `dotnet add package`.

- `Npgsql`: The .NET data provider for PostgreSQL.
- `Microsoft.Extensions.Configuration.Json`: To read configuration from `appsettings.json`. - `Microsoft.Extensions.Configuration.Binder`: To bind configuration values to objects. ```bash dotnet add package Npgsql dotnet add package Microsoft.Extensions.Configuration.Json dotnet add package Microsoft.Extensions.Configuration.Binder ``` ## Store your Neon connection string Create a file named `appsettings.json` in your project's root directory. This is the standard .NET approach for storing configuration data like connection strings. 1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**. 2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal. 3. Select **.NET** as your connection method. 4. Copy the **pooled** connection string, which includes your password. 5. Create an `appsettings.json` file in your project's root directory and add the connection string to it as shown below. ```json { "ConnectionStrings": { "DefaultConnection": "Host=your-neon-host;Database=your-database;Username=your-username;Password=your-password;SSL Mode=VerifyFull; Channel Binding=Require" } } ``` > Replace `your-neon-host`, `your-database`, `your-username`, and `your-password` with the actual values from your Neon connection string. **Note**: To ensure the security of your data, never commit your credentials to version control. In a production application, consider using environment variables or a secure secrets management solution to store sensitive information like connection strings. ## Write the application code You will now write the C# code to connect to Neon and perform database operations. All the code will be in a single file named `Program.cs` which is the entry point of your console application. Replace the contents of your `Program.cs` file with the following code: ```csharp using Microsoft.Extensions.Configuration; using Npgsql; using System.Text; // --- 1. Read configuration and build connection string --- var config = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json") .Build(); var connectionString = config.GetConnectionString("DefaultConnection"); // --- 2. Establish connection and perform CRUD operations --- await using var conn = new NpgsqlConnection(connectionString); try { await conn.OpenAsync(); Console.WriteLine("Connection established"); // --- CREATE a table and INSERT data --- await using (var cmd = new NpgsqlCommand()) { cmd.Connection = conn; cmd.CommandText = "DROP TABLE IF EXISTS books;"; await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Finished dropping table (if it existed)."); cmd.CommandText = @" CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE );"; await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Finished creating table."); cmd.CommandText = "INSERT INTO books (title, author, publication_year, in_stock) VALUES (@t1, @a1, @y1, @s1);"; cmd.Parameters.AddWithValue("t1", "The Catcher in the Rye"); cmd.Parameters.AddWithValue("a1", "J.D. Salinger"); cmd.Parameters.AddWithValue("y1", 1951); cmd.Parameters.AddWithValue("s1", true); await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Inserted a single book."); cmd.Parameters.Clear(); var booksToInsert = new[] { new { Title = "The Hobbit", Author = "J.R.R. 
Tolkien", Year = 1937, InStock = true }, new { Title = "1984", Author = "George Orwell", Year = 1949, InStock = true }, new { Title = "Dune", Author = "Frank Herbert", Year = 1965, InStock = false } }; foreach (var book in booksToInsert) { cmd.CommandText = "INSERT INTO books (title, author, publication_year, in_stock) VALUES (@title, @author, @year, @in_stock);"; cmd.Parameters.AddWithValue("title", book.Title); cmd.Parameters.AddWithValue("author", book.Author); cmd.Parameters.AddWithValue("year", book.Year); cmd.Parameters.AddWithValue("in_stock", book.InStock); await cmd.ExecuteNonQueryAsync(); cmd.Parameters.Clear(); } Console.WriteLine("Inserted 3 rows of data."); } // --- READ the initial data --- await ReadDataAsync(conn, "Book Library"); // --- UPDATE data --- await using (var cmd = new NpgsqlCommand("UPDATE books SET in_stock = @in_stock WHERE title = @title;", conn)) { cmd.Parameters.AddWithValue("in_stock", true); cmd.Parameters.AddWithValue("title", "Dune"); await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Updated stock status for 'Dune'."); } // --- READ data after update --- await ReadDataAsync(conn, "Book Library After Update"); // --- DELETE data --- await using (var cmd = new NpgsqlCommand("DELETE FROM books WHERE title = @title;", conn)) { cmd.Parameters.AddWithValue("title", "1984"); await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Deleted the book '1984' from the table."); } // --- READ data after delete --- await ReadDataAsync(conn, "Book Library After Delete"); } catch (Exception e) { Console.WriteLine("Connection failed."); Console.WriteLine(e.Message); } // Helper function to read data and print it to the console async Task ReadDataAsync(NpgsqlConnection conn, string title) { Console.WriteLine($"\n--- {title} ---"); await using var cmd = new NpgsqlCommand("SELECT * FROM books ORDER BY publication_year;", conn); await using var reader = await cmd.ExecuteReaderAsync(); var books = new StringBuilder(); while (await reader.ReadAsync()) { books.AppendLine( $"ID: {reader.GetInt32(0)}, " + $"Title: {reader.GetString(1)}, " + $"Author: {reader.GetString(2)}, " + $"Year: {reader.GetInt32(3)}, " + $"In Stock: {reader.GetBoolean(4)}" ); } Console.WriteLine(books.ToString().TrimEnd()); Console.WriteLine("--------------------\n"); } ``` ## Examples This section walks through the code in `Program.cs`, explaining how each part performs a specific CRUD operation. ### Create a table and insert data This snippet connects to your database, creates a `books` table, and populates it with initial data. ```csharp await using var conn = new NpgsqlConnection(connectionString); await conn.OpenAsync(); Console.WriteLine("Connection established"); await using (var cmd = new NpgsqlCommand()) { cmd.Connection = conn; cmd.CommandText = "DROP TABLE IF EXISTS books;"; await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Finished dropping table (if it existed)."); cmd.CommandText = @" CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE );"; await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Finished creating table."); cmd.CommandText = "INSERT INTO books (title, author, publication_year, in_stock) VALUES (@t1, @a1, @y1, @s1);"; cmd.Parameters.AddWithValue("t1", "The Catcher in the Rye"); cmd.Parameters.AddWithValue("a1", "J.D. 
Salinger"); cmd.Parameters.AddWithValue("y1", 1951); cmd.Parameters.AddWithValue("s1", true); await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Inserted a single book."); cmd.Parameters.Clear(); var booksToInsert = new[] { new { Title = "The Hobbit", Author = "J.R.R. Tolkien", Year = 1937, InStock = true }, new { Title = "1984", Author = "George Orwell", Year = 1949, InStock = true }, new { Title = "Dune", Author = "Frank Herbert", Year = 1965, InStock = false } }; foreach (var book in booksToInsert) { cmd.CommandText = "INSERT INTO books (title, author, publication_year, in_stock) VALUES (@title, @author, @year, @in_stock);"; cmd.Parameters.AddWithValue("title", book.Title); cmd.Parameters.AddWithValue("author", book.Author); cmd.Parameters.AddWithValue("year", book.Year); cmd.Parameters.AddWithValue("in_stock", book.InStock); await cmd.ExecuteNonQueryAsync(); cmd.Parameters.Clear(); } Console.WriteLine("Inserted 3 rows of data."); } ``` In the code above, you: - Open a connection to your Neon database asynchronously. The `await using` statement ensures the connection is properly closed and disposed of. - Drop the `books` table if it exists to ensure a clean start. - Create a new `books` table with columns for book details. - Insert a single book record using a parameterized query to prevent SQL injection. - Insert three more books by looping through a collection. When this code runs successfully, it produces the following output: ```text Connection established Finished dropping table (if it existed). Finished creating table. Inserted a single book. Inserted 3 rows of data. ``` ### Read data This snippet calls a helper function, `ReadDataAsync`, to retrieve and display all the books currently in the table. ```csharp // The helper function definition async Task ReadDataAsync(NpgsqlConnection conn, string title) { Console.WriteLine($"\n--- {title} ---"); await using var cmd = new NpgsqlCommand("SELECT * FROM books ORDER BY publication_year;", conn); await using var reader = await cmd.ExecuteReaderAsync(); var books = new StringBuilder(); while (await reader.ReadAsync()) { books.AppendLine( $"ID: {reader.GetInt32(0)}, " + $"Title: {reader.GetString(1)}, " + $"Author: {reader.GetString(2)}, " + $"Year: {reader.GetInt32(3)}, " + $"In Stock: {reader.GetBoolean(4)}" ); } Console.WriteLine(books.ToString().TrimEnd()); Console.WriteLine("--------------------\n"); } // How the function is called await ReadDataAsync(conn, "Book Library"); ``` In the code above, you: - Execute a SQL `SELECT` statement to fetch all rows from the `books` table, ordered by publication year. - Use an `NpgsqlDataReader` to iterate through the result set row by row. - Read the column values for each row and format them into a string for display. After the initial data insert, the output is: ```text --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: True ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: True ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: True ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: False -------------------- ``` ### Update data This snippet updates the stock status for the book 'Dune' from `false` to `true`. 
```csharp await using (var cmd = new NpgsqlCommand("UPDATE books SET in_stock = @in_stock WHERE title = @title;", conn)) { cmd.Parameters.AddWithValue("in_stock", true); cmd.Parameters.AddWithValue("title", "Dune"); await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Updated stock status for 'Dune'."); } // Calling ReadDataAsync again to see the result await ReadDataAsync(conn, "Book Library After Update"); ``` In the code above, you: - Execute a SQL `UPDATE` statement with parameters to identify the row to update (`WHERE title = @title`) and the new value (`SET in_stock = @in_stock`). - Call `ReadDataAsync` again to show that the change was successful. The output from this operation is: ```text Updated stock status for 'Dune'. --- Book Library After Update --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: True ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: True ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: True ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: True -------------------- ``` > You can see that the stock status for 'Dune' has been updated to `True`. ### Delete data This final snippet removes the book '1984' from the `books` table. ```csharp await using (var cmd = new NpgsqlCommand("DELETE FROM books WHERE title = @title;", conn)) { cmd.Parameters.AddWithValue("title", "1984"); await cmd.ExecuteNonQueryAsync(); Console.WriteLine("Deleted the book '1984' from the table."); } // Calling ReadDataAsync one last time await ReadDataAsync(conn, "Book Library After Delete"); ``` In the code above, you: - Execute a SQL `DELETE` statement with a `WHERE` clause to target the specific book for removal. - Call `ReadDataAsync` a final time to verify that the row was deleted. The output from this operation is: ```text Deleted the book '1984' from the table. --- Book Library After Delete --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: True ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: True ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: True -------------------- ``` > You can see that the book '1984' has been successfully removed from the table. ## Run the application To run the entire script, execute the following command from your project directory: ```bash dotnet run ``` This command would compile and execute your application, connecting to the Neon database and performing all the CRUD operations defined in `Program.cs` as described above. You should see output in your console similar to the examples provided in the previous sections, indicating the success of each operation. ## Next steps: Using an ORM or framework While this guide demonstrates how to connect to Neon using raw SQL queries, for more advanced and maintainable data interactions in your .NET applications, consider using an Object-Relational Mapping (ORM) framework. ORMs not only let you work with data as objects but also help manage schema changes through automated migrations keeping your database structure in sync with your application models. Explore the following resources to learn how to integrate ORMs with Neon: - [Connect an Entity Framework application to Neon](https://neon.com/docs/guides/dotnet-entity-framework) ## Source code You can find the source code for the application described in this guide on GitHub. 
- [Get started with .NET (C#) and Neon](https://github.com/neondatabase/examples/tree/main/with-dotnet-npgsql/NeonLibraryExample)

## Resources

- [Npgsql Documentation](https://www.npgsql.org/doc/index.html)
- [.NET Documentation](https://learn.microsoft.com/en-us/dotnet/)
- [Connect an Entity Framework application to Neon](https://neon.com/docs/guides/dotnet-entity-framework)

---

# Source: https://neon.com/llms/guides-drizzle-migrations.txt

# Schema migration with Neon Postgres and Drizzle ORM

> The document outlines the process of performing schema migrations using Neon Postgres and Drizzle ORM, detailing the steps for setting up and executing migrations within this specific environment.

## Source

- [Schema migration with Neon Postgres and Drizzle ORM HTML](https://neon.com/docs/guides/drizzle-migrations): The original HTML version of this documentation

[Drizzle](https://orm.drizzle.team/) is a TypeScript-first ORM that connects to all major databases and works across most JavaScript runtimes. It provides a simple way to define database schemas and queries in an SQL-like dialect, along with tools to generate and run migrations.

This guide shows how to use `Drizzle` with the `Neon` Postgres database in a TypeScript project. We'll create a simple Node.js application with `Hono.js` and demonstrate the full workflow of setting up and working with your database using `Drizzle`.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select a project or click the `New Project` button to create a new one.

### Retrieve your Neon database connection string

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

**Note**: Neon supports both direct and pooled database connection strings, which you can find by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can lead to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations. For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

## Setting up the TypeScript application

### Create a new Hono.js project

We'll create a simple catalog, with API endpoints that query the database for authors and a list of their books. Run the following command in your terminal to set up a new project using `Hono.js`:

```bash
npm create hono@latest neon-drizzle-guide
```

This initiates an interactive CLI prompt to set up a new project.
To follow along with this guide, you can use the following settings:

```bash
Need to install the following packages:
create-hono@0.9.0
Ok to proceed? (y) y
create-hono version 0.9.0
✔ Using target directory … neon-drizzle-guide
✔ Which template do you want to use? › nodejs
cloned honojs/starter#main to ./repos/javascript/neon-drizzle-guide
✔ Do you want to install project dependencies? … yes
✔ Which package manager do you want to use? › npm
```

To use Drizzle and connect to the Neon database, we also add the `drizzle-orm` and `drizzle-kit` packages to our project, along with the `Neon serverless` driver library.

```bash
cd neon-drizzle-guide && touch .env
npm install drizzle-orm @neondatabase/serverless
npm install -D drizzle-kit dotenv
```

Add the `DATABASE_URL` environment variable to your `.env` file, which you'll use to connect to your Neon database. Use the connection string that you obtained from the Neon Console earlier:

```bash
# .env
DATABASE_URL=NEON_DATABASE_CONNECTION_STRING
```

Test that the starter `Hono.js` application works by running `npm run dev` in the terminal. You should see the `Hello, Hono!` message when you navigate to `http://localhost:3000` in your browser.

### Set up the database schema

Now, we will define the schema for the application using the `Drizzle` ORM. Create a new `schema.ts` file in your `src` directory and add the following code:

```typescript
// src/schema.ts
import { pgTable, integer, serial, text, timestamp } from 'drizzle-orm/pg-core';

export const authors = pgTable('authors', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  bio: text('bio'),
  createdAt: timestamp('created_at').notNull().defaultNow(),
});

export const books = pgTable('books', {
  id: serial('id').primaryKey(),
  title: text('title').notNull(),
  authorId: integer('author_id').references(() => authors.id),
  createdAt: timestamp('created_at').notNull().defaultNow(),
});
```

The code defines two tables: `authors`, which will contain the list of all the authors, and `books`, which will contain the list of books written by the authors. Each book is associated with an author using the `authorId` field.

To generate a migration to create these tables in the database, we'll use the `drizzle-kit` command. Add the following script to the `package.json` file at the root of your project:

```json
{
  "scripts": {
    "db:generate": "drizzle-kit generate --dialect=postgresql --schema=src/schema.ts --out=./drizzle"
  }
}
```

Then, run the following command in your terminal to generate the migration files:

```bash
npm run db:generate
```

This command generates a new folder named `drizzle` containing the migration files for the `authors` and `books` tables.
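For reference, the generated SQL for the `authors` table should look roughly like the following. This is an illustrative sketch; the file name suffix is randomly generated and the exact output varies by `drizzle-kit` version:

```sql
-- drizzle/0000_<random_name>.sql (illustrative)
CREATE TABLE "authors" (
	"id" serial PRIMARY KEY NOT NULL,
	"name" text NOT NULL,
	"bio" text,
	"created_at" timestamp DEFAULT now() NOT NULL
);
```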
### Run the migration

The generated migration file is written in SQL and contains the necessary commands to create the tables in the database. To apply these migrations, we'll use the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) and helper functions provided by the `drizzle-orm` library.

Create a new `migrate.ts` file in your `src` directory and add the following code:

```typescript
// src/migrate.ts
import { drizzle } from 'drizzle-orm/neon-http';
import { neon } from '@neondatabase/serverless';
import { migrate } from 'drizzle-orm/neon-http/migrator';
import { config } from 'dotenv';

config({ path: '.env' });

const sql = neon(process.env.DATABASE_URL!);
const db = drizzle(sql);

const main = async () => {
  try {
    await migrate(db, { migrationsFolder: 'drizzle' });
    console.log('Migration completed');
  } catch (error) {
    console.error('Error during migration:', error);
    process.exit(1);
  }
};

main();
```

The `drizzle-orm` package comes with an integration for `Neon`, which allows us to run the migrations using the `migrate` function.

Add a new script to the `package.json` file that executes the migration.

```json
{
  "scripts": {
    "db:migrate": "tsx ./src/migrate.ts"
  }
}
```

You can now run the migration script using the following command:

```bash
npm run db:migrate
```

You should see the `Migration completed` message in the terminal, indicating that the migration was successful.

### Seed the database

To test that the application works, we need to add some example data to our tables. Create a new file at `src/seed.ts` and add the following code to it:

```typescript
// src/seed.ts
import { drizzle } from 'drizzle-orm/neon-http';
import { neon } from '@neondatabase/serverless';
import { authors, books } from './schema';
import { config } from 'dotenv';

config({ path: '.env' });

const sql = neon(process.env.DATABASE_URL!);
const db = drizzle(sql);

async function seed() {
  await db.insert(authors).values([
    {
      name: 'J.R.R. Tolkien',
      bio: 'The creator of Middle-earth and author of The Lord of the Rings.',
    },
    {
      name: 'George R.R. Martin',
      bio: 'The author of the epic fantasy series A Song of Ice and Fire.',
    },
    {
      name: 'J.K. Rowling',
      bio: 'The creator of the Harry Potter series.',
    },
  ]);

  const authorRows = await db.select().from(authors);
  const authorIds = authorRows.map((row) => row.id);

  await db.insert(books).values([
    { title: 'The Fellowship of the Ring', authorId: authorIds[0] },
    { title: 'The Two Towers', authorId: authorIds[0] },
    { title: 'The Return of the King', authorId: authorIds[0] },
    { title: 'A Game of Thrones', authorId: authorIds[1] },
    { title: 'A Clash of Kings', authorId: authorIds[1] },
    { title: "Harry Potter and the Philosopher's Stone", authorId: authorIds[2] },
    { title: 'Harry Potter and the Chamber of Secrets', authorId: authorIds[2] },
  ]);
}

async function main() {
  try {
    await seed();
    console.log('Seeding completed');
  } catch (error) {
    console.error('Error during seeding:', error);
    process.exit(1);
  }
}

main();
```

This script inserts some seed data into the `authors` and `books` tables.

Add a new script to the `package.json` file that runs the seeding program.

```json
{
  "scripts": {
    "db:seed": "tsx ./src/seed.ts"
  }
}
```

Run the seed script using the following command:

```bash
npm run db:seed
```

You should see the `Seeding completed` message in the terminal, indicating that the seed data was inserted into the database.

### Implement the API endpoints

Now that the database is set up and populated with data, we can implement the API to query the authors and their books.
Replace the existing `src/index.ts` file with the following code:

```typescript
// src/index.ts
import { serve } from '@hono/node-server';
import { Hono } from 'hono';
import { env } from 'hono/adapter';
import { config } from 'dotenv';
import { eq } from 'drizzle-orm';
import { drizzle } from 'drizzle-orm/neon-http';
import { neon } from '@neondatabase/serverless';
import { authors, books } from './schema';

config({ path: '.env' });

const app = new Hono();

app.get('/', (c) => {
  return c.text('Hello, this is a catalog of books!');
});

app.get('/authors', async (c) => {
  const { DATABASE_URL } = env<{ DATABASE_URL: string }>(c);
  const sql = neon(DATABASE_URL);
  const db = drizzle(sql);
  const output = await db.select().from(authors);
  return c.json(output);
});

app.get('/books/:authorId', async (c) => {
  const { DATABASE_URL } = env<{ DATABASE_URL: string }>(c);
  const sql = neon(DATABASE_URL);
  const db = drizzle(sql);
  const authorId = c.req.param('authorId');
  const output = await db
    .select()
    .from(books)
    .where(eq(books.authorId, Number(authorId)));
  return c.json(output);
});

const port = 3000;
console.log(`Server is running on port ${port}`);

serve({
  fetch: app.fetch,
  port,
});
```

This code sets up a simple API with two endpoints: `/authors` and `/books/:authorId`. The `/authors` endpoint returns a list of all the authors, and the `/books/:authorId` endpoint returns a list of books written by the specific author with the given `authorId`.

Run the application using the following command:

```bash
npm run dev
```

This will start a `Hono.js` server at `http://localhost:3000`. Navigate to `http://localhost:3000/authors` and `http://localhost:3000/books/1` in your browser to check that the API works as expected.
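For example, querying the `/authors` endpoint with `curl` should return the seeded rows as JSON, similar to the following (IDs and timestamps will vary):

```bash
curl http://localhost:3000/authors
# [
#   { "id": 1, "name": "J.R.R. Tolkien", "bio": "The creator of Middle-earth and author of The Lord of the Rings.", "createdAt": "2024-01-01T00:00:00.000Z" },
#   ...
# ]
```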
## Migration after a schema change

To demonstrate how to execute a schema change, we'll add a new column to the `authors` table, listing the country of origin for each author.

### Generate the new migration

Modify the code in the `src/schema.ts` file to add the new column to the `authors` table:

```typescript
// src/schema.ts
import { pgTable, integer, serial, text, timestamp } from 'drizzle-orm/pg-core';

export const authors = pgTable('authors', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  bio: text('bio'),
  country: text('country'),
  createdAt: timestamp('created_at').notNull().defaultNow(),
});

export const books = pgTable('books', {
  id: serial('id').primaryKey(),
  title: text('title').notNull(),
  authorId: integer('author_id').references(() => authors.id),
  createdAt: timestamp('created_at').notNull().defaultNow(),
});
```

Now, we can run the following command to generate a new migration file:

```bash
npm run db:generate
```

This command generates a new migration file in the `drizzle` folder, with the SQL command to add the new column to the `authors` table.

### Run the migration

Run the migration script using the following command:

```bash
npm run db:migrate
```

You should see the `Migration completed` message in the terminal, indicating it was successful.

### Verify the schema change

To verify that the schema change was successful, run the application using the following command:

```bash
npm run dev
```

You can navigate to `http://localhost:3000/authors` in your browser to check that each author entry has a `country` field, currently set to `null`.

## Conclusion

In this guide, we set up a new TypeScript project using `Hono.js` and `Drizzle` ORM and connected it to a `Neon` Postgres database. We created a schema for the database, generated and ran migrations, and implemented API endpoints to query the database.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Migrations with Neon and Drizzle](https://github.com/neondatabase/guide-neon-drizzle): Run Neon database migrations using Drizzle

## Resources

For more information on the tools used in this guide, refer to the following resources:

- [Drizzle ORM](https://orm.drizzle.team/)
- [Hono.js](https://hono.dev/)

---

# Source: https://neon.com/llms/guides-drizzle.txt

# Connect from Drizzle to Neon

> The document outlines the steps required to establish a connection between Drizzle, a lightweight TypeScript ORM, and Neon, a serverless PostgreSQL database platform, detailing configuration and integration processes specific to Neon's environment.

## Source

- [Connect from Drizzle to Neon HTML](https://neon.com/docs/guides/drizzle): The original HTML version of this documentation

What you will learn:

- How to connect from Drizzle using different drivers
- How to configure Drizzle Kit for migrations

Related resources:

- [Drizzle with Neon Postgres (Drizzle Docs)](https://orm.drizzle.team/docs/tutorials/drizzle-with-neon)
- [Schema migration with Drizzle ORM](https://neon.com/docs/guides/drizzle-migrations)

Source code:

- [Next.js Edge Functions with Drizzle](https://github.com/neondatabase/examples/tree/main/with-nextjs-drizzle-edge)

Drizzle is a modern ORM for TypeScript that provides a simple and type-safe way to interact with your database. This guide demonstrates how to connect your application to a Neon Postgres database using Drizzle ORM.

**Tip** AI Rules available: Working with AI coding assistants? Check out our [AI rules for Drizzle ORM with Neon](https://neon.com/docs/ai/ai-rules-neon-drizzle) to help your AI assistant generate better code when using Drizzle with your Neon database.

To connect a TypeScript/Node.js project to Neon using Drizzle ORM, follow these steps:

## Create a TypeScript/Node.js project

Create a new directory for your project and navigate into it:

```bash
mkdir my-drizzle-neon-project
cd my-drizzle-neon-project
```

Initialize a new Node.js project with a `package.json` file:

```bash
npm init -y
```

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Get your connection string

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a user, and the database you want to connect to. A connection string is constructed for you. The connection string includes the user name, password, hostname, and database name.

Create a `.env` file in your project's root directory and add the connection string to it. Your `.env` file should look like this:

```text
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Install Drizzle and a driver

Install Drizzle ORM, Drizzle Kit for migrations, and your preferred database driver. Choose one of the following drivers based on your application's needs:

Tab: Neon Serverless (HTTP)

Use the Neon serverless HTTP driver for serverless environments (e.g., Vercel, Netlify).
```bash
npm install drizzle-orm @neondatabase/serverless dotenv
npm install -D drizzle-kit
```

Tab: Neon WebSocket

Use the Neon WebSocket driver for long-running applications that require a persistent connection (e.g., a standard Node.js server).

```bash
npm install drizzle-orm @neondatabase/serverless ws dotenv
npm install -D drizzle-kit @types/ws
```

Tab: node-postgres

Use the classic `node-postgres` (`pg`) driver, a widely used and stable choice for Node.js applications.

```bash
npm install drizzle-orm pg dotenv
npm install -D drizzle-kit @types/pg
```

Tab: postgres.js

Use the `postgres.js` driver, a modern and lightweight Postgres client for Node.js.

```bash
npm install drizzle-orm postgres dotenv
npm install -D drizzle-kit
```

## Configure Drizzle Kit

Drizzle Kit uses a configuration file to manage schema and migrations. Create a `drizzle.config.ts` file in your project root and add the following content. This configuration tells Drizzle where to find your schema and where to output migration files.

```typescript
import 'dotenv/config';
import { defineConfig } from 'drizzle-kit';

if (!process.env.DATABASE_URL) {
  throw new Error('DATABASE_URL is not set in the .env file');
}

export default defineConfig({
  schema: './src/schema.ts', // Your schema file path
  out: './drizzle', // Your migrations folder
  dialect: 'postgresql',
  dbCredentials: {
    url: process.env.DATABASE_URL,
  },
});
```

## Initialize the Drizzle client

Create a file `src/db.ts` to initialize and export your Drizzle client. The setup varies depending on the driver you installed.

Tab: Neon Serverless (HTTP)

```typescript
import 'dotenv/config';
import { drizzle } from 'drizzle-orm/neon-http';
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle(sql);
```

Tab: Neon WebSocket

```typescript
import 'dotenv/config';
import { drizzle } from 'drizzle-orm/neon-serverless';
import { Pool, neonConfig } from '@neondatabase/serverless';
import ws from 'ws';

// For Node.js environments older than v22, you must provide a WebSocket constructor
neonConfig.webSocketConstructor = ws;

// To work in edge environments (Cloudflare Workers, Vercel Edge, etc.), enable querying over fetch
// neonConfig.poolQueryViaFetch = true

const pool = new Pool({ connectionString: process.env.DATABASE_URL! });
export const db = drizzle(pool);
```

Tab: node-postgres

```typescript
import 'dotenv/config';
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL!,
});
export const db = drizzle(pool);
```

Tab: postgres.js

```typescript
import 'dotenv/config';
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const client = postgres(process.env.DATABASE_URL!);
export const db = drizzle(client);
```

## Create a schema

Drizzle uses a schema-first approach, allowing you to define your database schema using TypeScript. This schema will be used to generate migrations and ensure type safety throughout your application. The following example defines a schema for a simple `demo_users` table.

Create a `src/schema.ts` file and add the following content:

```typescript
import { pgTable, serial, text } from 'drizzle-orm/pg-core';

export const demoUsers = pgTable('demo_users', {
  id: serial('id').primaryKey(),
  name: text('name'),
});
```
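A side benefit of defining the schema in TypeScript is that Drizzle can derive row types directly from the table definition, so query results are fully typed. An optional addition to `src/schema.ts` (the type names here are our own):

```typescript
// Row types inferred from the table definition.
export type DemoUser = typeof demoUsers.$inferSelect; // { id: number; name: string | null }
export type NewDemoUser = typeof demoUsers.$inferInsert; // { id?: number; name?: string | null }
```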
## Generate migrations

After defining your schema, you can generate migration files with Drizzle Kit. This will create the necessary SQL files to set up your database schema in Neon.

```bash
npx drizzle-kit generate
```

You should see output similar to the following, indicating that migration files have been created:

```bash
$ npx drizzle-kit generate
No config path provided, using default 'drizzle.config.ts'
Reading config file '/home/user/drizzle/drizzle.config.ts'
1 tables
demo_users 2 columns 0 indexes 0 fks

[✓] Your SQL migration file ➜ drizzle/0000_clever_purple_man.sql 🚀
```

You can find the generated SQL migration files in the `drizzle` directory specified in your `drizzle.config.ts`.

## Apply migrations

Apply the generated migrations (SQL files) to your Neon database using Drizzle Kit. This command will use the `drizzle.config.ts` file for database connection details and apply the migrations to your Neon database.

```bash
npx drizzle-kit migrate
```

You should see output similar to the following, indicating that the migrations have been applied successfully:

```bash
$ npx drizzle-kit migrate
No config path provided, using default 'drizzle.config.ts'
Reading config file '/home/user/drizzle/drizzle.config.ts'
Using 'pg' driver for database querying
```

You can verify that the `demo_users` table has been created in your Neon database by checking the **Tables** section in the Neon Console.

## Query the database

Create a file `src/index.ts` to interact with your database using the Drizzle client. Here's an example of inserting a new user and querying all users from the `demo_users` table:

Tab: Neon Serverless (HTTP)

```typescript
import { db } from './db';
import { demoUsers } from './schema';

async function main() {
  try {
    await db.insert(demoUsers).values({ name: 'John Doe' });
    const result = await db.select().from(demoUsers);
    console.log('Successfully queried the database:', result);
  } catch (error) {
    console.error('Error querying the database:', error);
  }
}

main();
```

Tab: Neon WebSocket / node-postgres / postgres.js

```typescript
import { db } from './db';
import { demoUsers } from './schema';

async function main() {
  try {
    await db.insert(demoUsers).values({ name: 'John Doe' });
    const result = await db.select().from(demoUsers);
    console.log('Successfully queried the database:', result);
  } catch (error) {
    console.error('Error querying the database:', error);
  } finally {
    // Close the database connection to ensure proper shutdown for Neon WebSocket, node-postgres, and postgres.js drivers
    await db.$client.end();
  }
}

main();
```

Run the script using `tsx`:

```bash
npx tsx src/index.ts
```

You should see output similar to the following, indicating that the user was inserted and queried successfully:

```bash
Successfully queried the database: [ { id: 1, name: 'John Doe' } ]
```

## Resources

- [Get Started with Drizzle and Neon](https://orm.drizzle.team/docs/get-started/neon-new)
- [Drizzle with Neon Postgres](https://orm.drizzle.team/docs/tutorials/drizzle-with-neon)
- [Schema migration with Neon Postgres and Drizzle ORM](https://neon.com/docs/guides/drizzle-migrations)
- [Todo App with Neon Postgres and Drizzle ORM](https://orm.drizzle.team/docs/tutorials/drizzle-nextjs-neon)

---

# Source: https://neon.com/llms/guides-elixir-ecto.txt

# Connect from Elixir with Ecto to Neon

> The document outlines the steps for connecting an Elixir application to a Neon database using the Ecto library, detailing configuration settings and necessary code snippets for seamless integration.
## Source

- [Connect from Elixir with Ecto to Neon HTML](https://neon.com/docs/guides/elixir-ecto): The original HTML version of this documentation

This guide describes how to connect from an Elixir application with Ecto, which is a database wrapper and query generator for Elixir. Ecto provides an API and abstractions for interacting with databases, enabling Elixir developers to query any database using similar constructs.

The instructions in this guide follow the steps outlined in the [Ecto Getting Started](https://hexdocs.pm/ecto/getting-started.html#content) guide, modified to demonstrate connecting to a Neon Serverless Postgres database. It is assumed that you have a working installation of [Elixir](https://elixir-lang.org/install.html).

To connect to Neon from Elixir with Ecto:

## Create a database in Neon and copy the connection string

The instructions in this configuration use a database named `friends`. To create the database:

1. Navigate to the [Neon Console](https://console.neon.tech).
1. Select a project.
1. Select **Databases**.
1. Select the branch where you want to create the database.
1. Click **New Database**.
1. Enter a database name (`friends`), and select a database owner.
1. Click **Create**.

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a role, and the database you want to connect to. A connection string is constructed for you. Your connection string should look something like this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-west-2.aws.neon.tech/friends?sslmode=require&channel_binding=require
```

You will need the connection string details later in the setup.

## Create an Elixir project

Create an Elixir application called `friends`.

```bash
mix new friends --sup
```

The `--sup` option ensures that the application has a supervision tree, which is required by Ecto.

## Add Ecto and Postgrex to the application

1. Add the Ecto and the Postgrex driver dependencies to the `mix.exs` file by updating the `deps` definition in the file to include those items. For example:

   ```elixir
   defp deps do
     [
       {:ecto_sql, "~> 3.0"},
       {:postgrex, ">= 0.18.0"}
     ]
   end
   ```

   Ecto provides the common querying API. The Postgrex driver acts as a bridge between Ecto and Postgres. Ecto interfaces with its own `Ecto.Adapters.Postgres` module, which communicates to Postgres through the Postgrex driver.

2. Install the Ecto and the Postgrex driver dependencies by running the following command in your application directory:

   ```bash
   mix deps.get
   ```

## Configure Ecto

Run the following command in your application directory to generate the configuration required to connect from Ecto to your Neon database.

```bash
mix ecto.gen.repo -r Friends.Repo
```

Follow these steps to complete the configuration:

1. The first part of the configuration generated by the `mix ecto.gen.repo` command is found in the `config/config.exs` file. Update this configuration with your Neon database connection details. Use the connection details from the Neon connection string you copied in the first part of the guide. Your `hostname` will differ from the example below.

   ```elixir
   config :friends, Friends.Repo,
     database: "friends",
     username: "alex",
     password: "AbC123dEf",
     hostname: "ep-cool-darkness-123456.us-west-2.aws.neon.tech",
     ssl: [cacerts: :public_key.cacerts_get()]
   ```

   The `:ssl` option is required to connect to Neon.
   Postgrex, since v0.18, verifies the server SSL certificate, and you need to select a CA trust store using the `:cacerts` or `:cacertfile` option. You can use the OS-provided CA store by setting `cacerts: :public_key.cacerts_get()`. While not recommended, you can disable certificate verification by setting `ssl: [verify: :verify_none]`.

   **Note**: Postgrex has an `:idle_interval` connection parameter that defines an interval for pinging connections after a period of inactivity. The default setting is `1000ms`. If you rely on Neon's [autosuspend](https://neon.com/docs/introduction/auto-suspend) feature to scale your compute to zero when your database is not active, this setting will prevent that. For more, see [Postgrex: DBConnection ConnectionError ssl send: closed](https://neon.com/docs/connect/connection-errors#postgrex-dbconnection-connectionerror-ssl-send-closed).

2. The second part of the configuration generated by the `mix ecto.gen.repo` command is the `Ecto.Repo` module, found in `lib/friends/repo.ex`. You shouldn't have to make any changes here, but verify that the following configuration is present:

   ```elixir
   defmodule Friends.Repo do
     use Ecto.Repo,
       otp_app: :friends,
       adapter: Ecto.Adapters.Postgres
   end
   ```

   Ecto uses the module definition to query the database. The `otp_app` setting tells Ecto where to find the database configuration. In this case, the `:friends` application is specified, so Ecto will use the configuration defined in that application's `config/config.exs` file. The `:adapter` option defines the Postgres adapter.

3. Next, the `Friends.Repo` must be defined as a supervisor within the application's supervision tree. In `lib/friends/application.ex`, make sure `Friends.Repo` is specified in the `start` function, as shown:

   ```elixir
   def start(_type, _args) do
     children = [
       Friends.Repo,
     ]
   ```

   This configuration starts the Ecto process, enabling it to receive and execute the application's queries.

4. The final part of the configuration is to add the following line under the configuration in the `config/config.exs` file that you updated in the first step:

   ```elixir
   config :friends, ecto_repos: [Friends.Repo]
   ```

   This line tells the application about the new repo, allowing you to run commands such as `mix ecto.migrate`, which you will use in a later step to create a table in your database.

## Create a migration and add a table

Your `friends` database is currently empty. It has no tables or data. In this step, you will add a table. To do so, you will create a "migration" by running the following command in your application directory:

```bash
mix ecto.gen.migration create_people
```

The command generates an empty migration file in `priv/repo/migrations`, which looks like this:

```elixir
defmodule Friends.Repo.Migrations.CreatePeople do
  use Ecto.Migration

  def change do
  end
end
```

Add code to the migration file to create a table called `people`.
For example:

```elixir
defmodule Friends.Repo.Migrations.CreatePeople do
  use Ecto.Migration

  def change do
    create table(:people) do
      add :first_name, :string
      add :last_name, :string
      add :age, :integer
    end
  end
end
```

To run the migration and create the `people` table in your database, which also verifies your connection to Neon, run the following command from your application directory:

```bash
mix ecto.migrate
```

The output of this command should appear similar to the following:

```bash
14:30:04.924 [info] == Running 20230524172817 Friends.Repo.Migrations.CreatePeople.change/0 forward
14:30:04.925 [info] create table people
14:30:05.014 [info] == Migrated 20230524172817 in 0.0s
```

You can use the **Tables** feature in the Neon Console to view the table that was created:

1. Navigate to the [Neon Console](https://console.neon.tech).
1. Select a project.
1. Select **Tables** from the sidebar.
1. Select the Branch, Database (`friends`), and the schema (`public`).

You should see the `people` table along with a `schema_migrations` table that was created by the migration.

## Application code

You can find the application code for the example above on GitHub.

- [Neon Ecto Getting Started App](https://github.com/neondatabase/neon-ecto-getting-started-app): Learn how to connect from Elixir with Ecto to Neon

## Next steps

The [Ecto Getting Started Guide](https://hexdocs.pm/ecto/getting-started.html#content) provides additional steps that you can follow to create a schema, insert data, and run queries. See [Creating the schema](https://hexdocs.pm/ecto/getting-started.html#creating-the-schema) in the _Ecto Getting Started Guide_ to pick up where the steps in this guide leave off.

## Usage notes

- If you have a `PGHOST` environment variable set on your system to something other than your Neon hostname, that hostname will be used instead of the Neon `hostname` defined in your Ecto Repo configuration when running `mix ecto` commands. To avoid this issue, you can either set the `PGHOST` environment variable to your Neon hostname or specify `PGHOST=""` when running `mix ecto` commands; for example: `PGHOST="" mix ecto.migrate`.
- Neon's _Scale to Zero_ feature scales computes to zero after 300 seconds (5 minutes) of inactivity, which can result in a `connection not available` error when running `mix ecto` commands. Typically, a Neon compute takes a few hundred milliseconds to transition from `Idle` to `Active`. Wait a second or two and try running the command again. Alternatively, consider the strategies outlined in [Connection latency and timeouts](https://neon.com/docs/connect/connection-latency) to manage connection issues resulting from compute suspension.

---

# Source: https://neon.com/llms/guides-elixir.txt

# Connect an Elixir application to Neon Postgres

> The document outlines the steps to connect an Elixir application to a Neon Postgres database, detailing the necessary configurations and code implementations required for successful integration.

## Source

- [Connect an Elixir application to Neon Postgres HTML](https://neon.com/docs/guides/elixir): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from an Elixir application using [Postgrex](https://hex.pm/packages/postgrex), a high-performance, concurrent, and robust PostgreSQL driver for Elixir. You'll learn how to connect to your Neon database from an Elixir application, and perform basic Create, Read, Update, and Delete (CRUD) operations.
## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup).
- Elixir v1.12 or later. If you do not have Elixir installed, see the [official installation guide](https://elixir-lang.org/install.html).

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech).
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your Elixir application.

## Create an Elixir project

For your Elixir project, create a project directory using `mix` and add the required library.

1. Create a new supervised Elixir project and change into the directory.

   ```bash
   mix new neon_elixir_quickstart --sup
   cd neon_elixir_quickstart
   ```

   > Open this directory in your preferred code editor (e.g., VS Code).

2. Add `postgrex` as a dependency in your `mix.exs` file. Find the `deps` function and add `{:postgrex, "~> 0.18.0"}`:

   ```elixir
   defp deps do
     [
       {:postgrex, "~> 0.18.0"}
     ]
   end
   ```

3. Install the dependency from your terminal:

   ```bash
   mix deps.get
   ```

## Configure your Neon connection details

You'll configure your application to connect to Neon using the `config/config.exs` file. This method securely separates your credentials from your source code.

1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**.
2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal.
3. Select the **Parameters only** tab to view the connection string parameters.
4. Copy the connection string parameters (user, password, host, and database name).
5. Create or open the `config/config.exs` file and add a configuration block for your project, replacing the placeholder values with your actual database credentials.

   ```elixir
   import Config

   config :neon_elixir_quickstart,
     username: "[user]",
     password: "[password]",
     hostname: "[neon_hostname]",
     database: "[dbname]",
     ssl: [cacerts: :public_key.cacerts_get()]
   ```

   > - The `:ssl` option is required to connect securely to Neon. Using `:public_key.cacerts_get()` tells Postgrex to use the OS-provided CA trust store to verify the server's SSL certificate.
   > - The `:neon_elixir_quickstart` key matches your application's name, allowing you to fetch this configuration from your code.
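Before moving on, you can sanity-check the configuration with a one-off query from the command line. A minimal sketch, reusing the same `Application.get_all_env/1` pattern the example scripts below rely on:

```bash
mix run -e '{:ok, pid} = Postgrex.start_link(Application.get_all_env(:neon_elixir_quickstart))
IO.inspect(Postgrex.query!(pid, "SELECT version();", []).rows)'
```

If the configuration is correct, this prints the Postgres server version reported by your Neon database.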
## Examples

This section provides example Elixir scripts that demonstrate how to connect to your Neon database and perform basic operations such as [creating a table](https://neon.com/docs/guides/elixir#create-a-table-and-insert-data), [reading data](https://neon.com/docs/guides/elixir#read-data), [updating data](https://neon.com/docs/guides/elixir#update-data), and [deleting data](https://neon.com/docs/guides/elixir#deleting-data).

### Create a table and insert data

In your project's root directory, create a file named `create_table.exs`. This script connects to your Neon database, creates a `books` table, and inserts sample data.

```elixir
defmodule CreateTable do
  def run do
    # Fetch connection config
    config = Application.get_all_env(:neon_elixir_quickstart)

    # Start a connection to the database
    {:ok, pid} = Postgrex.start_link(config)
    IO.puts("Connection established")

    try do
      # Drop the table if it already exists
      Postgrex.query!(pid, "DROP TABLE IF EXISTS books;", [])
      IO.puts("Finished dropping table (if it existed).")

      # Create a new table
      Postgrex.query!(pid, """
      CREATE TABLE books (
          id SERIAL PRIMARY KEY,
          title VARCHAR(255) NOT NULL,
          author VARCHAR(255),
          publication_year INT,
          in_stock BOOLEAN DEFAULT TRUE
      );
      """, [])
      IO.puts("Finished creating table.")

      # Insert a single book record
      Postgrex.query!(
        pid,
        "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);",
        ["The Catcher in the Rye", "J.D. Salinger", 1951, true]
      )
      IO.puts("Inserted a single book.")

      # Data to be inserted
      books_to_insert = [
        {"The Hobbit", "J.R.R. Tolkien", 1937, true},
        {"1984", "George Orwell", 1949, true},
        {"Dune", "Frank Herbert", 1965, false}
      ]

      # Prepare a statement for efficient multiple inserts
      {:ok, statement} =
        Postgrex.prepare(
          pid,
          "insert_books",
          "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);"
        )

      # Insert multiple books
      Enum.each(books_to_insert, fn {title, author, year, stock} ->
        Postgrex.execute!(pid, statement, [title, author, year, stock])
      end)

      IO.puts("Inserted 3 rows of data.")
    rescue
      e -> IO.inspect(e, label: "An error occurred")
    end
  end
end

# Run the script
CreateTable.run()
```

The above code does the following:

- Loads the connection configuration from `config/config.exs`.
- Connects to the Neon database using `Postgrex.start_link`.
- Drops the `books` table if it already exists to ensure a clean slate.
- Creates a table named `books` with columns for `id`, `title`, `author`, `publication_year`, and `in_stock`.
- Inserts a single book record using `Postgrex.query!`.
- Uses a prepared statement with `Postgrex.prepare` and `Postgrex.execute!` for efficiently inserting multiple records.

Run the script using the following command:

```bash
mix run create_table.exs
```

When the code runs successfully, it produces the following output:

```text
Connection established
Finished dropping table (if it existed).
Finished creating table.
Inserted a single book.
Inserted 3 rows of data.
```

### Read data

In your project directory, create a file named `read_data.exs`. This script connects to your Neon database and retrieves all rows from the `books` table.

```elixir
defmodule ReadData do
  def run do
    config = Application.get_all_env(:neon_elixir_quickstart)
    {:ok, pid} = Postgrex.start_link(config)
    IO.puts("Connection established")

    try do
      # Fetch all rows from the books table
      result = Postgrex.query!(pid, "SELECT * FROM books ORDER BY publication_year;", [])

      IO.puts("\n--- Book Library ---")

      for row <- result.rows do
        [id, title, author, year, in_stock] = row

        IO.puts(
          "ID: #{id}, Title: #{title}, Author: #{author}, Year: #{year}, In Stock: #{in_stock}"
        )
      end

      IO.puts("--------------------\n")
    rescue
      e -> IO.inspect(e)
    end
  end
end

ReadData.run()
```

The above code does the following:

- Loads the connection configuration and connects to the database.
- Uses a SQL `SELECT` statement to fetch all rows from the `books` table, ordered by `publication_year`.
- Iterates through the `rows` field of the `Postgrex.Result` struct.
- Prints each book's details in a formatted output.
Run the script using the following command:

```bash
mix run read_data.exs
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: false
--------------------
```

### Update data

In your project directory, create a file named `update_data.exs`. This script connects to your Neon database and updates the stock status of the book 'Dune' to `true`.

```elixir
defmodule UpdateData do
  def run do
    config = Application.get_all_env(:neon_elixir_quickstart)
    {:ok, pid} = Postgrex.start_link(config)
    IO.puts("Connection established")

    try do
      # Update a data row in the table
      Postgrex.query!(pid, "UPDATE books SET in_stock = $1 WHERE title = $2;", [true, "Dune"])
      IO.puts("Updated stock status for 'Dune'.")
    rescue
      e -> IO.inspect(e)
    end
  end
end

UpdateData.run()
```

The above code does the following:

- Loads the connection configuration and connects to the database.
- Uses a SQL `UPDATE` statement with parameters to change the `in_stock` status of the book 'Dune' to `true`.

Run the script using the following command:

```bash
mix run update_data.exs
```

After running this script, you can run `read_data.exs` again to verify that the row was updated.

```bash
mix run read_data.exs
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true
--------------------
```

> You can see that the stock status for 'Dune' has been updated to `true`.

### Delete data

In your project directory, create a file named `delete_data.exs`. This script connects to your Neon database and deletes the book '1984' from the `books` table.

```elixir
defmodule DeleteData do
  def run do
    config = Application.get_all_env(:neon_elixir_quickstart)
    {:ok, pid} = Postgrex.start_link(config)
    IO.puts("Connection established")

    try do
      # Delete a data row from the table
      Postgrex.query!(pid, "DELETE FROM books WHERE title = $1;", ["1984"])
      IO.puts("Deleted the book '1984' from the table.")
    rescue
      e -> IO.inspect(e)
    end
  end
end

DeleteData.run()
```

The above code does the following:

- Loads the connection configuration and connects to the database.
- Uses a SQL `DELETE` statement to remove the book '1984' from the `books` table.

Run the script using the following command:

```bash
mix run delete_data.exs
```

After running this script, you can run `read_data.exs` again to verify that the row was deleted.

```bash
mix run read_data.exs
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true
--------------------
```

> You can see that the book '1984' has been successfully deleted from the `books` table.
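Each script above issues standalone statements. If you need several statements to succeed or fail together, Postgrex also supports transactions via `Postgrex.transaction/3`. A minimal sketch, reusing the table and configuration from the examples above:

```elixir
config = Application.get_all_env(:neon_elixir_quickstart)
{:ok, pid} = Postgrex.start_link(config)

# Both statements commit together, or neither does.
{:ok, _} =
  Postgrex.transaction(pid, fn conn ->
    Postgrex.query!(conn, "UPDATE books SET in_stock = $1 WHERE title = $2;", [false, "Dune"])
    Postgrex.query!(conn, "UPDATE books SET in_stock = $1 WHERE title = $2;", [true, "The Hobbit"])
  end)
```

If any query inside the function raises, the transaction is rolled back automatically.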
## Next steps: Using an ORM or framework

While this guide demonstrates how to connect to Neon using raw SQL queries, for more advanced and maintainable data interactions in your Elixir applications, consider using an Object-Relational Mapping (ORM) framework. ORMs not only let you work with data as objects but also help manage schema changes through automated migrations, keeping your database structure in sync with your application models.

Explore the following resources to learn how to integrate ORMs with Neon:

- [Connect an Elixir Ecto application to Neon](https://neon.com/docs/guides/elixir-ecto)

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Elixir and Neon using Postgrex](https://github.com/neondatabase/examples/tree/main/with_elixir_postgrex)

## Resources

- [Postgrex Documentation](https://hexdocs.pm/postgrex/Postgrex.html)

---

# Source: https://neon.com/llms/guides-entity-migrations.txt

# Schema migration with Neon Postgres and Entity Framework

> The document outlines the process of performing schema migrations using Neon Postgres in conjunction with Entity Framework, detailing steps for setting up and executing migrations within a Neon database environment.

## Source

- [Schema migration with Neon Postgres and Entity Framework HTML](https://neon.com/docs/guides/entity-migrations): The original HTML version of this documentation

[Entity Framework](https://learn.microsoft.com/en-us/ef/) is a popular Object-Relational Mapping (ORM) framework for .NET applications. It simplifies database access by allowing developers to work with domain-specific objects and properties without focusing on the underlying database tables and columns. Entity Framework also provides a powerful migration system that enables you to define and manage database schema changes over time.

This guide demonstrates how to use Entity Framework with the Neon Postgres database. We'll create a simple .NET application and walk through the process of setting up the database, defining models, and generating and running migrations to manage schema changes.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A recent version of the [.NET SDK](https://dotnet.microsoft.com/en-us/download/dotnet) installed on your local machine. This guide uses .NET 8.0, which is the current Long-Term Support (LTS) version.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select a project or click the **New Project** button to create a new one.

### Retrieve your Neon database connection string

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It should appear similar to the following:

```bash
postgresql://username:password@hostname/dbname?sslmode=require&channel_binding=require
```

The Postgres client library we use in this guide requires the connection string to be in the following format:

```bash
Host=hostname;Port=5432;Database=dbname;Username=username;Password=password;SSLMode=Require
```

Construct the connection string in this format using the correct values for your Neon connection URI. Keep it handy for later use.
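As a concrete illustration of that mapping (the credentials below are made-up example values):

```bash
# Neon connection URI:
#   postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require
# Equivalent keyword/value format for the .NET client:
#   Host=ep-cool-darkness-123456.us-east-2.aws.neon.tech;Port=5432;Database=neondb;Username=alex;Password=AbC123dEf;SSLMode=Require
```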
**Note**: Neon supports both direct and pooled database connection strings, which you can find by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can be prone to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations. For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

## Setting up the Entity Framework project

### Create a new .NET project

Open your terminal and run the following command to create a new .NET console application:

```bash
dotnet new console -o guide-neon-entityframework
cd guide-neon-entityframework
```

### Install dependencies

Run the following commands to install the necessary NuGet packages:

```bash
dotnet add package Microsoft.EntityFrameworkCore
dotnet add package Microsoft.EntityFrameworkCore.Design
dotnet add package Microsoft.AspNetCore.App
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package dotenv.net
```

These packages include the Entity Framework Core libraries, the design-time components for migrations, and the Npgsql provider for PostgreSQL.

We will also need the `EF Core` tools to generate and run migrations. Install the `dotnet-ef` tool globally:

```bash
dotnet tool install --global dotnet-ef
```

### Set up the database configuration

Create a new file named `.env` in the project root directory and add the following configuration:

```bash
DATABASE_URL=NEON_POSTGRES_CONNECTION_STRING
```

Replace `NEON_POSTGRES_CONNECTION_STRING` with the **formatted** connection string you constructed earlier.

## Defining data models and running migrations

### Create the data models

Create a new file named `Models.cs` in the project directory and define the data models for your application:

```csharp
// Models.cs
using System;
using Microsoft.EntityFrameworkCore;

namespace GuideNeonEF.Models
{
    public class Author
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Bio { get; set; }
        public DateTime CreatedAt { get; set; }
    }

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public int AuthorId { get; set; }
        public Author Author { get; set; }
        public DateTime CreatedAt { get; set; }
    }
}
```

This code defines two entities: `Author` and `Book`. The `Author` entity represents an author with properties for name, bio, and created timestamp. The `Book` entity represents a book with properties for title, author (as a foreign key to the `Author` entity), and created timestamp.
Also, create a new file named `ApplicationDbContext.cs` in the project directory and add the following code:

```csharp
// ApplicationDbContext.cs
using Microsoft.EntityFrameworkCore;
using GuideNeonEF.Models;
using dotenv.net;

namespace GuideNeonEF
{
    public class ApplicationDbContext : DbContext
    {
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            if (!optionsBuilder.IsConfigured)
            {
                DotEnv.Load();
                optionsBuilder.UseNpgsql(Environment.GetEnvironmentVariable("DATABASE_URL"));
            }
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Author>()
                .Property(a => a.CreatedAt)
                .HasDefaultValueSql("Now()");
            modelBuilder.Entity<Book>()
                .Property(b => b.CreatedAt)
                .HasDefaultValueSql("Now()");

            modelBuilder.Seed();
        }

        public DbSet<Author> Authors { get; set; }
        public DbSet<Book> Books { get; set; }
    }
}
```

The `ApplicationDbContext` class derives from `DbContext` and represents the database context. It includes the method where we configure the database connection and seed the database at initialization. We also set default values for the `CreatedAt` properties of the `Author` and `Book` entities.

### Add seeding script

To seed the database with some initial data, create another script named `ModelBuilderExtensions.cs` in the project directory and add the following code:

```csharp
// ModelBuilderExtensions.cs
using Microsoft.EntityFrameworkCore;
using GuideNeonEF.Models;

namespace GuideNeonEF
{
    public static class ModelBuilderExtensions
    {
        public static void Seed(this ModelBuilder modelBuilder)
        {
            var authors = new[]
            {
                new Author { Id = 1, Name = "J.R.R. Tolkien", Bio = "The creator of Middle-earth and author of The Lord of the Rings." },
                new Author { Id = 2, Name = "George R.R. Martin", Bio = "The author of the epic fantasy series A Song of Ice and Fire." },
                new Author { Id = 3, Name = "J.K. Rowling", Bio = "The creator of the Harry Potter series." }
            };
            modelBuilder.Entity<Author>().HasData(authors);

            var books = new[]
            {
                new Book { Id = 1, Title = "The Fellowship of the Ring", AuthorId = 1 },
                new Book { Id = 2, Title = "The Two Towers", AuthorId = 1 },
                new Book { Id = 3, Title = "The Return of the King", AuthorId = 1 },
                new Book { Id = 4, Title = "A Game of Thrones", AuthorId = 2 },
                new Book { Id = 5, Title = "A Clash of Kings", AuthorId = 2 },
                new Book { Id = 6, Title = "Harry Potter and the Philosopher's Stone", AuthorId = 3 },
                new Book { Id = 7, Title = "Harry Potter and the Chamber of Secrets", AuthorId = 3 }
            };
            modelBuilder.Entity<Book>().HasData(books);
        }
    }
}
```

This code defines a static method `Seed` that populates the database with some initial authors and books. Entity Framework will include this data when generating database migrations.

### Generate migration files

To generate migration files based on the defined models, run the following command:

```bash
dotnet ef migrations add InitialCreate
```

This command detects the new `Author` and `Book` entities and generates migration files in the `Migrations` directory to create the corresponding tables in the database.

### Apply the migration

To apply the migration and create the tables in the Neon Postgres database, run the following command:

```bash
dotnet ef database update
```

This command executes the migration file and creates the necessary tables in the database. It will also seed the database with the initial data defined in the `Seed` method.
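If you'd like to review the SQL that a migration will execute before applying it against Neon, EF Core can render it as a script:

```bash
# Prints the SQL for all migrations; add --idempotent for a re-runnable script
dotnet ef migrations script
```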
## Creating the web application

### Implement the API endpoints

The project directory has a `Program.cs` file that contains the application entry point. Replace the contents of this file with the following code:

```csharp
// Program.cs
using Microsoft.EntityFrameworkCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using GuideNeonEF;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<ApplicationDbContext>();

var app = builder.Build();
app.UseRouting();

app.MapGet("/authors", async (ApplicationDbContext db) =>
    await db.Authors.ToListAsync());

app.MapGet("/books/{authorId}", async (int authorId, ApplicationDbContext db) =>
    await db.Books.Where(b => b.AuthorId == authorId).ToListAsync());

app.Run();
```

This code sets up a simple web application with two endpoints: `/authors` and `/books/[authorId]`. The `/authors` endpoint returns a list of all authors, while the `/books/[authorId]` endpoint returns a list of books written by the author with the specified ID.

### Test the application

To test the application, run the following command:

```bash
dotnet run
```

This will start a local web server at `http://localhost:5000`. Navigate to these endpoints in your browser to view the seeded data.

```bash
curl http://localhost:5000/authors
curl http://localhost:5000/books/1
```

## Applying schema changes

We'll see how to handle schema changes by adding a new property `Country` to the `Author` entity to store the author's country of origin.

### Update the data model

Open the `Models.cs` file and add a new property to the `Author` entity:

```csharp
// Models.cs
public class Author
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Bio { get; set; }
    public DateTime CreatedAt { get; set; }
    public string Country { get; set; }
}
```

Also, update the seed data entries for the `Author` model to include the `Country` property:

```csharp
// ModelBuilderExtensions.cs
namespace GuideNeonEF
{
    public static class ModelBuilderExtensions
    {
        public static void Seed(this ModelBuilder modelBuilder)
        {
            var authors = new[]
            {
                new Author { Id = 1, Name = "J.R.R. Tolkien", Bio = "The creator of Middle-earth and author of The Lord of the Rings.", Country = "United Kingdom" },
                new Author { Id = 2, Name = "George R.R. Martin", Bio = "The author of the epic fantasy series A Song of Ice and Fire.", Country = "United States" },
                new Author { Id = 3, Name = "J.K. Rowling", Bio = "The creator of the Harry Potter series.", Country = "United Kingdom" }
            };
            modelBuilder.Entity<Author>().HasData(authors);
            ...
        }
    }
}
```

### Generate and run the migration

To generate a new migration file for the above schema change, run the following command in the terminal:

```bash
dotnet ef migrations add AddCountryToAuthor
```

This command detects the updated `Author` entity and generates a new migration file to add the new column to the corresponding table in the database. It will also include upserting the seed data with the new property added.

Now, to apply the migration, run the following command:

```bash
dotnet ef database update
```

### Test the schema change

Run the application again:

```bash
dotnet run
```

Now, if you navigate to the `/authors` endpoint, you should see the new `Country` property included in the response.

```bash
curl http://localhost:5000/authors
```
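The response should now look roughly like the following (timestamps are illustrative, and field casing depends on your JSON serializer settings):

```json
[
  { "id": 1, "name": "J.R.R. Tolkien", "bio": "The creator of Middle-earth and author of The Lord of the Rings.", "country": "United Kingdom", "createdAt": "2024-01-01T00:00:00Z" },
  { "id": 2, "name": "George R.R. Martin", "bio": "The author of the epic fantasy series A Song of Ice and Fire.", "country": "United States", "createdAt": "2024-01-01T00:00:00Z" }
]
```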
- [Migrations with Neon and Entity Framework](https://github.com/neondatabase/guide-neon-entityframework): Run Neon database migrations in an Entity Framework project

## Conclusion

In this guide, we demonstrated how to set up an Entity Framework project with Neon Postgres, define data models, generate migrations, and run them. Entity Framework's migration system makes it easy to interact with the database and manage schema evolution over time.

## Resources

For more information on the tools and concepts used in this guide, refer to the following resources:

- [Entity Framework Core Documentation](https://learn.microsoft.com/en-us/ef/core/)
- [Neon Postgres](https://neon.com/docs/introduction)

---

# Source: https://neon.com/llms/guides-exograph.txt

# Use Exograph with Neon

> The document explains how to integrate Exograph with Neon, detailing the steps for setting up and configuring Exograph to work with Neon's database services.

## Source

- [Use Exograph with Neon HTML](https://neon.com/docs/guides/exograph): The original HTML version of this documentation

_This guide was contributed by the Exograph team_

[Exograph](https://exograph.dev) is a new approach to building GraphQL backends. With it, you can effortlessly create flexible, secure, high-performing GraphQL backends in minutes. Powered by a Rust-based runtime, Exograph ensures fast startup times, efficient execution, and minimal memory consumption.

Exograph comes equipped with a comprehensive set of tools designed to support every stage of the development lifecycle: from initial development to deployment to ongoing maintenance. Exograph supports Postgres for data persistence, which makes it a great fit to use with Neon.

## Prerequisites

- Exograph CLI. See [Install Exograph](https://exograph.dev/docs/getting-started).
- A Neon project. See [Create a Neon project](https://neon.com/docs/manage/projects#create-a-project).

## Create a backend with Exograph

Let's create a starter project with Exograph. Run the following commands:

```bash
exo new todo
cd todo
```

You can check the code it created by examining the `src/index.exo` file (which has a definition for the `Todo` type). If you'd like, you can also try the [yolo](https://exograph.dev/docs/cli-reference/development/yolo) mode by running the `exo yolo` command.

Next, let's set up the Neon database.

## Create the schema in Neon

1. Navigate to the Neon Console, select your project, and copy the connection string, which will look something like this: `postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require`.

2. Create the schema in Neon using the Exograph CLI, piping the output to `psql` with your connection string:

```bash
exo schema create | psql <connection-string>
```

## Launch the backend

Pass your connection string to Exograph through the `EXO_POSTGRES_URL` environment variable:

```bash
EXO_POSTGRES_URL=<connection-string> exo dev
```

It will print the necessary information for connecting to the backend.

```raw
Starting server in development mode...
Watching the src directory for changes...
Verifying new model...
Started server on 0.0.0.0:9876 in 717.19 ms
- Playground hosted at: http://0.0.0.0:9876/playground
- Endpoint hosted at: http://0.0.0.0:9876/graphql
```

That's it! You can now open [http://localhost:9876/playground](http://localhost:9876/playground) in your browser to see the GraphQL Playground.

You can create a todo by running the following mutation:

```graphql
mutation {
  createTodo(data: { title: "Set up Exograph with Neon", completed: true }) {
    id
  }
}
```

To get all todos, try the following query:

```graphql
query {
  todos {
    id
    title
    completed
  }
}
```

And you should see the todo you just added.
Please follow Exograph's [guide to creating a simple application](https://exograph.dev/docs/getting-started#creating-a-simple-application) for more details.

## Learn more

In this guide, we have created a basic todo backend using Exograph and Neon. You can extend this further by establishing relationships between types, implementing access control rules, and integrating custom business logic. Check out Exograph's [application tutorial](https://exograph.dev/docs/application-tutorial) for more details.

To deploy Exograph in the cloud and connect it to Neon, follow the guides below (select the "External Database" tab for Neon-specific instructions in each case):

1. Deploying on [Fly.io](https://exograph.dev/docs/deployment/flyio) (these instructions can be adapted to other cloud providers)
2. Deploying on [AWS Lambda](https://exograph.dev/docs/deployment/aws-lambda)

---

# Source: https://neon.com/llms/guides-express.txt

# Connect an Express application to Neon

> This document details the steps to connect an Express application to a Neon database, including configuring the database connection and using environment variables for secure credential management.

## Source

- [Connect an Express application to Neon HTML](https://neon.com/docs/guides/express): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from an Express application. Examples are provided for using the [Neon serverless driver](https://npmjs.com/package/@neondatabase/serverless), [node-postgres](https://www.npmjs.com/package/pg), and [Postgres.js](https://www.npmjs.com/package/postgres) clients. Use the client you prefer.

To connect to Neon from an Express application:

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create an Express project and add dependencies

1. Create an Express project and change to the newly created directory.

```shell
mkdir neon-express-example
cd neon-express-example
npm init -y
npm install express
```

2. Add project dependencies using one of the following commands:

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless dotenv
```

Tab: node-postgres

```shell
npm install pg dotenv
```

Tab: postgres.js

```shell
npm install postgres dotenv
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection details to it. Find your database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select Node.js from the **Connection string** dropdown. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

> Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your actual database credentials.

**Important**: To ensure the security of your data, never expose your Neon credentials to the browser.
## Configure the Postgres client

Add an `index.js` file to your project directory and add the following code snippet to connect to your Neon database:

Tab: Neon serverless driver

```javascript
require('dotenv').config();

const express = require('express');
const { neon } = require('@neondatabase/serverless');

const app = express();
const PORT = process.env.PORT || 4242;

app.get('/', async (_, res) => {
  const sql = neon(`${process.env.DATABASE_URL}`);
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  res.json({ version });
});

app.listen(PORT, () => {
  console.log(`Listening on http://localhost:${PORT}`);
});
```

Tab: node-postgres

```javascript
require('dotenv').config();

const { Pool } = require('pg');
const express = require('express');

const app = express();
const PORT = process.env.PORT || 4242;

app.get('/', async (_, res) => {
  const pool = new Pool({
    connectionString: process.env.DATABASE_URL,
  });
  const client = await pool.connect();
  const result = await client.query('SELECT version()');
  client.release();
  const { version } = result.rows[0];
  res.json({ version });
});

app.listen(PORT, () => {
  console.log(`Listening on http://localhost:${PORT}`);
});
```

Tab: postgres.js

```javascript
require('dotenv').config();

const express = require('express');
const postgres = require('postgres');

const app = express();
const PORT = process.env.PORT || 4242;

app.get('/', async (_, res) => {
  const sql = postgres(`${process.env.DATABASE_URL}`);
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  res.json({ version });
});

app.listen(PORT, () => {
  console.log(`Listening on http://localhost:${PORT}`);
});
```

## Run index.js

Run `node index.js` to view the result at [http://localhost:4242](http://localhost:4242):

```shell
{ version: 'PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit' }
```

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Express and Neon](https://github.com/neondatabase/examples/tree/main/with-express)

---

# Source: https://neon.com/llms/guides-file-storage.txt

# File storage

> The "File Storage" documentation outlines the procedures for managing file storage within the Neon database environment, detailing configuration, usage, and integration specifics for effective data handling.

## Source

- [File storage HTML](https://neon.com/docs/guides/file-storage): The original HTML version of this documentation

Applications often need to handle file uploads and storage, from user avatars and documents to images and other media. Neon does not yet provide a native file storage solution. Instead, we recommend combining Neon with a specialized storage service. The typical pattern looks like this:

1. Upload files from your application (client or backend) to an object storage provider or file management service.
2. Store references—such as the file URL, unique key, or identifier—and related metadata like user ID, upload timestamp, file type, size, and permissions in your Neon Postgres database.

This pattern separates file storage from relational data management, with purpose-built services like S3 or R2 handling file storage and Neon managing your data.
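As a minimal sketch of this pattern (assuming AWS S3 via the `@aws-sdk/client-s3` package, the Neon serverless driver, a bucket named `my-app-uploads`, and a hypothetical `files` metadata table), the upload-then-record flow might look like this:

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { neon } from '@neondatabase/serverless';

const s3 = new S3Client({ region: 'us-east-1' });
const sql = neon(process.env.DATABASE_URL!);

// Upload the file bytes to object storage, then record a reference in Postgres.
// The bucket name and `files` table schema are assumptions for this sketch.
async function uploadAndRecord(userId: number, key: string, body: Buffer, contentType: string) {
  // 1. Upload the file to the storage provider
  await s3.send(
    new PutObjectCommand({ Bucket: 'my-app-uploads', Key: key, Body: body, ContentType: contentType })
  );

  // 2. Store the reference and metadata in Neon
  await sql`
    INSERT INTO files (user_id, object_key, content_type, size_bytes, uploaded_at)
    VALUES (${userId}, ${key}, ${contentType}, ${body.length}, now())
  `;
}
```

Your application can then query the `files` table to list a user's uploads or generate download URLs, while the storage service handles the bytes themselves.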
## Options for external storage

You can integrate Neon with a variety of storage solutions:

- S3-compatible object storage: Services like [AWS S3](https://aws.amazon.com/pm/serv-s3/), [Cloudflare R2](https://www.cloudflare.com/en-in/developer-platform/products/r2/), and [Backblaze B2](https://www.backblaze.com/cloud-storage) offer file storage via the widely-adopted S3 API.
- File and media management SaaS platforms: Services like [ImageKit](https://imagekit.io/), [Cloudinary](https://cloudinary.com/), [Uploadcare](https://uploadcare.com/), or [Filestack](https://www.filestack.com/) provide higher-level abstractions, often including additional features like image optimization, transformations, and SDKs, while managing the underlying storage infrastructure for you.

- [AWS S3](https://neon.com/docs/guides/aws-s3): Upload files to AWS S3 and store metadata in Neon
- [Azure Blob Storage](https://neon.com/docs/guides/azure-blob-storage): Upload files to Azure Blob Storage and store metadata in Neon
- [Backblaze B2](https://neon.com/docs/guides/backblaze-b2): Upload files to Backblaze B2 and store metadata in Neon
- [Cloudflare R2](https://neon.com/docs/guides/cloudflare-r2): Upload files to Cloudflare R2 and store metadata in Neon
- [Cloudinary](https://neon.com/docs/guides/cloudinary): Upload files to Cloudinary and store metadata in Neon
- [ImageKit](https://neon.com/docs/guides/imagekit): Upload files to ImageKit and store metadata in Neon
- [Uploadcare](https://neon.com/docs/guides/uploadcare): Upload files to Uploadcare and store metadata in Neon

---

# Source: https://neon.com/llms/guides-flyway-multiple-environments.txt

# Manage multiple database environments

> The document outlines how to manage multiple database environments using Flyway with Neon, detailing steps for setting up, configuring, and deploying migrations across different environments.

## Source

- [Manage multiple database environments HTML](https://neon.com/docs/guides/flyway-multiple-environments): The original HTML version of this documentation

With Flyway, you can manage and track changes to your database schema, ensuring that the database evolves consistently across different environments. When automating releases, there are often multiple environments or a chain of environments that you must deliver changes to in a particular order. Such environments might include _development_, _staging_, and _production_.

In this guide, we'll show you how to use Neon's branching feature to spin up a branch for each environment and how to configure Flyway to manage schema changes across those environments.

## Prerequisites

- A Flyway installation. See [Get started with Flyway and Neon](https://neon.com/docs/guides/flyway) for installation instructions.
- A Neon account and project. See [Sign up](https://neon.com/docs/get-started/signing-up).
- A database. This guide uses the ready-to-use `neondb` database on the `main` branch of your Neon project. You can create your own database if you like. See [Create a database](https://neon.com/docs/manage/databases#create-a-database) for instructions.

## Add a table to your database

Set up a database to work with by adding a table to your `neondb` database on the `main` branch of your Neon project. If you completed [Get started with Flyway and Neon](https://neon.com/docs/guides/flyway), you might already have this `person` table created. We'll consider this your _production_ environment database.
If you still need to create the `person` table, open the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), and run the following statement:

```sql
create table person (
    ID int not null,
    NAME varchar(100) not null
);
```

## Create databases for development and staging

Using Neon's branching feature, create your _development_ and _staging_ databases. When you create a branch in Neon, you are creating a copy-on-write clone of the parent branch that includes all databases and roles that exist on the parent, and each branch is an isolated Postgres instance with its own compute resources.

Perform these steps twice, once for your _development_ branch and once for your _staging_ branch.

Tab: Console

1. In the Neon Console, select your project.
2. Select **Branches**.
3. Click **New Branch** to open the branch creation dialog.
4. Enter a name for the branch. For example, name the branch for the environment (_development_ or _staging_).
5. Select a parent branch. This should be the branch where you created the `person` table.
6. Leave the other default settings and click **Create Branch**.

Tab: CLI

```bash showLineNumbers
neon branches create --name development
```

Tab: API

```bash showLineNumbers
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/{project_id}/branches \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API" \
     --header 'Content-Type: application/json' \
     --data '
{
  "branch": {
    "name": "development"
  },
  "endpoints": [
    {
      "type": "read_only"
    }
  ]
}
' | jq
```

When you are finished, you should have a _development_ branch and a _staging_ branch.

## Retrieve your Neon database connection strings

From the Neon **Dashboard**, click **Connect** to retrieve the connection string for each branch (`main`, `development`, and `staging`) from the **Connect to your database** modal. Use the **Branch** drop-down menu to select each branch before copying the connection string.

Your connection strings should look something like the ones shown below. Note that the hostname differs for each (the part starting with `ep-` and ending with `aws.neon.tech`). That's because each branch is hosted on its own compute.

- **main**

```bash
jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?user=alex&password=AbC123dEf
```

- **development**

```bash
jdbc:postgresql://ep-mute-night-47642501.us-east-2.aws.neon.tech/neondb?user=alex&password=AbC123dEf
```

- **staging**

```bash
jdbc:postgresql://ep-shrill-shape-27763949.us-east-2.aws.neon.tech/neondb?user=alex&password=AbC123dEf
```

## Configure Flyway to connect to each environment

To enable Flyway to connect to multiple environments, we'll create a configuration file for each environment and add the environment-specific connection details. When running Flyway, you'll specify the configuration file to be used.

**Note**: By default, Flyway loads its configuration from the default `conf/flyway.conf` file. This is true even if you specify another configuration file when running Flyway. You can take advantage of this behavior by defining non-environment specific configuration settings in the default `conf/flyway.conf` file, and placing your environment-specific settings in separate configuration files, as we'll do here.

1. Switch to your Flyway `/conf` directory and create the following configuration files, one for each environment, by copying the default configuration file.
For example:

```bash
cd ~/flyway-x.y.z/conf
cp flyway.conf env_dev.conf
cp flyway.conf env_staging.conf
cp flyway.conf env_prod.conf
```

2. In each configuration file, update the following items with the correct connection details for that database environment. The `url` setting will differ for each environment (in `env_prod.conf`, the `url` will point to `main`). In this example, where you are the only user, the `user` and `password` settings should be the same for each of your three database environments.

```bash
flyway.url=jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432/neondb
flyway.user=alex
flyway.password=AbC123dEf
flyway.locations=filesystem:/home/alex/flyway-x.y.z/sql
flyway.baselineOnMigrate=true
```

- The `flyway.locations` setting tells Flyway where to look for your migration files. We'll create them in the `/sql` directory in a later step.
- The `flyway.baselineOnMigrate=true` setting tells Flyway to perform a baseline action when you run the `migrate` command on a non-empty schema with no Flyway schema history table. The schema will then be initialized with the `baselineVersion` before executing migrations. Only migrations above the `baselineVersion` will then be applied. This is useful for initial Flyway deployments on projects with an existing database. You can disable this setting by commenting it out or setting it to false after applying your first migration on the database.

## Create a migration

Create a migration file called `V2__Add_people.sql`, add it to your Flyway `/sql` directory, and add the following statements to the file:

```sql
insert into person (ID, NAME) values (1, 'Alex');
insert into person (ID, NAME) values (2, 'Mr. Lopez');
insert into person (ID, NAME) values (3, 'Ms. Smith');
```

### Run the migration on each environment

Run the migration on each environment in order by specifying the environment's configuration file in the `flyway migrate` command. You'll start with your `development` environment, then `staging`, and then finally, `production`.

Tab: Development

```bash showLineNumbers
flyway migrate -configFiles="conf/env_dev.conf"
```

Tab: Staging

```bash showLineNumbers
flyway migrate -configFiles="conf/env_staging.conf"
```

Tab: Production

```bash showLineNumbers
flyway migrate -configFiles="conf/env_prod.conf"
```

A successful migration command returns output similar to the following:

```bash
Database: jdbc:postgresql://ep-nameless-unit-49929920.us-east-2.aws.neon.tech/neondb (PostgreSQL 15.4)
Schema history table "public"."flyway_schema_history" does not exist yet
Successfully validated 1 migration (execution time 00:00.199s)
Creating Schema History table "public"."flyway_schema_history" with baseline ...
Successfully baselined schema with version: 1
Current version of schema "public": 1
Migrating schema "public" to version "2 - Add people"
Successfully applied 1 migration to schema "public", now at version v2 (execution time 00:00.410s)
A Flyway report has been generated here: /home/alex/flyway-x.y.z/report.html
```

After you run the migration commands, your database should be consistent across all three environments. You can verify that the data was added to each database by viewing the branch and table on the **Tables** page in the Neon Console. Select **Tables** from the sidebar and select your database.

## Conclusion

You've seen how you can instantly create new database environments with Neon's branching feature and how to keep schemas consistent across different environments using Flyway.
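For example, a minimal release script (a sketch, assuming the `env_*.conf` files created above) could promote changes through the chain in order:

```bash
#!/bin/bash
# Apply pending migrations to each environment in promotion order,
# stopping at the first failure.
for env in dev staging prod; do
  flyway migrate -configFiles="conf/env_${env}.conf" || exit 1
done
```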
The steps in this guide were performed manually from the command line but could be easily integrated into your release management pipeline. Neon provides a [CLI](https://neon.com/docs/reference/neon-cli) and [API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) for automating various tasks in Neon, such as branch creation, which you can also integrate into your release automation.

## References

- [Flyway documentation](https://documentation.red-gate.com/fd/flyway-documentation-138346877.html)
- [Flyway command-line tool](https://documentation.red-gate.com/fd/command-line-184127404.html)
- [Flyway command-line quickstart](https://documentation.red-gate.com/fd/quickstart-command-line-184127576.html)
- [A simple way to manage multi-environment deployments](https://flywaydb.org/blog/a-simple-way-to-manage-multi-environment-deployments)

---

# Source: https://neon.com/llms/guides-flyway.txt

# Get started with Flyway and Neon

> The document guides Neon users on integrating Flyway for database version control, detailing steps to configure and execute Flyway migrations within the Neon environment.

## Source

- [Get started with Flyway and Neon HTML](https://neon.com/docs/guides/flyway): The original HTML version of this documentation

Flyway is a database migration tool that facilitates version control for databases. It allows developers to manage and track changes to the database schema, ensuring that the database evolves consistently across different environments.

This guide steps you through installing the Flyway command-line tool, configuring Flyway to connect to a Neon database, and running database migrations. The guide follows the setup described in the [Flyway command-line quickstart](https://documentation.red-gate.com/fd/quickstart-command-line-184127576.html).

## Prerequisites

- A Neon account. See [Sign up](https://neon.com/docs/get-started/signing-up).
- A Neon project. See [Create your first project](https://neon.com/docs/get-started/setting-up-a-project).
- A database. This guide uses the ready-to-use `neondb` database. You can create your own database if you like. See [Create a database](https://neon.com/docs/manage/databases#create-a-database) for instructions.

## Download and extract Flyway

1. Download the latest version of the [Flyway command-line tool](https://documentation.red-gate.com/fd/command-line-277579359.html).

2. Extract the Flyway files. For example:

```bash
cd ~/Downloads
tar -xzvf flyway-commandline-x.y.z-linux-x64.tar.gz -C ~/
```

3. Open a command prompt to view the contents of your Flyway installation:

```bash
cd ~/flyway-x.y.z
ls
assets  drivers  flyway.cmd  jre  licenses    rules
conf    flyway   jars        lib  README.txt  sql
```

## Set your path variable

Add the Flyway directory to your `PATH` so that you can execute Flyway commands from any location.

Tab: bash

```bash
echo 'export PATH=$PATH:~/flyway-x.y.z' >> ~/.bashrc
source ~/.bashrc
```

Tab: zsh

```zsh
echo 'export PATH=$PATH:~/flyway-x.y.z' >> ~/.zshrc
source ~/.zshrc
```

## Retrieve your Neon database connection string

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select the **Java** option from the **Connection string** drop-down menu.
Your Java connection string should look something like this:

```bash
jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?user=alex&password=AbC123dEf
```

## Configure Flyway

To configure Flyway to connect to your Neon database, create a `flyway.conf` file in the `/conf` directory. Include the following items, modified to use the connection details you retrieved in the previous step.

```bash
flyway.url=jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432/neondb
flyway.user=alex
flyway.password=AbC123dEf
flyway.locations=filesystem:/home/alex/flyway-x.y.z/sql
```

## Create the first migration

Create an `sql` directory to hold your first migration file. We'll name the file `V1__Create_person_table.sql` and include the following command, which creates a `person` table in your database.

```sql
create table person (
    ID int not null,
    NAME varchar(100) not null
);
```

## Migrate the database

Run the `flyway migrate` command to migrate your database:

```bash
flyway migrate
```

If the command was successful, you'll see output similar to the following:

```bash
Database: jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb (PostgreSQL 15.4)
Successfully validated 1 migration (execution time 00:00.008s)
Creating Schema History table: "public"."flyway_schema_history"
Current version of schema "public": << Empty Schema >>
Migrating schema "public" to version 1 - Create person table
Successfully applied 1 migration to schema "public" (execution time 00:00.033s)
```

To verify that the `person` table was created, you can view it on the **Tables** page in the Neon Console. Select **Tables** from the sidebar and select your database.

## Add a second migration

Run another migration to add data to the table. Add a second migration file to the `/sql` directory called `V2__Add_people.sql` and add the following statements:

```sql
insert into person (ID, NAME) values (1, 'Alex');
insert into person (ID, NAME) values (2, 'Mr. Lopez');
insert into person (ID, NAME) values (3, 'Ms. Smith');
```

Run the migration:

```bash
flyway migrate
```

If the command was successful, you'll see output similar to the following:

```bash
Database: jdbc:postgresql://ep-red-credit-85617375.us-east-2.aws.neon.tech/neondb (PostgreSQL 15.4)
Successfully validated 2 migrations (execution time 00:00.225s)
Current version of schema "public": 1
Migrating schema "public" to version "2 - Add people"
Successfully applied 1 migration to schema "public", now at version v2 (execution time 00:00.388s)
A Flyway report has been generated here: /home/alex/flyway-x.y.z/sql/report.html
```

You can verify that the data was added by viewing the table on the **Tables** page in the Neon Console. Select **Tables** from the sidebar and select your database.

## View your schema migration history

When you run the `flyway migrate` command, Flyway registers the schema changes in the `flyway_schema_history` table, which Flyway automatically creates in your database. You can view the table by running the [flyway info](https://documentation.red-gate.com/fd/info-277578881.html) command.
```bash flyway info Database: jdbc:postgresql://ep-red-credit-85617375.us-east-2.aws.neon.tech/neondb (PostgreSQL 15.4) Schema version: 2 +-----------+---------+---------------------+------+---------------------+---------+----------+ | Category | Version | Description | Type | Installed On | State | Undoable | +-----------+---------+---------------------+------+---------------------+---------+----------+ | Versioned | 1 | Create person table | SQL | 2023-10-22 19:00:39 | Success | No | | Versioned | 2 | Add people | SQL | 2023-10-22 19:04:42 | Success | No | +-----------+---------+---------------------+------+---------------------+---------+----------+ A Flyway report has been generated here: /home/alex/flyway-x.y.z/sql/report.html ``` You can also view the table on the **Tables** page in the Neon Console. Select **Tables** from the sidebar and select your database. ## Next steps Learn how you can use Flyway with multiple database environments. See [Use Flyway with multiple database environments](https://neon.com/docs/guides/flyway-multiple-environments). ## References - [Flyway documentation](https://documentation.red-gate.com/fd/flyway-documentation-138346877.html) - [Flyway command-line tool](https://documentation.red-gate.com/fd/command-line-184127404.html) - [Flyway command-line quickstart](https://documentation.red-gate.com/fd/quickstart-command-line-184127576.html) --- # Source: https://neon.com/llms/guides-go.txt # Connect a Go application to Neon Postgres > The document details the steps required to connect a Go application to a Neon database, including configuring the database connection and using the Go programming language to interact with Neon. ## Source - [Connect a Go application to Neon Postgres HTML](https://neon.com/docs/guides/go): The original HTML version of this documentation This guide describes how to create a Neon project and connect to it from a Go (Golang) application using [pgx](https://github.com/jackc/pgx), a high-performance and feature-rich PostgreSQL driver for Go. You'll learn how to connect to your Neon database from a Go application, and perform basic Create, Read, Update, and Delete (CRUD) operations. ## Prerequisites - A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup). - Go 1.18 or later. If you do not have Go installed, see the [official installation guide](https://go.dev/doc/install). ## Create a Neon project If you do not have one already, create a Neon project. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech). 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your Go application. ## Create a Go project For your Go project, create a project directory, initialize a Go module, and add the required libraries. 1. Create a project directory and change into it. ```bash mkdir neon-go-quickstart cd neon-go-quickstart ``` > Open the directory in your preferred code editor (e.g., VS Code, GoLand). 2. Initialize a Go module. This command creates a `go.mod` file to track your project's dependencies. ```bash go mod init neon-go-quickstart ``` 3. Add the required Go packages using `go get`. - `pgx/v5`: The database driver for connecting to Postgres. - `godotenv`: A helper library to manage environment variables from a `.env` file. 
```bash go get github.com/jackc/pgx/v5 github.com/joho/godotenv ``` This will download the packages and add them to your `go.mod` and `go.sum` files. ## Store your Neon connection string Create a file named `.env` in your project's root directory. This file will securely store your database connection string. 1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**. 2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal. 3. Copy the connection string, which includes your password. 4. Add the connection string to your `.env` file as shown below. ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require" ``` > Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your actual database credentials. ## Examples This section provides example Go scripts that demonstrate how to connect to your Neon database and perform basic operations such as [creating a table](https://neon.com/docs/guides/go#create-a-table-and-insert-data), [reading data](https://neon.com/docs/guides/go#read-data), [updating data](https://neon.com/docs/guides/go#update-data), and [deleting data](https://neon.com/docs/guides/go#deleting-data). ### Create a table and insert data In your project directory, create a file named `create_table.go`. This script connects to your Neon database, creates a table named `books`, and inserts some sample data into it. ```go package main import ( "context" "fmt" "os" "github.com/jackc/pgx/v5" "github.com/joho/godotenv" ) func main() { // Load environment variables from .env file err := godotenv.Load() if err != nil { fmt.Fprintf(os.Stderr, "Error loading .env file: %v\n", err) os.Exit(1) } // Get the connection string from the environment variable connString := os.Getenv("DATABASE_URL") if connString == "" { fmt.Fprintf(os.Stderr, "DATABASE_URL not set\n") os.Exit(1) } ctx := context.Background() // Connect to the database conn, err := pgx.Connect(ctx, connString) if err != nil { fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err) os.Exit(1) } defer conn.Close(ctx) fmt.Println("Connection established") // Drop the table if it already exists _, err = conn.Exec(ctx, "DROP TABLE IF EXISTS books;") if err != nil { fmt.Fprintf(os.Stderr, "Unable to drop table: %v\n", err) os.Exit(1) } fmt.Println("Finished dropping table (if it existed).") // Create a new table _, err = conn.Exec(ctx, ` CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); `) if err != nil { fmt.Fprintf(os.Stderr, "Unable to create table: %v\n", err) os.Exit(1) } fmt.Println("Finished creating table.") // Insert a single book record _, err = conn.Exec(ctx, "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);", "The Catcher in the Rye", "J.D. Salinger", 1951, true, ) if err != nil { fmt.Fprintf(os.Stderr, "Unable to insert single row: %v\n", err) os.Exit(1) } fmt.Println("Inserted a single book.") // Data to be inserted booksToInsert := [][]interface{}{ {"The Hobbit", "J.R.R. 
Tolkien", 1937, true}, {"1984", "George Orwell", 1949, true}, {"Dune", "Frank Herbert", 1965, false}, } // Use CopyFrom for efficient bulk insertion copyCount, err := conn.CopyFrom( ctx, pgx.Identifier{"books"}, []string{"title", "author", "publication_year", "in_stock"}, pgx.CopyFromRows(booksToInsert), ) if err != nil { fmt.Fprintf(os.Stderr, "Unable to copy rows: %v\n", err) os.Exit(1) } fmt.Printf("Inserted %d rows of data.\n", copyCount) } ``` The above code does the following: - Loads the connection string from the `.env` file using the `godotenv` library. - Connects to the Neon database using `pgx.Connect`. The `defer conn.Close(ctx)` statement ensures the connection is closed when the `main` function exits. - Uses `conn.Exec` to run SQL commands that don't return rows, such as `DROP TABLE` and `CREATE TABLE`. - Inserts a single row using `conn.Exec` with parameterized query placeholders (`$1`, `$2`, etc.) to prevent SQL injection. - Uses `conn.CopyFrom` for efficient bulk insertion of multiple records. Run the script using the following command: ```bash go run create_table.go ``` When the code runs successfully, it produces the following output: ```text Connection established Finished dropping table (if it existed). Finished creating table. Inserted a single book. Inserted 3 rows of data. ``` ### Read data In your project directory, create a file named `read_data.go`. This script connects to your Neon database and retrieves all rows from the `books` table. ```go package main import ( "context" "fmt" "os" "github.com/jackc/pgx/v5" "github.com/joho/godotenv" ) func main() { err := godotenv.Load() if err != nil { fmt.Fprintf(os.Stderr, "Error loading .env file: %v\n", err) os.Exit(1) } connString := os.Getenv("DATABASE_URL") if connString == "" { fmt.Fprintf(os.Stderr, "DATABASE_URL not set\n") os.Exit(1) } ctx := context.Background() conn, err := pgx.Connect(ctx, connString) if err != nil { fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err) os.Exit(1) } defer conn.Close(ctx) fmt.Println("Connection established") // Fetch all rows from the books table rows, err := conn.Query(ctx, "SELECT * FROM books ORDER BY publication_year;") if err != nil { fmt.Fprintf(os.Stderr, "Query failed: %v\n", err) os.Exit(1) } defer rows.Close() fmt.Println("\n--- Book Library ---") for rows.Next() { var id, publicationYear int var title, author string var inStock bool err := rows.Scan(&id, &title, &author, &publicationYear, &inStock) if err != nil { fmt.Fprintf(os.Stderr, "Failed to scan row: %v\n", err) os.Exit(1) } fmt.Printf("ID: %d, Title: %s, Author: %s, Year: %d, In Stock: %t\n", id, title, author, publicationYear, inStock) } fmt.Println("--------------------\n") if err := rows.Err(); err != nil { fmt.Fprintf(os.Stderr, "Error during rows iteration: %v\n", err) os.Exit(1) } } ``` The above code does the following: - Connects to the database using the connection string from the `.env` file. - Uses `conn.Query` to execute a `SELECT` statement, which returns a `pgx.Rows` object. - Iterates through the rows using `rows.Next()`. - Uses `rows.Scan()` to copy the column values from the current row into Go variables. - Prints each book's details in a formatted output. - Checks for any errors that occurred during row iteration with `rows.Err()`. Run the script using the following command: ```bash go run read_data.go ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. 
Tolkien, Year: 1937, In Stock: true ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: false -------------------- ``` ### Update data In your project directory, create a file named `update_data.go`. This script connects to your Neon database and updates the stock status of the book 'Dune' to `true`. ```go package main import ( "context" "fmt" "os" "github.com/jackc/pgx/v5" "github.com/joho/godotenv" ) func main() { err := godotenv.Load() if err != nil { fmt.Fprintf(os.Stderr, "Error loading .env file: %v\n", err) os.Exit(1) } connString := os.Getenv("DATABASE_URL") if connString == "" { fmt.Fprintf(os.Stderr, "DATABASE_URL not set\n") os.Exit(1) } ctx := context.Background() conn, err := pgx.Connect(ctx, connString) if err != nil { fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err) os.Exit(1) } defer conn.Close(ctx) fmt.Println("Connection established") // Update a data row in the table _, err = conn.Exec(ctx, "UPDATE books SET in_stock = $1 WHERE title = $2;", true, "Dune") if err != nil { fmt.Fprintf(os.Stderr, "Update failed: %v\n", err) os.Exit(1) } fmt.Println("Updated stock status for 'Dune'.") } ``` The above code uses `conn.Exec` with a parameterized `UPDATE` statement to change the `in_stock` status of the book 'Dune'. Run the script using the following command: ```bash go run update_data.go ``` After running this script, you can run `read_data.go` again to verify that the row was updated. ```bash go run read_data.go ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true -------------------- ``` > You can see that the stock status for 'Dune' has been updated to `true`. ### Delete data In your project directory, create a file named `delete_data.go`. This script connects to your Neon database and deletes the book '1984' from the `books` table. ```go package main import ( "context" "fmt" "os" "github.com/jackc/pgx/v5" "github.com/joho/godotenv" ) func main() { err := godotenv.Load() if err != nil { fmt.Fprintf(os.Stderr, "Error loading .env file: %v\n", err) os.Exit(1) } connString := os.Getenv("DATABASE_URL") if connString == "" { fmt.Fprintf(os.Stderr, "DATABASE_URL not set\n") os.Exit(1) } ctx := context.Background() conn, err := pgx.Connect(ctx, connString) if err != nil { fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err) os.Exit(1) } defer conn.Close(ctx) fmt.Println("Connection established") // Delete a data row from the table _, err = conn.Exec(ctx, "DELETE FROM books WHERE title = $1;", "1984") if err != nil { fmt.Fprintf(os.Stderr, "Delete failed: %v\n", err) os.Exit(1) } fmt.Println("Deleted the book '1984' from the table.") } ``` The above code uses `conn.Exec` with a parameterized `DELETE` statement to remove the book '1984' from the `books` table. Run the script using the following command: ```bash go run delete_data.go ``` After running this script, you can run `read_data.go` again to verify that the row was deleted. 
```bash
go run read_data.go
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true
--------------------
```

> You can see that the book '1984' has been successfully deleted from the `books` table.

## Next steps: Using an ORM or framework

While this guide demonstrates how to connect to Neon using raw SQL queries, for more advanced and maintainable data interactions in your Go applications, consider using an Object-Relational Mapping (ORM) framework. ORMs not only let you work with data as objects but also help manage schema changes through automated migrations, keeping your database structure in sync with your application models.

Explore the following resources to learn how to integrate ORMs with Neon:

- [Connect a Go application to Neon using GORM](https://neon.com/guides/golang-gorm-postgres)

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Go and Neon using pgx](https://github.com/neondatabase/examples/tree/main/with-golang)

## Resources

- [pgx Documentation](https://pkg.go.dev/github.com/jackc/pgx/v5)

---

# Source: https://neon.com/llms/guides-grafana-cloud.txt

# Grafana Cloud integration

> The document details the steps for integrating Neon with Grafana Cloud, enabling users to monitor and visualize their Neon database metrics within the Grafana platform.

## Source

- [Grafana Cloud integration HTML](https://neon.com/docs/guides/grafana-cloud): The original HTML version of this documentation

What you will learn:

- How to set up the Grafana Cloud integration
- How to configure log forwarding
- The full list of externally-available metrics

External docs:

- [Grafana Cloud OTLP Documentation](https://grafana.com/docs/grafana-cloud/send-data/otlp/)
- [Grafana Cloud Authentication and Permissions](https://grafana.com/docs/grafana-cloud/account-management/authentication-and-permissions/)

The Grafana Cloud integration lets you monitor Neon database performance, resource utilization, and system health directly from Grafana Cloud. The integration requires [OTEL support](https://neon.com/docs/guides/opentelemetry), which is available with Neon's Scale plan.

## How it works

The integration uses Grafana Cloud's native OTLP endpoint to securely transmit Neon metrics and Postgres logs. By configuring the integration with your Grafana Cloud OTLP endpoint and authentication token, Neon automatically sends data from your project to your Grafana Cloud stack, where it's automatically routed to the appropriate storage backends (Mimir for metrics, Loki for logs, and Tempo for traces).

**Note**: Data is sent for all computes in your Neon project. For example, if you have multiple branches, each with an attached compute, both metrics and logs will be collected and sent for each compute.

### Neon metrics

The integration exports [a comprehensive set of metrics](https://neon.com/docs/guides/grafana-cloud#available-metrics) including:

- **Connection counts** — Tracks active and idle database connections.
- **Database size** — Monitors total size of all databases in bytes.
- **Replication delay** — Measures replication lag in bytes and seconds.
- **Compute metrics** — Includes CPU and memory usage statistics for your compute.
### Postgres logs

**Note (Beta)**: **Postgres logs export** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

With the Grafana Cloud integration, you can forward Postgres logs to your Grafana Cloud stack. These logs provide visibility into database activity, errors, and performance. See [Export Postgres logs to Grafana Cloud](https://neon.com/docs/guides/grafana-cloud#export-postgres-logs-to-grafana-cloud) for details.

## Prerequisites

Before getting started, ensure the following:

- You have a Neon account and project. If not, see [Sign up for a Neon account](https://neon.com/docs/get-started/signing-up).
- You have a Grafana Cloud account with access to the Grafana Cloud Portal.

## Set up the integration

1. **Get your Grafana Cloud OTLP configuration**

   1. Sign in to the [Grafana Cloud Portal](https://grafana.com/orgs/)
   1. Click on the **OpenTelemetry** card
   1. Copy your OTLP endpoint URL and authentication credentials from the configuration details

   **Tip**: The authentication key should follow the format `<instance-id>:<token>`

2. **Configure the Neon OpenTelemetry integration**

   1. In the Neon Console, navigate to the **Integrations** page in your Neon project
   1. Locate the **OpenTelemetry** card and click **Add**
   1. Select **HTTP** as the connection protocol (recommended)
   1. Enter your Grafana Cloud OTLP endpoint URL
   1. Choose **Bearer** authentication and paste your Grafana Cloud authentication token
   1. Configure the `service.name` resource attribute (e.g., "neon-postgres-production")
   1. Select what you want to export:
      - **Metrics**: System metrics and database statistics (CPU, memory, connections, etc.)
      - **Postgres logs**: Error messages, warnings, connection events, and system notifications
   1. Click **Add** to complete the integration

   **Tip**: You can change these settings later by editing your integration configuration from the **Integrations** page.

Once the integration is set up, Neon will start sending metrics and logs to your Grafana Cloud stack, where they'll be automatically stored in Mimir (metrics) and Loki (logs).

**Note**: Neon computes only send logs and metrics when they are active. If the [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature is enabled and a compute is suspended due to inactivity, no logs or metrics will be sent during the suspension. This may result in gaps in your data. If you notice missing data, check if your compute is suspended. You can verify a compute's status as `Idle` or `Active` on the **Branches** page in the Neon Console, and review **Suspend compute** events on the **System operations** tab of the **Monitoring** page. Additionally, if you are setting up the Grafana Cloud integration for a project with an inactive compute, you'll need to activate the compute before it can send data. To activate it, simply run a query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any connected client.

## Example usage

Once integrated, you can explore your Neon metrics and logs in Grafana Cloud using the Explore feature. Navigate to **Explore** in your Grafana Cloud instance and query metrics like `neon_connection_counts`, `neon_db_total_size`, and `host_cpu_seconds_total` using your Prometheus data source. You can also create custom dashboards and set alerts based on threshold values for critical metrics.
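For example, using metric names that appear in the dashboard JSON below, you might start with Explore queries like these:

```promql
# Active connections per database
sum by (datname) (neon_connection_counts{state="active"})

# Local file cache (LFC) hit rate as a percentage
neon_lfc_hits / (neon_lfc_hits + neon_lfc_misses) * 100
```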
## Import the Neon dashboard Import the provided dashboard JSON configuration to get started with pre-built visualizations: 1. In your Grafana Cloud stack, navigate to **Dashboards** → **New** → **Import** 2. Copy and paste the [dashboard JSON below](https://neon.com/docs/guides/grafana-cloud#dashboard-json) 3. Click **Load** and configure the dashboard settings 4. The dashboard will automatically detect your Neon metrics and display key performance indicators If any of the computes in your project are active, you should start seeing data in the resulting dashboard right away. By default, the dashboard shows metrics for all active endpoints in your project. You can filter results to one or more selected endpoints using the endpoint_id variable dropdown selector. ### Dashboard JSON Details: Copy Neon PostgreSQL Monitoring Dashboard JSON ```json { "id": null, "uid": "neon-complete-monitoring", "title": "Neon PostgreSQL", "description": "Comprehensive monitoring dashboard for Neon PostgreSQL with metrics and logs", "tags": ["neon", "postgresql", "database", "monitoring"], "timezone": "browser", "editable": true, "graphTooltip": 1, "refresh": "30s", "schemaVersion": 39, "version": 1, "time": { "from": "now-1h", "to": "now" }, "timepicker": { "refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"], "time_options": ["5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d"] }, "panels": [ { "id": 1, "title": "Database Overview", "type": "stat", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "sum(neon_connection_counts{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"})", "legendFormat": "Total Connections", "refId": "A" }, { "expr": "neon_db_total_size{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} / 1024 / 1024 / 1024", "legendFormat": "Database Size (GB)", "refId": "B" }, { "expr": "neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} / (neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} + neon_lfc_misses{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}) * 100", "legendFormat": "Cache Hit Rate %", "refId": "C" } ], "fieldConfig": { "defaults": { "unit": "short", "min": 0 }, "overrides": [ { "matcher": {"id": "byName", "options": "Cache Hit Rate %"}, "properties": [{"id": "unit", "value": "percent"}, {"id": "max", "value": 100}] }, { "matcher": {"id": "byName", "options": "Database Size (GB)"}, "properties": [{"id": "unit", "value": "decbytes"}] } ] }, "gridPos": {"h": 6, "w": 24, "x": 0, "y": 0} }, { "id": 2, "title": "Connection Activity", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "neon_connection_counts{state=\"active\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Active - {{datname}}", "refId": "A" }, { "expr": "neon_connection_counts{state=\"idle\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Idle - {{datname}}", "refId": "B" } ], "fieldConfig": { "defaults": { "unit": "short", "min": 0 } }, "gridPos": {"h": 8, "w": 12, "x": 0, "y": 6} }, { "id": 3, "title": "Database Size Growth", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "neon_pg_stats_userdb{kind=\"db_size\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "{{datname}} Size", "refId": "A" }, { "expr": "neon_db_total_size{endpoint_id=~\"$endpoint_id\", 
project_id=~\"$project_id\"}", "legendFormat": "Total Size", "refId": "B" } ], "fieldConfig": { "defaults": { "unit": "bytes", "min": 0 } }, "gridPos": {"h": 8, "w": 12, "x": 12, "y": 6} }, { "id": 4, "title": "CPU Usage", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "100 - (avg(rate(host_cpu_seconds_total{mode=\"idle\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])) * 100)", "legendFormat": "CPU Usage %", "refId": "A" }, { "expr": "rate(host_cpu_seconds_total{mode=\"system\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m]) * 100", "legendFormat": "System CPU %", "refId": "B" }, { "expr": "rate(host_cpu_seconds_total{mode=\"user\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m]) * 100", "legendFormat": "User CPU %", "refId": "C" } ], "fieldConfig": { "defaults": { "unit": "percent", "max": 100, "min": 0 } }, "gridPos": {"h": 8, "w": 12, "x": 0, "y": 14} }, { "id": 5, "title": "Memory Usage", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "host_memory_total_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Total Memory", "refId": "A" }, { "expr": "host_memory_available_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Available Memory", "refId": "B" }, { "expr": "host_memory_cached_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Cached Memory", "refId": "C" }, { "expr": "host_memory_total_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} - host_memory_available_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Used Memory", "refId": "D" } ], "fieldConfig": { "defaults": { "unit": "bytes", "min": 0 } }, "gridPos": {"h": 8, "w": 12, "x": 12, "y": 14} }, { "id": 6, "title": "Database Activity Rates", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "rate(neon_pg_stats_userdb{kind=\"inserted\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Inserts/sec - {{datname}}", "refId": "A" }, { "expr": "rate(neon_pg_stats_userdb{kind=\"updated\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Updates/sec - {{datname}}", "refId": "B" }, { "expr": "rate(neon_pg_stats_userdb{kind=\"deleted\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Deletes/sec - {{datname}}", "refId": "C" } ], "fieldConfig": { "defaults": { "unit": "rps", "min": 0 } }, "gridPos": {"h": 8, "w": 12, "x": 0, "y": 22} }, { "id": 7, "title": "Cache Performance", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} / (neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} + neon_lfc_misses{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}) * 100", "legendFormat": "Cache Hit Rate %", "refId": "A" }, { "expr": "rate(neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Cache Hits/sec", "refId": "B" }, { "expr": "rate(neon_lfc_misses{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Cache Misses/sec", "refId": "C" } ], "fieldConfig": { "defaults": { "unit": "short", "min": 0 }, "overrides": [ { 
"matcher": {"id": "byName", "options": "Cache Hit Rate %"}, "properties": [{"id": "unit", "value": "percent"}, {"id": "max", "value": 100}] } ] }, "gridPos": {"h": 8, "w": 12, "x": 12, "y": 22} }, { "id": 8, "title": "Replication Status", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "neon_replication_delay_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Replication Delay (Bytes)", "refId": "A" }, { "expr": "neon_replication_delay_seconds{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Replication Delay (Seconds)", "refId": "B" } ], "fieldConfig": { "defaults": { "unit": "short", "min": 0 }, "overrides": [ { "matcher": {"id": "byName", "options": "Replication Delay (Bytes)"}, "properties": [{"id": "unit", "value": "bytes"}] }, { "matcher": {"id": "byName", "options": "Replication Delay (Seconds)"}, "properties": [{"id": "unit", "value": "s"}] } ] }, "gridPos": {"h": 8, "w": 12, "x": 0, "y": 30} }, { "id": 9, "title": "Deadlocks & Errors", "type": "timeseries", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "targets": [ { "expr": "increase(neon_pg_stats_userdb{kind=\"deadlocks\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Deadlocks - {{datname}}", "refId": "A" } ], "fieldConfig": { "defaults": { "unit": "short", "min": 0 } }, "gridPos": {"h": 8, "w": 12, "x": 12, "y": 30} }, { "id": 10, "title": "PostgreSQL Error Logs", "type": "logs", "datasource": { "type": "loki", "uid": "${DS_LOKI}" }, "targets": [ { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} |~ \"(?i)error|fatal|panic\"", "refId": "A" } ], "options": { "showTime": true, "showLabels": true, "showCommonLabels": false, "wrapLogMessage": true, "prettifyLogMessage": false, "enableLogDetails": true, "dedupStrategy": "none", "sortOrder": "Descending" }, "gridPos": {"h": 10, "w": 24, "x": 0, "y": 38} }, { "id": 11, "title": "Connection Events", "type": "logs", "datasource": { "type": "loki", "uid": "${DS_LOKI}" }, "targets": [ { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} |~ \"(?i)connection|connect|disconnect\"", "refId": "A" } ], "options": { "showTime": true, "showLabels": false, "showCommonLabels": false, "wrapLogMessage": true, "enableLogDetails": true, "sortOrder": "Descending" }, "gridPos": {"h": 8, "w": 12, "x": 0, "y": 48} }, { "id": 12, "title": "Query Performance Logs", "type": "logs", "datasource": { "type": "loki", "uid": "${DS_LOKI}" }, "targets": [ { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} |~ \"(?i)slow|duration|statement|query\" | logfmt", "refId": "A" } ], "options": { "showTime": true, "showLabels": false, "showCommonLabels": false, "wrapLogMessage": true, "enableLogDetails": true, "sortOrder": "Descending" }, "gridPos": {"h": 8, "w": 12, "x": 12, "y": 48} }, { "id": 13, "title": "Recent Log Activity", "type": "logs", "datasource": { "type": "loki", "uid": "${DS_LOKI}" }, "targets": [ { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "refId": "A" } ], "options": { "showTime": true, "showLabels": false, "showCommonLabels": false, "wrapLogMessage": true, "enableLogDetails": true, "sortOrder": "Descending" }, "maxDataPoints": 1000, "gridPos": {"h": 10, "w": 24, "x": 0, "y": 56} } ], "templating": { "list": 
[ { "name": "DS_PROMETHEUS", "label": "Prometheus Datasource", "type": "datasource", "query": "prometheus", "hide": 0, "refresh": 1, "current": { "selected": false, "text": "Prometheus", "value": "prometheus" } }, { "name": "DS_LOKI", "label": "Loki Datasource", "type": "datasource", "query": "loki", "hide": 0, "refresh": 1, "current": { "selected": false, "text": "Loki", "value": "loki" } }, { "name": "endpoint_id", "label": "Endpoint ID", "type": "query", "query": { "query": "label_values(neon_connection_counts, endpoint_id)", "refId": "StandardVariableQuery" }, "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "refresh": 2, "multi": true, "includeAll": true, "allValue": ".*", "current": { "selected": false, "text": "All", "value": "$__all" } }, { "name": "project_id", "label": "Project ID", "type": "query", "query": { "query": "label_values(neon_connection_counts, project_id)", "refId": "StandardVariableQuery" }, "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "refresh": 2, "multi": true, "includeAll": true, "allValue": ".*", "current": { "selected": false, "text": "All", "value": "$__all" } }, { "name": "service_name", "label": "Service Name", "type": "query", "query": { "query": "label_values({__name__=~\".+\"}, service_name)", "refId": "StandardVariableQuery" }, "datasource": { "type": "loki", "uid": "${DS_LOKI}" }, "refresh": 2, "multi": false, "includeAll": false, "current": { "selected": false, "text": "", "value": "" } } ] }, "annotations": { "list": [ { "name": "High CPU", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "expr": "100 - (avg(rate(host_cpu_seconds_total{mode=\"idle\"}[5m])) * 100) > 80", "titleFormat": "High CPU Usage", "textFormat": "CPU usage is above 80%", "iconColor": "red" }, { "name": "Low Cache Hit Rate", "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" }, "expr": "neon_lfc_hits / (neon_lfc_hits + neon_lfc_misses) * 100 < 90", "titleFormat": "Low Cache Hit Rate", "textFormat": "Cache hit rate dropped below 90%", "iconColor": "yellow" } ] }, "links": [ { "title": "Neon Console", "url": "https://console.neon.tech", "type": "link", "icon": "external link" }, { "title": "Metrics Reference", "url": "https://neon.com/docs/reference/metrics-logs", "type": "link", "icon": "doc" } ] } ``` ## Available metrics Neon exports a comprehensive set of metrics including connection counts, database size, replication delay, and compute metrics (CPU and memory usage). For a complete list of all available metrics with detailed descriptions, see the [Metrics and logs reference](https://neon.com/docs/reference/metrics-logs). ## Export Postgres logs You can export your Postgres logs from your Neon compute to your Grafana Cloud stack. These logs provide visibility into database activity, errors, and performance. The logs are automatically sent to Grafana Cloud Loki and can be queried using LogQL. 
### Performance impact Enabling this feature may result in: - An increase in compute resource usage for log processing - Additional network egress for log transmission, which is billed after 100 GB on paid plans - Associated costs based on log volume in Grafana Cloud ### Querying logs in Grafana Cloud Once logs are flowing, you can query them in Grafana's Explore view using LogQL: ```logql # View all logs from your Neon service {service_name="your-service-name"} # Filter for errors only {service_name="your-service-name"} |= "ERROR" # View connection events {service_name="your-service-name"} |= "connection" ``` ## Set up alerts Create alerts for key metrics to monitor your database health: 1. **High CPU Usage**: Alert when CPU usage exceeds 80% ```promql rate(host_cpu_seconds_total{mode!="idle"}[5m]) > 0.8 ``` 2. **Low Cache Hit Rate**: Alert when cache hit rate drops below 90% ```promql neon_lfc_hits / (neon_lfc_hits + neon_lfc_misses) < 0.9 ``` 3. **High Connection Count**: Alert when connections exceed your threshold ```promql sum(neon_connection_counts) > 100 ``` ## Feedback and future improvements We're always looking to improve! If you have feature requests or feedback, please let us know via the [Feedback form](https://console.neon.tech/app/projects?modal=feedback) in the Neon Console or on our [Discord channel](https://discord.com/channels/1176467419317940276/1176788564890112042). --- # Source: https://neon.com/llms/guides-grafbase.txt # Use Grafbase Edge Resolvers with Neon > The document explains how to integrate Grafbase Edge Resolvers with Neon, detailing the steps to configure and deploy a serverless GraphQL API that interacts with Neon's PostgreSQL database. ## Source - [Use Grafbase Edge Resolvers with Neon HTML](https://neon.com/docs/guides/grafbase): The original HTML version of this documentation _This guide was contributed by Josep Vidal from Grafbase_ Grafbase allows you to combine your data sources into a centralized GraphQL endpoint and deploy a serverless GraphQL backend. This guide describes how to create a GraphQL API using Grafbase and use Grafbase [Edge Resolvers](https://grafbase.com/docs/edge-gateway/resolvers) with the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) to interact with your Neon database at the edge. The example project in this guide simulates a marketplace of products, where the product price is dynamically calculated based on data retrieved from your Neon database. ## Prerequisites - The [Grafbase CLI](https://grafbase.com/cli) - A Neon project. See [Create a Neon project](https://neon.com/docs/manage/projects#create-a-project). ## Create a backend with Grafbase 1. Create a directory and initialize your Grafbase project by running the following commands: ```bash npx grafbase init grafbase-neon cd grafbase-neon ``` 2. In your project directory, open the `grafbase/schema.graphql` file and replace the existing content with the following schema: ```graphql extend type Mutation { addProductVisit(productId: ID!): ID! @resolver(name: "add-product-visit") } type Product @model { name: String! price: Float @resolver(name: "product/price") } ``` ## Create the schema in Neon 1. Navigate to the Neon Console and select your project. 2. Open the Neon **SQL Editor** and run the following `CREATE TABLE` statement: ```sql CREATE TABLE product_visits(id SERIAL PRIMARY KEY, product_id TEXT NOT NULL); ``` The `product_visits` table stores product page view data that the application uses to dynamically calculate a product price. 
## Create the resolver files The schema includes an `addProductVisit` mutation and a `product/price` field. Create resolvers for those by creating the following files in your project directory: - `grafbase/resolvers/add-product-visit.js` - `grafbase/resolvers/product/price.js` You can use the following commands to create the files:

```bash
cd grafbase
mkdir resolvers
cd resolvers
touch add-product-visit.js
mkdir product
cd product
touch price.js
```

You will add code to these files in a later step. ## Install the Neon serverless driver Inside the `grafbase` directory in your project, run the following commands to install the Neon serverless driver:

```bash
cd ..
npm init -y
npm install @neondatabase/serverless
```

## Retrieve your Neon connection string A database connection string is required to forward queries to your Neon database. You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. 1. Navigate to the Neon **Project Dashboard**. 2. Click **Connect** and copy the connection string for your database. The connection string should appear similar to the following:

```text
postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

3. Add a `DATABASE_URL` environment variable to your `grafbase/.env` file and set the value to your connection string. For example:

```text
DATABASE_URL=postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

## Add code to the resolvers 1. In the `grafbase/resolvers/add-product-visit.js` resolver, add the following code, which inserts a new record in the `product_visits` table with a `productId` each time the resolver is invoked. The query is parameterized rather than string-interpolated to avoid SQL injection.

```javascript
// grafbase/resolvers/add-product-visit.js
import { Client } from '@neondatabase/serverless'

export default async function Resolver(_, { productId }) {
  const client = new Client(process.env.DATABASE_URL)
  await client.connect()
  await client.query('INSERT INTO product_visits (product_id) VALUES ($1)', [productId])
  await client.end()
  return productId
}
```

2. In the `grafbase/resolvers/product/price.js` resolver, add the following code, which calculates the product price based on the number of product visits (the number of visits represents customer interest in the product).

```javascript
// grafbase/resolvers/product/price.js
import { Client } from '@neondatabase/serverless'

export default async function Resolver({ id }) {
  const client = new Client(process.env.DATABASE_URL)
  await client.connect()
  const {
    rows: [{ count }],
  } = await client.query('SELECT COUNT(*) FROM product_visits WHERE product_id = $1', [id])
  await client.end()
  return Number.parseInt(count, 10)
}
```

## Test the resolvers To test the resolvers with Neon, perform the following steps: 1. Start the Grafbase CLI:

```bash
npx grafbase dev
```

2. Go to [http://localhost:4000](http://localhost:4000) and execute the following GraphQL mutation, which creates a new product:

```graphql
mutation {
  productCreate(input: { name: "Super Product" }) {
    product {
      id
      name
    }
  }
}
```

3. Use the product `id` to execute the following mutation, which adds a row to the database table in Neon (if you prefer to script this step, see the sketch after this list):

```graphql
mutation {
  addProductVisit(productId: "PREVIOUS_PRODUCT_ID")
}
```

4. Query the same product, and check the price:

```graphql
query {
  product(input: { by: "PREVIOUS_PRODUCT_ID" }) {
    id
    name
    price
  }
}
```

5. Run the query several more times and watch how the price increases as "interest" in the product increases.
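If you'd rather drive step 3 from a script than from the playground, the following is a minimal sketch (not part of the original guide). It assumes `grafbase dev` serves GraphQL at `http://localhost:4000/graphql` with no auth header required locally; adjust the endpoint (and add headers) to match your setup.

```javascript
// visit.js — hypothetical helper script; requires Node 18+ for built-in fetch.
const ENDPOINT = 'http://localhost:4000/graphql'; // assumed local grafbase dev endpoint

async function addVisit(productId) {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: 'mutation ($id: ID!) { addProductVisit(productId: $id) }',
      variables: { id: productId },
    }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.addProductVisit;
}

// Pass the product id returned by the productCreate mutation as a CLI argument.
addVisit(process.argv[2]).then((id) => console.log(`Recorded visit for product ${id}`));
```

Run it with `node visit.js PREVIOUS_PRODUCT_ID`, then re-run the price query to watch the value change.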
--- # Source: https://neon.com/llms/guides-hasura.txt # Connect from Hasura Cloud to Neon > The document outlines the steps for connecting Hasura Cloud to Neon, detailing the configuration process for establishing a secure and efficient integration between the two platforms. ## Source - [Connect from Hasura Cloud to Neon HTML](https://neon.com/docs/guides/hasura): The original HTML version of this documentation Hasura Cloud is an open source GraphQL engine that provides a scalable, highly available, globally distributed, secure GraphQL API for your data sources. ## Connecting to a new Neon database Use the following instructions to connect to a new Neon database. This connection method authenticates you from Hasura Cloud. 1. Navigate to [Hasura Cloud](https://cloud.hasura.io/projects) and sign up or log in. 2. On the Hasura Cloud dashboard, click **Create a project** to create a new Hasura project. 3. After the project is initialized, click **Launch Console** to open the Hasura Console. 4. On the Hasura Console, select **Data** from the top navigation bar. 5. Click **Postgres** > **Connect Neon Database**. 6. When prompted to log in or sign up for Neon, we recommend selecting **Hasura** for seamless authentication. 7. You will be redirected to an OAuth page to authorize Hasura to access your Neon account. Click **Authorize** to allow Hasura to create a new Neon project and database. After authenticating, a new Neon Postgres database is created and connected to your Hasura project, and the Neon project connection string is associated with the `PG_DATABASE_URL` environment variable. To start exploring Hasura's GraphQL API with data stored in Neon, see [Load a template in Hasura](https://neon.com/docs/guides/hasura#load-a-template-in-hasura-optional). ## Connecting to an existing Neon database Use the following instructions to connect to an existing Neon database from Hasura Cloud. The connection is configured manually using a connection string. ### Prerequisites - An existing Neon account. If you do not have one, see [Sign up](https://neon.com/docs/get-started/signing-up). - An existing Neon project. If you do not have a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). - A connection string for a database in your Neon project:

```text
postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ### Add Neon as a data source The following steps describe how to navigate to Hasura Cloud and connect to your Neon project. 1. Navigate to [Hasura Cloud](https://cloud.hasura.io/projects) and sign up or log in. 2. Click **Create Project** to create a Hasura Cloud project or click **Launch Console** to open an existing project. 3. In the Hasura Console, select **Data** from the top navigation bar. 4. Click **Postgres** > **Connect Existing Database**. 5. Paste your connection string into the **Database URL** field. **Tip**: To enhance security and manageability, consider using environment variables in Hasura instead of hardcoding the connection string. To do this, navigate to **Hasura Project settings** > **Env vars** > **New env var** and create a new variable (e.g., `NEON_DATABASE_URL`) with your connection string as its value.
Then, in the connection tab, select **Connect database via Environment variable** and enter the variable name you created. This approach keeps your connection string secure and simplifies future updates. 6. Enter a display name for your database in the **Database name** field, and click **Connect Database**. Hasura Cloud connects to your Neon project and automatically discovers the default `public` schema. To start exploring Hasura's GraphQL API with data stored in Neon, see [Load a template in Hasura](https://neon.com/docs/guides/hasura#load-a-template-in-hasura-optional). ## Load a template in Hasura (optional) Optionally, after connecting from your Hasura project to Neon, you can explore Hasura's GraphQL API by loading a template from Hasura's template gallery. Follow these steps to load the `Welcome to Hasura` template, which creates `customer` and `order` tables and populates them with sample data. 1. In the Hasura Console, select **Data**. 2. Under **Data Manager**, select your database. 3. From the **Template Gallery**, select **Welcome to Hasura** to install the template. To view the newly created tables from the Neon Console: 1. In the Hasura Console, select **Data** > **Manage your Neon databases** to open the Neon Console. 2. In the Neon Console, select your project. 3. Select the **Tables** tab. The newly created `customer` and `order` tables should appear under the **Tables** heading in the sidebar. ## Import existing data to Neon If you are migrating from Hasura with Heroku Postgres to Neon, refer to the [Import data from Heroku](https://neon.com/docs/import/migrate-from-heroku) guide for data import instructions. For general data import instructions, see [Import data from Postgres](https://neon.com/docs/import/migrate-from-postgres). ## Maximum connections configuration In Neon, the maximum number of concurrent connections is defined according to the size of your compute. For example, a 0.25 vCPU compute in Neon supports 112 connections. The connection limit is higher with larger compute sizes (see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute)). You can also enable connection pooling in Neon to support up to 10,000 concurrent connections. However, it is important to note that Hasura has a `HASURA_GRAPHQL_PG_CONNECTIONS` setting that limits Postgres connections to `50` by default. If you start encountering errors related to "max connections", try increasing the value of this setting as a first step, staying within the connection limit for your Neon compute. For information about the Hasura connection limit setting, refer to the [Hasura Postgres configuration documentation](https://hasura.io/docs/latest/deployment/performance-tuning/#postgres-configuration). ## Scale to zero considerations Neon suspends a compute after five minutes (300 seconds) of inactivity. This behavior can be adjusted on Neon's paid plans. For more information, refer to [Configuring Scale to zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide). If you rely on Neon's scale to zero feature to minimize database usage, note that certain Hasura configuration options can keep your Neon compute in an active state: - [Event triggers](https://hasura.io/docs/latest/event-triggers/overview/) may periodically poll your Neon database for new events. - [Cron triggers](https://hasura.io/docs/latest/scheduled-triggers/create-cron-trigger/) can invoke HTTP endpoints that execute custom business logic involving your Neon database. 
- [Source Health Checks](https://hasura.io/docs/latest/deployment/health-checks/source-health-check/) can keep your Neon compute active if the metadata database resides in Neon. --- # Source: https://neon.com/llms/guides-heroku.txt # Deploy Your Node.js App with Neon Postgres on Heroku > This document guides users through deploying a Node.js application on Heroku using Neon Postgres, detailing steps for setting up the environment, configuring the database, and deploying the app. ## Source - [Deploy Your Node.js App with Neon Postgres on Heroku HTML](https://neon.com/docs/guides/heroku): The original HTML version of this documentation [Heroku](https://heroku.com) is a popular platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud. It simplifies the deployment process, making it a favorite among developers for its ease of use and integration capabilities. This guide walks you through deploying a simple Node.js application connected to a Neon Postgres database on Heroku. ## Prerequisites To follow along with this guide, you will need: - A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples. - A Heroku account. Sign up at [Heroku](https://signup.heroku.com/) to get started. - Git installed on your local machine. Heroku uses Git for version control and deployment. - [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally. ## Setting Up Your Neon Database ### Initialize a New Project 1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section. 2. Click **New Project** to create a new project. 3. In your project dashboard, go to the **SQL Editor** and run the following SQL to create a new table and insert sample data:

```sql
CREATE TABLE music_albums (
    album_id SERIAL PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    artist VARCHAR(255) NOT NULL
);

INSERT INTO music_albums (title, artist)
VALUES
    ('Rumours', 'Fleetwood Mac'),
    ('Abbey Road', 'The Beatles'),
    ('Dark Side of the Moon', 'Pink Floyd'),
    ('Thriller', 'Michael Jackson');
```

### Retrieve your Neon database connection string You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use. ## Implementing the Node.js Application We'll create a simple Express application that connects to our Neon database and retrieves the list of music albums. Run the following commands in your terminal to set it up:

```bash
mkdir neon-heroku-example && cd neon-heroku-example
npm init -y && npm pkg set type="module" && npm pkg set scripts.start="node index.js"
npm install express pg
touch .env
```

We use the `npm pkg set type="module"` command to enable ES6 module support in our project. We'll also create a new `.env` file to store the `DATABASE_URL` environment variable, which we'll use to connect to our Neon database. Lastly, we install the `pg` library, which is the Postgres driver we use to connect to our database.
In the `.env` file, store your Neon database connection string:

```bash
# .env
DATABASE_URL=NEON_DATABASE_CONNECTION_STRING
```

Now, create a new file named `index.js` and add the following code:

```javascript
import express from 'express';
import pkg from 'pg';

const app = express();
const port = process.env.PORT || 3000;

// Parse JSON bodies for this app
app.use(express.json());

// Create a new pool using your Neon database connection string
const { Pool } = pkg;
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.get('/', async (req, res) => {
  try {
    // Fetch the list of music albums from your database using the postgres connection
    const { rows } = await pool.query('SELECT * FROM music_albums;');
    res.json(rows);
  } catch (error) {
    console.error('Failed to fetch albums', error);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});

// Start the server
app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
```

This code sets up an Express server that listens for requests on port 3000. When a request is made to the root path (`/`), the server queries the `music_albums` table in your Neon database and returns the results as JSON. We can test this application locally by running:

```bash
node --env-file=.env index.js
```

Now, navigate to `http://localhost:3000/` in your browser to check that it returns the sample data from the `music_albums` table. ## Deploying to Heroku ### Create a New Heroku App We will use the `Heroku CLI` to deploy our application to Heroku manually. You can install it on your machine by following the instructions [here](https://devcenter.heroku.com/articles/heroku-cli). Once installed, log in to your Heroku account using:

```bash
❯ heroku login
 ›   Warning: Our terms of service have changed:
 ›   https://dashboard.heroku.com/terms-of-service
heroku: Press any key to open up the browser to login or q to exit:
Opening browser to https://cli-auth.heroku.com/auth/cli/browser/...
```

You will be prompted to log in to your Heroku account in the browser. After logging in, you can close the browser and return to your terminal. Before creating the Heroku application, we need to initialize a new Git repository in our project folder:

```bash
git init && echo "node_modules" > .gitignore && echo ".env" >> .gitignore
git branch -M main
git add . && git commit -m "Initial commit"
```

Next, we can create a new app on Heroku using the following command. This creates a new Heroku app with the name `neon-heroku-example`, and sets up a new Git remote for the app called `heroku`.

```bash
heroku create neon-heroku-example
```

You'll also need to set the `DATABASE_URL` on Heroku to your Neon database connection string:

```bash
heroku config:set DATABASE_URL='NEON_DATABASE_CONNECTION_STRING' -a neon-heroku-example
```

### Deploy Your Application To deploy your application to Heroku, use the following command to push your code to the `heroku` remote. Heroku will automatically detect that your application is a Node.js application, install the necessary dependencies, and deploy it.

```bash
> git push heroku main
.
.
.
remote: -----> Launching...
remote:        Released v4
remote:        https://neon-heroku-example-fda03f6acbbe.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploy... done.
remote: 2024/02/21 07:26:49 Rollbar error: empty token
To https://git.heroku.com/neon-heroku-example.git
```

Once the deployment is complete, you should see a message with the URL of your deployed application.
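**Tip**: If the deploy completes but the app fails to start or respond, you can tail the dyno logs with `heroku logs --tail -a neon-heroku-example`. A database connection error at startup usually means the `DATABASE_URL` config var is missing or malformed.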
Navigate to this URL in your browser to see your application live on Heroku. You've now successfully deployed a Node.js application on Heroku that connects to a Neon Postgres database. For further customization and scaling options, you can explore the Heroku and Neon documentation. ## Removing Your Application and Neon Project To remove your application from Heroku, select the app from your [Heroku dashboard](https://dashboard.heroku.com/apps). Navigate to the `Settings` tab and scroll down to the end to find the "Delete App" option. To delete your Neon project, follow the steps outlined in the Neon documentation under [Delete a project](https://neon.com/docs/manage/projects#delete-a-project). ## Source code You can find the source code for the application described in this guide on GitHub. - [Use Neon with Heroku](https://github.com/neondatabase/examples/tree/main/deploy-with-heroku): Deploying a Node application with a Neon Postgres database on Heroku ## Resources - [Heroku Documentation](https://devcenter.heroku.com/) - [Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli) - [Neon](https://neon.com/docs) - [Import data from Heroku Postgres to Neon](https://neon.com/docs/import/migrate-from-heroku) --- # Source: https://neon.com/llms/guides-hono.txt # Connect a Hono application to Neon > This document guides users on connecting a Hono application to a Neon database, detailing the necessary configuration steps and code adjustments required for seamless integration. ## Source - [Connect a Hono application to Neon HTML](https://neon.com/docs/guides/hono): The original HTML version of this documentation [Hono](https://hono.dev/) is a lightweight, multi-runtime web framework for the Edge, Node.js, Deno, Bun, and other runtimes. This topic describes how to create a Neon project and access it from a Hono application. ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Create a Hono project and add dependencies 1. Create a Hono project if you do not have one. For instructions, see [Quick Start](https://hono.dev/docs/getting-started/basic) in the Hono documentation. 2. Add project dependencies using one of the following commands: Tab: node-postgres

```shell
npm install pg
```

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials Add a `.env` file to your project directory and add your Neon connection string to it. You can find your connection details by clicking **Connect** on the Neon **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Configure the Postgres client In your Hono application (e.g., in `src/index.ts` or a specific route file), import the driver and use it within your route handlers.
Here's how you can set up a simple route to query the database: Tab: node-postgres

```typescript
import { Pool } from 'pg';
import { Hono } from 'hono';
import { serve } from '@hono/node-server';

const app = new Hono();
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: true,
});

app.get('/', async (c) => {
  const client = await pool.connect();
  try {
    const { rows } = await client.query('SELECT version()');
    return c.json({ version: rows[0].version });
  } catch (error) {
    console.error('Database query failed:', error);
    return c.text('Failed to connect to database', 500);
  } finally {
    client.release();
  }
});

serve(app);
```

Tab: postgres.js

```typescript
import { Hono } from 'hono';
import postgres from 'postgres';
import { serve } from '@hono/node-server';

const app = new Hono();
// Create the client once at module scope so its connection pool is reused
// across requests, rather than opening a new pool on every request.
const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });

app.get('/', async (c) => {
  try {
    const response = await sql`SELECT version()`;
    return c.json({ version: response[0].version });
  } catch (error) {
    console.error('Database query failed:', error);
    return c.text('Failed to connect to database', 500);
  }
});

serve(app);
```

Tab: Neon serverless driver

```typescript
import { Hono } from 'hono';
import { serve } from '@hono/node-server';
import { neon } from '@neondatabase/serverless';

const app = new Hono();

app.get('/', async (c) => {
  try {
    const sql = neon(process.env.DATABASE_URL);
    const response = await sql`SELECT version()`;
    return c.json({ version: response[0]?.version });
  } catch (error) {
    console.error('Database query failed:', error);
    return c.text('Failed to connect to database', 500);
  }
});

serve(app);
```

## Run the app Start your Hono development server. You can use the following command:

```bash
npm run dev
```

Navigate to your application's URL ([localhost:3000](http://localhost:3000)). You should see a JSON response with the PostgreSQL version:

```json
{
  "version": "PostgreSQL 17.4 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit"
}
```

> The specific version may vary depending on the PostgreSQL version you are using. ## Source code You can find a sample Hono application configured for Neon on GitHub: - [Get started with Hono and Neon](https://github.com/neondatabase/examples/tree/main/with-hono) --- # Source: https://neon.com/llms/guides-imagekit.txt # Media storage with ImageKit.io > The document outlines how to integrate ImageKit.io with Neon for efficient media storage, detailing configuration steps and usage instructions specific to Neon's platform. ## Source - [Media storage with ImageKit.io HTML](https://neon.com/docs/guides/imagekit): The original HTML version of this documentation [ImageKit.io](https://imagekit.io/) is a cloud-based image and video optimization and delivery platform. It provides real-time manipulation, storage, and delivery via a global CDN, simplifying media management for web and mobile applications. This guide demonstrates how to integrate ImageKit.io with Neon. You'll learn how to upload files directly from the client-side to ImageKit.io using securely generated authentication parameters from your backend, and then store the resulting file metadata (like the ImageKit File ID and URL) in your Neon database. ## Setup steps ## Create a Neon project 1. Navigate to [pg.new](https://pg.new) to create a new Neon project. 2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**.
For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ## Create an ImageKit.io account and get credentials 1. Sign up for a free or paid account at [ImageKit.io](https://imagekit.io/registration). 2. Once logged in, navigate to the **Developer options** section in the dashboard sidebar. 3. Under **API Keys**, note your **Public Key**, **Private Key**, and **URL Endpoint**. These are essential for interacting with the ImageKit API and SDKs. ## Create a table in Neon for file metadata We need a table in Neon to store metadata about the files uploaded to ImageKit.io. This allows your application to reference the media stored in ImageKit. 1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a client like [psql](https://neon.com/docs/connect/query-with-psql-editor). Create a table to store relevant details: ```sql CREATE TABLE IF NOT EXISTS imagekit_files ( id SERIAL PRIMARY KEY, file_id TEXT NOT NULL UNIQUE, -- ImageKit.io unique File ID file_url TEXT NOT NULL, -- ImageKit CDN URL for the file user_id TEXT NOT NULL, -- User associated with the file upload_timestamp TIMESTAMPTZ DEFAULT NOW() ); ``` 2. Run the SQL statement. You can customize this table by adding or removing columns (like `width`, `height`, `tags`, etc.) based on the information you need from ImageKit and your application's requirements. **Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), remember to apply appropriate access policies to the `imagekit_files` table. This controls who can view or modify the object references stored in Neon based on your RLS rules. Note that these policies apply _only_ to the metadata in Neon. Access control for the actual files on ImageKit is managed via ImageKit features (like private files or signed URLs, if needed). The default setup makes files publicly accessible via their URL. ## Upload files to ImageKit.io and store metadata in Neon The recommended approach for client-side uploads is to generate secure **authentication parameters** on your backend. The client (e.g., a web browser) uses these parameters, along with your public API key, to upload the file directly to ImageKit's Upload API. After a successful upload, the client sends the returned metadata (like `fileId` and `url`) back to your backend to be saved in Neon. This requires two backend endpoints: 1. `/generate-auth-params`: Generates temporary authentication parameters (`token`, `expire`, `signature`). 2. `/save-metadata`: Receives file metadata from the client after a successful upload to ImageKit and saves it to the Neon database. Tab: JavaScript We'll use [Hono](https://hono.dev/) for the server, [`imagekit `](https://www.npmjs.com/package/imagekit) for ImageKit interaction, and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) for Neon. 
First, install the necessary dependencies: ```bash npm install imagekit @neondatabase/serverless @hono/node-server hono dotenv ``` Create a `.env` file with your credentials: ```env # ImageKit.io Credentials IMAGEKIT_PUBLIC_KEY=your_imagekit_public_key IMAGEKIT_PRIVATE_KEY=your_imagekit_private_key IMAGEKIT_URL_ENDPOINT=your_imagekit_url_endpoint # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```javascript import { serve } from '@hono/node-server'; import { Hono } from 'hono'; import ImageKit from 'imagekit'; import { neon } from '@neondatabase/serverless'; import 'dotenv/config'; const imagekit = new ImageKit({ publicKey: process.env.IMAGEKIT_PUBLIC_KEY, privateKey: process.env.IMAGEKIT_PRIVATE_KEY, urlEndpoint: process.env.IMAGEKIT_URL_ENDPOINT, }); const sql = neon(process.env.DATABASE_URL); const app = new Hono(); // Replace this with your actual user authentication logic const authMiddleware = async (c, next) => { // Example: Validate JWT, session, etc. and set user ID c.set('userId', 'user_123'); // Static ID for demonstration await next(); }; // 1. Generate authentication parameters for client-side upload app.get('/generate-auth-params', authMiddleware, (c) => { try { const authParams = imagekit.getAuthenticationParameters(); // These params (token, expire, signature) are sent to the client // The client uses these + public key to upload directly to ImageKit return c.json({ success: true, ...authParams }); } catch (error) { console.error('Auth Param Generation Error:', error); return c.json({ success: false, error: 'Failed to generate auth params' }, 500); } }); // 2. Save metadata after client confirms successful upload to ImageKit app.post('/save-metadata', authMiddleware, async (c) => { try { const userId = c.get('userId'); // Client sends metadata received from ImageKit after upload const { fileId, url } = await c.req.json(); if (!fileId || !url) { throw new Error('fileId and url are required from ImageKit response'); } // Insert metadata into Neon database await sql` INSERT INTO imagekit_files (file_id, file_url, user_id) VALUES (${fileId}, ${url}, ${userId}) `; console.log(`Metadata saved for ImageKit file: ${fileId}`); return c.json({ success: true }); } catch (error) { console.error('Metadata Save Error:', error.message); return c.json({ success: false, error: 'Failed to save metadata' }, 500); } }); const port = 3000; serve({ fetch: app.fetch, port }, (info) => { console.log(`Server running at http://localhost:${info.port}`); }); ``` **Explanation** 1. **Setup:** Initializes the Neon database client (`sql`), the Hono web framework (`app`), and the ImageKit Node.js SDK (`imagekit`) using credentials from environment variables. 2. **Authentication:** Includes a placeholder `authMiddleware`. **Replace this with your actual user authentication logic** to ensure only authenticated users can generate upload parameters and save metadata. 3. **API endpoints:** - **`/generate-auth-params` (GET):** Uses the ImageKit SDK's `getAuthenticationParameters()` method to create a short-lived `token`, `expire` timestamp, and `signature`. These are returned to the client. - **`/save-metadata` (POST):** This endpoint is called by the client _after_ it has successfully uploaded a file directly to ImageKit's Upload API. The client sends the relevant metadata returned by ImageKit (like `fileId`, `url`, `thumbnailUrl`, etc.). 
The endpoint then inserts this metadata, along with the authenticated `userId`, into the `imagekit_files` table in Neon. Tab: Python We'll use [Flask](https://flask.palletsprojects.com/en/stable/), [`imagekitio`](https://pypi.org/project/imagekitio/) (ImageKit Python SDK), and [`psycopg2`](https://pypi.org/project/psycopg2/). First, install the necessary dependencies: ```bash pip install Flask imagekitio psycopg2-binary python-dotenv ``` Create a `.env` file with your credentials: ```env # ImageKit.io Credentials IMAGEKIT_PUBLIC_KEY=your_imagekit_public_key IMAGEKIT_PRIVATE_KEY=your_imagekit_private_key IMAGEKIT_URL_ENDPOINT=your_imagekit_url_endpoint # e.g., https://ik.imagekit.io/your_instance_id # Neon Connection String DATABASE_URL=your_neon_database_connection_string ``` The following code snippet demonstrates this workflow: ```python import os import psycopg2 from dotenv import load_dotenv from flask import Flask, jsonify, request from imagekitio.client import ImageKit load_dotenv() imagekit = ImageKit( public_key=os.getenv("IMAGEKIT_PUBLIC_KEY"), private_key=os.getenv("IMAGEKIT_PRIVATE_KEY"), url_endpoint=os.getenv("IMAGEKIT_URL_ENDPOINT"), ) app = Flask(__name__) # Use a global PostgreSQL connection pool in production instead of connecting per request def get_db_connection(): return psycopg2.connect(os.getenv("DATABASE_URL")) # Replace this with your actual user authentication logic def get_authenticated_user_id(request): # Example: Validate Authorization header, session cookie, etc. return "user_123" # Static ID for demonstration # 1. Generate authentication parameters for client-side upload @app.route("/generate-auth-params", methods=["GET"]) def generate_auth_params_route(): try: user_id = get_authenticated_user_id(request) if not user_id: return jsonify({"success": False, "error": "Unauthorized"}), 401 # Generate token, expire timestamp, and signature auth_params = imagekit.get_authentication_parameters() return jsonify( { "success": True, "token": auth_params["token"], "expire": auth_params["expire"], "signature": auth_params["signature"] } ), 200 except Exception as e: print(f"Auth Param Generation Error: {e}") return ( jsonify({"success": False, "error": "Failed to generate auth params"}), 500, ) # 2. Save metadata after client confirms successful upload to ImageKit @app.route("/save-metadata", methods=["POST"]) def save_metadata_route(): conn = None cursor = None try: user_id = get_authenticated_user_id(request) if not user_id: return jsonify({"success": False, "error": "Unauthorized"}), 401 data = request.get_json() file_id = data.get("fileId") url = data.get("url") if not file_id or not url: raise ValueError("fileId and url are required from ImageKit response") # Insert metadata into Neon database conn = get_db_connection() cursor = conn.cursor() cursor.execute( """ INSERT INTO imagekit_files (file_id, file_url, user_id) VALUES (%s, %s, %s) """, (file_id, url, user_id), ) conn.commit() print(f"Metadata saved for ImageKit file: {file_id}") return jsonify({"success": True}), 201 except (psycopg2.Error, ValueError) as e: print(f"Metadata Save Error: {e}") return ( jsonify({"success": False, "error": "Failed to save metadata"}), 500, ) except Exception as e: print(f"Unexpected Metadata Save Error: {e}") return jsonify({"success": False, "error": "Server error"}), 500 finally: if cursor: cursor.close() if conn: conn.close() if __name__ == "__main__": app.run(port=3000, debug=True) ``` **Explanation** 1. 
**Setup:** Initializes the Flask web framework (`app`), the PostgreSQL client function (`get_db_connection`), and the ImageKit Python SDK (`imagekit`) using environment variables. 2. **Authentication:** Includes a placeholder `get_authenticated_user_id` function. **Replace this with your actual user authentication logic.** 3. **API endpoints:** - **`/generate-auth-params` (GET):** Uses the ImageKit SDK's `get_authentication_parameters()` method to create `token`, `expire`, and `signature`. These are returned to the client, usually as JSON. - **`/save-metadata` (POST):** Called by the client _after_ it has successfully uploaded a file directly to ImageKit. The client provides the metadata returned by ImageKit. The backend validates the required fields and inserts the data along with the `userId` into the `imagekit_files` table in Neon using `psycopg2`. 4. **Database Connection:** The example shows creating a new connection per request. In production, use a global connection pool for better performance. ## Testing the upload workflow This workflow involves getting authentication parameters from your backend, using those parameters to upload the file directly to ImageKit via `curl`, and then notifying your backend to save the metadata. 1. **Get authentication parameters:** Send a `GET` request to your backend's `/generate-auth-params` endpoint.

```bash
curl -X GET http://localhost:3000/generate-auth-params
```

**Expected response:** A JSON object containing the necessary parameters. For example:

```json
{
  "success": true,
  "token": "20xxxx-xxxx-xxxx-a350-a463b3dd544e",
  "expire": 1745435716,
  "signature": "ffxxxxxx5f19b6a22e2bd6bd90ae8a7db21"
}
```

2. **Upload file directly to ImageKit:** Use the parameters obtained in Step 1, your **ImageKit Public Key**, and the file path to send a `POST` request with `multipart/form-data` directly to the ImageKit Upload API. Replace the placeholders with your own values.

```bash
curl -X POST https://upload.imagekit.io/api/v1/files/upload \
 -F "file=@/path/to/your/test-image.png" \
 -F "publicKey=<your_imagekit_public_key>" \
 -F "token=<token_from_step_1>" \
 -F "expire=<expire_from_step_1>" \
 -F "signature=<signature_from_step_1>" \
 -F "fileName=test-image.png" \
 -F "useUniqueFileName=true"
```

**Expected response (from ImageKit):** A successful upload returns a JSON object with details about the uploaded file. Note the `fileId`, `url`, etc. (placeholders below stand in for your actual values):

```json
{
  "fileId": "<file_id>",
  "name": "<file_name>",
  "size": "<file_size>",
  "versionInfo": { "id": "<version_id>", "name": "Version 1" },
  "filePath": "<file_path>",
  "url": "https://ik.imagekit.io/<imagekit_id>/<file_path>",
  "fileType": "image",
  "height": "<height>",
  "width": "<width>",
  "thumbnailUrl": "https://ik.imagekit.io/<imagekit_id>/tr:n-ik_ml_thumbnail/<file_path>",
  "AITags": null
}
```

3. **Save metadata:** Send a `POST` request to your backend's `/save-metadata` endpoint, providing the key details (like `fileId`, `url`) received from ImageKit in Step 2.

```bash
curl -X POST http://localhost:3000/save-metadata \
 -H "Content-Type: application/json" \
 -d '{ "fileId": "<file_id_from_step_2>", "url": "<url_from_step_2>" }'
```

**Expected response (from your backend):**

```json
{ "success": true }
```

**Expected outcome:** - The file is successfully uploaded to your ImageKit Media Library. - You can verify a new row corresponding to the uploaded file exists in your `imagekit_files` table in Neon. ## Accessing file metadata and files With metadata stored in Neon, your application can easily retrieve references to the media hosted on ImageKit.io. Query the `imagekit_files` table from your application's backend whenever you need to display or link to uploaded files.
**Example SQL query:** Retrieve files associated with a specific user:

```sql
SELECT
  id,               -- Your database primary key
  file_id,          -- ImageKit File ID
  file_url,         -- Base ImageKit CDN URL for the file
  user_id,
  upload_timestamp
FROM imagekit_files
WHERE user_id = 'user_123'; -- Use the actual authenticated user ID
```

**Using the data:** - The query returns rows containing the file metadata stored in Neon. - The `file_url` is the direct link to the file on ImageKit's CDN. You can use this directly in `<img>` tags, video players, or links. - **ImageKit transformations:** A key benefit of ImageKit is real-time manipulation. You can append transformation parameters directly to the `file_url` to resize, crop, format, or optimize the media on-the-fly. For example, `file_url + '?tr=w-300,h-200'` would resize an image to 300x200 pixels. See the [ImageKit transformation docs](https://imagekit.io/docs/image-transformation) for more possibilities. This pattern separates media storage, optimization, and delivery (handled by ImageKit.io) from structured metadata management (handled by Neon). ## Resources - [ImageKit.io documentation](https://imagekit.io/docs) - [ImageKit.io Upload API](https://imagekit.io/docs/api-reference/upload-file/upload-file) - [Neon RLS](https://neon.com/docs/guides/neon-rls) --- # Source: https://neon.com/llms/guides-integrations.txt # Neon integration guides > The "Neon integration guides" document details the procedures for integrating Neon with various third-party services and tools, facilitating seamless connectivity and functionality within the Neon ecosystem. ## Source - [Neon integration guides HTML](https://neon.com/docs/guides/integrations): The original HTML version of this documentation ## Monitor - [Datadog](https://neon.com/docs/guides/datadog): Send metrics and events from Neon Postgres to Datadog - [Grafana Cloud](https://neon.com/docs/guides/grafana-cloud): Send metrics and logs from Neon Postgres to Grafana Cloud - [OpenTelemetry](https://neon.com/docs/guides/opentelemetry): Send metrics and events from Neon to any OpenTelemetry compatible backend ## Deploy - [Vercel](https://neon.com/docs/guides/vercel-overview): Learn how to integrate Neon with Vercel - [Cloudflare Pages](https://neon.com/docs/guides/cloudflare-pages): Use Neon with Cloudflare Pages - [Cloudflare Workers](https://neon.com/docs/guides/cloudflare-workers): Use Neon with Cloudflare Workers - [Deno Deploy](https://neon.com/docs/guides/deno): Use Neon with Deno Deploy - [Heroku](https://neon.com/docs/guides/heroku): Deploy Your App with Neon Postgres on Heroku - [Koyeb](https://neon.com/docs/guides/koyeb): Use Neon with Koyeb - [Netlify Functions](https://neon.com/docs/guides/netlify-functions): Connect a Neon Postgres database to your Netlify Functions application - [Railway](https://neon.com/docs/guides/railway): Use Neon Postgres with Railway - [Render](https://neon.com/docs/guides/render): Use Neon Postgres with Render ## Serverless - [Neon](https://neon.com/docs/serverless/serverless-driver): Connect with the Neon serverless driver - [AWS Lambda](https://neon.com/docs/guides/aws-lambda): Connect from AWS Lambda to Neon - [Azure Functions](https://neon.com/guides/query-postgres-azure-functions): Connect from Azure Functions to Neon ## Query - [Exograph](https://neon.com/docs/guides/exograph): Use Exograph with Neon - [PostgREST](https://neon.com/docs/guides/postgrest): Create a REST API from your Neon database - [Grafbase](https://neon.com/docs/guides/grafbase): Use Grafbase Edge Resolvers with Neon -
[Hasura](https://neon.com/docs/guides/hasura): Connect from Hasura Cloud to Neon - [Cloudflare Hyperdrive](https://neon.com/docs/guides/cloudflare-hyperdrive): Use Neon with Cloudflare Hyperdrive - [Ask Your Database](https://neon.com/docs/guides/askyourdatabase): Chat with your Neon Postgres database with AskYourDatabase - [StepZen](https://neon.com/docs/guides/stepzen): Use StepZen with Neon - [Wundergraph](https://neon.com/docs/guides/wundergraph): Use Wundergraph with Neon - [Outerbase](https://neon.com/docs/guides/outerbase): Connect Outerbase to Neon ## Develop - [GitHub integration](https://neon.com/docs/guides/neon-github-app): Use the Neon GitHub integration - [Prisma](https://neon.com/docs/guides/prisma): Connect from Prisma to Neon - [TypeORM](https://neon.com/docs/guides/typeorm): Connect from TypeORM to Neon - [Knex](https://neon.com/docs/guides/knex): Connect from Knex to Neon - [Convex](https://neon.com/guides/convex-neon): Integrate Convex with Neon Postgres ## Replicate data from Neon - [Airbyte](https://neon.com/docs/guides/logical-replication-airbyte): Replicate data from Neon with Airbyte - [Bemi](https://neon.com/docs/guides/bemi): Create an automatic audit trail with Bemi - [ClickHouse](https://docs.peerdb.io/mirror/cdc-neon-clickhouse): Change Data Capture from Neon to ClickHouse with PeerDB (PeerDB docs) - [Confluent (Kafka)](https://neon.com/docs/guides/logical-replication-kafka-confluent): Replicate data from Neon with Confluent (Kafka) - [Decodable](https://neon.com/docs/guides/logical-replication-decodable): Replicate data from Neon with Decodable - [Estuary Flow](https://neon.com/docs/guides/logical-replication-estuary-flow): Replicate data from Neon with Estuary Flow - [Fivetran](https://neon.com/docs/guides/logical-replication-fivetran): Replicate data from Neon with Fivetran - [Materialize](https://neon.com/docs/guides/logical-replication-materialize): Replicate data from Neon to Materialize - [Neon to Neon](https://neon.com/docs/guides/logical-replication-neon-to-neon): Replicate data from Neon to Neon - [Neon to PostgreSQL](https://neon.com/docs/guides/logical-replication-postgres): Replicate data from Neon to PostgreSQL - [Prisma Pulse](https://neon.com/docs/guides/logical-replication-prisma-pulse): Stream database changes in real-time with Prisma Pulse - [Sequin](https://neon.com/docs/guides/sequin): Stream changes and rows from your database to anywhere with Sequin - [Snowflake](https://neon.com/docs/guides/logical-replication-airbyte-snowflake): Replicate data from Neon to Snowflake with Airbyte - [Inngest](https://neon.com/docs/guides/logical-replication-inngest): Replicate data from Neon to Inngest ## Replicate data to Neon - [AlloyDB](https://neon.com/docs/guides/logical-replication-alloydb): Replicate data from AlloyDB to Neon - [Cloud SQL](https://neon.com/docs/guides/logical-replication-cloud-sql): Replicate data from Cloud SQL to Neon - [Neon to Neon](https://neon.com/docs/guides/logical-replication-neon-to-neon): Replicate data from Neon to Neon - [PostgreSQL to Neon](https://neon.com/docs/guides/logical-replication-postgres-to-neon): Replicate data from PostgreSQL to Neon - [RDS](https://neon.com/docs/guides/logical-replication-rds-to-neon): Replicate data from AWS RDS PostgreSQL to Neon ## Schema Migration - [Django](https://neon.com/docs/guides/django-migrations): Connect a Django application to Neon - [Drizzle](https://neon.com/docs/guides/drizzle-migrations): Schema migration with Neon Postgres and Drizzle ORM - [Entity 
Framework](https://neon.com/docs/guides/entity-migrations): Schema migration with Neon and Entity Framework - [Flyway](https://neon.com/docs/guides/flyway): Use Flyway with Neon - [Laravel](https://neon.com/docs/guides/laravel): Connect from Laravel to Neon - [Liquibase](https://neon.com/docs/guides/liquibase): Use Liquibase with Neon - [Prisma](https://neon.com/docs/guides/prisma-migrations): Schema migration with Neon Postgres and Prisma ORM - [Rails](https://neon.com/docs/guides/rails-migrations): Connect a Rails application to Neon - [Sequelize](https://neon.com/docs/guides/sequelize): Schema migration with Neon Postgres and Sequelize - [SQLAlchemy](https://neon.com/docs/guides/sqlalchemy): Connect an SQLAlchemy application to Neon ## Authenticate - [Auth0](https://neon.com/docs/guides/auth-auth0): Authenticate Neon Postgres application users with Auth0 - [Auth.js](https://neon.com/docs/guides/auth-authjs): Authenticate Neon Postgres application users with Auth.js - [Clerk](https://neon.com/docs/guides/auth-clerk): Authenticate Neon Postgres application users with Clerk - [Okta](https://neon.com/docs/guides/auth-okta): Authenticate Neon Postgres application users with Okta --- # Source: https://neon.com/llms/guides-java.txt # Connect a Java application to Neon Postgres > This document guides users on connecting a Java application to a Neon database by detailing the necessary configurations and steps for establishing a successful connection. ## Source - [Connect a Java application to Neon Postgres HTML](https://neon.com/docs/guides/java): The original HTML version of this documentation This guide describes how to create a Neon project and connect to it from a Java application using **Java Database Connectivity (JDBC)**, the standard API for interacting with relational databases in Java. You will learn how to set up a project, connect to your database, and perform basic create, read, update, and delete (CRUD) operations. ## Prerequisites - A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup). - [Java Development Kit (JDK) 17](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html) or later. - [Apache Maven](https://maven.apache.org/install.html) to manage project dependencies. ## Create a Neon project If you do not have one already, create a Neon project. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech). 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your Java application. ## Create a Java project Create a project using the Maven `archetype:generate` command. This sets up a standard Java project structure. 1. Run the following command in your terminal to generate a new Maven project. This command creates a simple Java project with the `maven-archetype-quickstart` archetype. ```bash mvn archetype:generate \ -DarchetypeGroupId=org.apache.maven.archetypes \ -DarchetypeArtifactId=maven-archetype-quickstart \ -DarchetypeVersion=1.5 \ -DgroupId=com.neon.quickstart \ -DartifactId=neon-java-jdbc \ -DinteractiveMode=false ``` 2. Change into the newly created project directory. ```bash cd neon-java-jdbc ``` > Open this directory in your preferred code editor (e.g., VS Code, IntelliJ IDEA). 3. Add the `postgresql` driver and `dotenv-java` libraries as dependencies in your `pom.xml` file. 
There may be other dependencies already present (e.g., `junit`), so ensure you add these within the `<dependencies>` section.

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
</dependency>
<dependency>
    <groupId>io.github.cdimascio</groupId>
    <artifactId>dotenv-java</artifactId>
    <version>3.2.0</version>
</dependency>
```

**Note**: Make sure to add these to the `<dependencies>` section. A common mistake is adding them to `<dependencyManagement>`, which only declares a version but doesn't actually include the library in your build. Save the file. 4. Compile the project to download the dependencies.

```bash
mvn clean compile
```

This command compiles your Java code and downloads the required dependencies specified in `pom.xml`. ## Store your Neon connection string Create a file named `.env` in your project's root directory. This file will securely store your database connection string. 1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**. 2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal. 3. Select **Java** as your programming language. 4. Copy the connection string, which includes your password. 5. Create a file named `.env` in your project's root directory and add the connection string to it as shown below:

```text
DATABASE_URL="jdbc:postgresql://[neon_hostname]/[dbname]?user=[user]&password=[password]&sslmode=require&channelBinding=require"
```

> Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your actual database credentials. ## Examples This section provides code examples for performing CRUD operations. The examples should be placed inside `src/main/java/com/neon/quickstart/`. ### Create a table and insert data Create a file named `CreateTable.java`. This class connects to your database, creates a table, and inserts data.

```java
package com.neon.quickstart;

import io.github.cdimascio.dotenv.Dotenv;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class CreateTable {
    public static void main(String[] args) {
        Dotenv dotenv = Dotenv.load();
        String connString = dotenv.get("DATABASE_URL");
        try (Connection conn = DriverManager.getConnection(connString)) {
            System.out.println("Connection established");
            try (Statement stmt = conn.createStatement()) {
                // Drop the table if it already exists
                stmt.execute("DROP TABLE IF EXISTS books;");
                System.out.println("Finished dropping table (if it existed).");
                // Create a new table
                stmt.execute("""
                    CREATE TABLE books (
                        id SERIAL PRIMARY KEY,
                        title VARCHAR(255) NOT NULL,
                        author VARCHAR(255),
                        publication_year INT,
                        in_stock BOOLEAN DEFAULT TRUE
                    );
                    """);
                System.out.println("Finished creating table.");
                // Insert a single book record
                String insertOneSql = "INSERT INTO books (title, author, publication_year, in_stock) VALUES (?, ?, ?, ?);";
                try (PreparedStatement pstmt = conn.prepareStatement(insertOneSql)) {
                    pstmt.setString(1, "The Catcher in the Rye");
                    pstmt.setString(2, "J.D. Salinger");
                    pstmt.setInt(3, 1951);
                    pstmt.setBoolean(4, true);
                    pstmt.executeUpdate();
                    System.out.println("Inserted a single book.");
                }
                // Insert multiple books
                String insertManySql = "INSERT INTO books (title, author, publication_year, in_stock) VALUES (?, ?, ?, ?);";
                try (PreparedStatement pstmt = conn.prepareStatement(insertManySql)) {
                    Object[][] booksToInsert = {
Tolkien", 1937, true}, {"1984", "George Orwell", 1949, true}, {"Dune", "Frank Herbert", 1965, false} }; for (Object[] book : booksToInsert) { pstmt.setString(1, (String) book[0]); pstmt.setString(2, (String) book[1]); pstmt.setInt(3, (Integer) book[2]); pstmt.setBoolean(4, (Boolean) book[3]); pstmt.addBatch(); } pstmt.executeBatch(); System.out.println("Inserted 3 rows of data."); } } } catch (Exception e) { System.out.println("Connection failed."); e.printStackTrace(); } } } ``` The above code does the following: - Connects to the Neon database using the connection string from the `.env` file. - Drops the `books` table if it already exists. - Creates a new `books` table with columns for `id`, `title`, `author`, `publication_year`, and `in_stock`. - Inserts a single book record. - Inserts multiple book records in a batch operation. Run the code to create the table and insert the data using the following command: ```bash mvn exec:java -Dexec.mainClass="com.neon.quickstart.CreateTable" ``` When the code runs successfully, it produces the following output: ```text Connection established Finished dropping table (if it existed). Finished creating table. Inserted a single book. Inserted 3 rows of data. ``` ### Read data Create a file named `ReadData.java`. This class fetches all rows from the `books` table and prints them. ```java package com.neon.quickstart; import io.github.cdimascio.dotenv.Dotenv; import java.sql.*; public class ReadData { public static void main(String[] args) { Dotenv dotenv = Dotenv.load(); String connString = dotenv.get("DATABASE_URL"); try (Connection conn = DriverManager.getConnection(connString); Statement stmt = conn.createStatement()) { System.out.println("Connection established"); String sql = "SELECT * FROM books ORDER BY publication_year;"; try (ResultSet rs = stmt.executeQuery(sql)) { System.out.println("\n--- Book Library ---"); while (rs.next()) { System.out.printf("ID: %d, Title: %s, Author: %s, Year: %d, In Stock: %b%n", rs.getInt("id"), rs.getString("title"), rs.getString("author"), rs.getInt("publication_year"), rs.getBoolean("in_stock")); } System.out.println("--------------------\n"); } } catch (Exception e) { System.out.println("Connection failed."); e.printStackTrace(); } } } ``` The above code does the following: - Connects to the Neon database using the connection string from the `.env` file. - Executes a SQL query to select all rows from the `books` table, ordered by `publication_year`. - Iterates through the result set and prints each book's details. Run the code to read the data using the following command: ```bash mvn exec:java -Dexec.mainClass="com.neon.quickstart.ReadData" ``` When the read logic runs, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: false -------------------- ``` ### Update data Create a file named `UpdateData.java` to update the stock status of 'Dune' to `True`. ```java package com.neon.quickstart; import io.github.cdimascio.dotenv.Dotenv; import java.sql.*; public class UpdateData { public static void main(String[] args) { Dotenv dotenv = Dotenv.load(); String connString = dotenv.get("DATABASE_URL"); String sql = "UPDATE books SET in_stock = ? 
WHERE title = ?;"; try (Connection conn = DriverManager.getConnection(connString); PreparedStatement pstmt = conn.prepareStatement(sql)) { System.out.println("Connection established"); pstmt.setBoolean(1, true); pstmt.setString(2, "Dune"); int rowsAffected = pstmt.executeUpdate(); if (rowsAffected > 0) { System.out.println("Updated stock status for 'Dune'."); } } catch (Exception e) { System.out.println("Connection failed."); e.printStackTrace(); } } } ``` The above code does the following: - Connects to the Neon database. - Prepares an SQL `UPDATE` statement to set the `in_stock` status of the book 'Dune' to `true`. - Executes the update and prints a confirmation message if successful. Run the code to update the data using the following command: ```bash mvn exec:java -Dexec.mainClass="com.neon.quickstart.UpdateData" ``` After running the update, verify the change by running the `ReadData` class again. ```bash mvn exec:java -Dexec.mainClass="com.neon.quickstart.ReadData" ``` The updated output will be: ```text --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true -------------------- ``` > You can see that the stock status for 'Dune' has been updated to `true`. ### Delete data Create a file named `DeleteData.java` to delete the book '1984' from the table. ```java package com.neon.quickstart; import io.github.cdimascio.dotenv.Dotenv; import java.sql.*; public class DeleteData { public static void main(String[] args) { Dotenv dotenv = Dotenv.load(); String connString = dotenv.get("DATABASE_URL"); String sql = "DELETE FROM books WHERE title = ?;"; try (Connection conn = DriverManager.getConnection(connString); PreparedStatement pstmt = conn.prepareStatement(sql)) { System.out.println("Connection established"); pstmt.setString(1, "1984"); int rowsAffected = pstmt.executeUpdate(); if (rowsAffected > 0) { System.out.println("Deleted the book '1984' from the table."); } } catch (Exception e) { System.out.println("Connection failed."); e.printStackTrace(); } } } ``` The above code does the following: - Connects to the Neon database. - Prepares an SQL `DELETE` statement to remove the book '1984'. - Executes the delete and prints a confirmation message if successful. Run the code to delete the data using the following command: ```bash mvn exec:java -Dexec.mainClass="com.neon.quickstart.DeleteData" ``` After running the delete, verify the change by running the `ReadData` class again. ```bash mvn exec:java -Dexec.mainClass="com.neon.quickstart.ReadData" ``` The final output will be: ```text --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true -------------------- ``` > You can see that the book '1984' has been successfully deleted from the `books` table. ## Next steps: Using an ORM or framework While this guide demonstrates how to connect to Neon using raw SQL queries, for more advanced and maintainable data interactions in your Java applications, consider using an Object-Relational Mapping (ORM) framework. 
ORMs not only let you work with data as objects but also help manage schema changes through automated migrations, keeping your database structure in sync with your application models.

Explore the following resources to learn how to integrate ORMs with Neon:

- [Database Schema Changes with Hibernate, Spring Boot, and Neon](https://neon.com/guides/spring-boot-hibernate)

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Java and Neon using JDBC](https://github.com/neondatabase/examples/tree/main/with-java-jdbc): Get started with Java and Neon using standard JDBC.

## Resources

- [PostgreSQL JDBC Driver Documentation](https://jdbc.postgresql.org/documentation/use/)
- [Apache Maven Quickstart](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html)
- [Database Migrations in Spring Boot with Flyway and Neon](https://neon.com/guides/spring-boot-flyway)

---

# Source: https://neon.com/llms/guides-javascript.txt

# Connect a JavaScript/Node.js application to Neon Postgres

> This document guides users on connecting a JavaScript application to Neon by detailing the necessary steps and configurations for establishing a database connection using Node.js and relevant libraries.

## Source

- [Connect a JavaScript/Node.js application to Neon Postgres HTML](https://neon.com/docs/guides/javascript): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from a Node.js application using popular Postgres clients:

- **[node-postgres (pg)](https://www.npmjs.com/package/pg)**: The most widely-used and robust driver for Node.js.
- **[Postgres.js](https://www.npmjs.com/package/postgres)**: A modern, high-performance driver with a focus on a great developer experience.
- **[@neondatabase/serverless](https://www.npmjs.com/package/@neondatabase/serverless)**: The Neon serverless driver, which connects over HTTP and is optimized for serverless and edge environments.

You'll learn how to connect to your Neon database from a JavaScript application and perform basic Create, Read, Update, and Delete (CRUD) operations.

**Important** Connect from the server side only: Your database connection string contains sensitive credentials and must **never** be exposed in client-side JavaScript code (e.g., in a browser). All database operations should be handled in a secure, server-side environment like a Node.js backend or a serverless function.

## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup).
- [Node.js](https://nodejs.org/) v18 or later.

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech).
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your JavaScript application.

## Create a Node.js project

For your Node.js project, create a project directory, initialize it with `npm`, and install the required libraries.

1. Create a project directory and change into it.

```bash
mkdir neon-nodejs-quickstart
cd neon-nodejs-quickstart
```

> Open the directory in your preferred code editor (e.g., VS Code).

2. Initialize a new Node.js project. The `-y` flag accepts all the default settings.

```bash
npm init -y
```

3.
Install the required libraries using `npm`. Tab: node-postgres (pg) ```bash npm install pg dotenv ``` Tab: Neon serverless driver ```bash npm install @neondatabase/serverless dotenv ``` Tab: postgres.js ```bash npm install postgres dotenv ``` 4. Open your `package.json` file and add the following line into it: ```json { // other properties "type": "module" } ``` This allows you to use ES module syntax (`import`) in your JavaScript files. ## Store your Neon connection string Create a file named `.env` in your project's root directory. This file will securely store your database connection string, keeping your credentials separate from your source code. 1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**. 2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal. 3. Select **Node.js** from the connection string dropdown and copy the full connection string. 4. Add the connection string to your `.env` file as shown below. ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require" ``` > Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your actual database credentials. ## Examples This section provides example JavaScript scripts that demonstrate how to connect to your Neon database and perform basic operations such as [creating a table](https://neon.com/docs/guides/javascript#create-a-table-and-insert-data), [reading data](https://neon.com/docs/guides/javascript#read-data), [updating data](https://neon.com/docs/guides/javascript#update-data), and [deleting data](https://neon.com/docs/guides/javascript#deleting-data). ### Create a table and insert data In your project directory, create a file named `create_table.js` and add the code for your preferred library. This script connects to your Neon database, creates a table named `books`, and inserts some sample data into it. Tab: node-postgres (pg) ```javascript import 'dotenv/config'; import { Pool } from 'pg'; const pool = new Pool({ connectionString: process.env.DATABASE_URL, ssl: { require: true, }, }); async function setup() { const client = await pool.connect(); try { console.log('Connection established'); // Drop the table if it already exists await client.query('DROP TABLE IF EXISTS books;'); console.log('Finished dropping table (if it existed).'); // Create a new table await client.query(` CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); `); console.log('Finished creating table.'); // Insert a single book record await client.query( 'INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);', ['The Catcher in the Rye', 'J.D. Salinger', 1951, true] ); console.log('Inserted a single book.'); // Data to be inserted const booksToInsert = [ { title: 'The Hobbit', author: 'J.R.R. 
Tolkien', year: 1937, in_stock: true }, { title: '1984', author: 'George Orwell', year: 1949, in_stock: true }, { title: 'Dune', author: 'Frank Herbert', year: 1965, in_stock: false }, ]; // Insert multiple books for (const book of booksToInsert) { await client.query( 'INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);', [book.title, book.author, book.year, book.in_stock] ); } console.log('Inserted 3 rows of data.'); } catch (err) { console.error('Connection failed.', err.stack); } finally { client.release(); pool.end(); } } setup(); ``` Tab: Neon serverless driver ```javascript import 'dotenv/config'; import { neon } from '@neondatabase/serverless'; const sql = neon(process.env.DATABASE_URL); async function setup() { try { console.log('Connection established'); // Drop the table if it already exists await sql`DROP TABLE IF EXISTS books;`; console.log('Finished dropping table (if it existed).'); // Create a new table await sql` CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); `; console.log('Finished creating table.'); // Insert a single book record await sql` INSERT INTO books (title, author, publication_year, in_stock) VALUES ('The Catcher in the Rye', 'J.D. Salinger', 1951, true); `; console.log('Inserted a single book.'); // Data to be inserted const booksToInsert = [ { title: 'The Hobbit', author: 'J.R.R. Tolkien', publication_year: 1937, in_stock: true }, { title: '1984', author: 'George Orwell', publication_year: 1949, in_stock: true }, { title: 'Dune', author: 'Frank Herbert', publication_year: 1965, in_stock: false }, ]; // Insert multiple books await sql` INSERT INTO books (title, author, publication_year, in_stock) VALUES (${booksToInsert[0].title}, ${booksToInsert[0].author}, ${booksToInsert[0].publication_year}, ${booksToInsert[0].in_stock}), (${booksToInsert[1].title}, ${booksToInsert[1].author}, ${booksToInsert[1].publication_year}, ${booksToInsert[1].in_stock}), (${booksToInsert[2].title}, ${booksToInsert[2].author}, ${booksToInsert[2].publication_year}, ${booksToInsert[2].in_stock}); `; console.log('Inserted 3 rows of data.'); } catch (err) { console.error('Connection failed.', err); } } setup(); ``` Tab: postgres.js ```javascript import 'dotenv/config'; import postgres from 'postgres'; const sql = postgres(process.env.DATABASE_URL, { ssl: 'require', }); async function setup() { try { console.log('Connection established'); // Drop the table if it already exists await sql`DROP TABLE IF EXISTS books;`; console.log('Finished dropping table (if it existed).'); // Create a new table await sql` CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); `; console.log('Finished creating table.'); // Insert a single book record await sql` INSERT INTO books (title, author, publication_year, in_stock) VALUES ('The Catcher in the Rye', 'J.D. Salinger', 1951, true); `; console.log('Inserted a single book.'); // Data to be inserted const booksToInsert = [ { title: 'The Hobbit', author: 'J.R.R. 
Tolkien', publication_year: 1937, in_stock: true }, { title: '1984', author: 'George Orwell', publication_year: 1949, in_stock: true }, { title: 'Dune', author: 'Frank Herbert', publication_year: 1965, in_stock: false }, ]; // Insert multiple books await sql` INSERT INTO books ${sql(booksToInsert, 'title', 'author', 'publication_year', 'in_stock')} `; console.log('Inserted 3 rows of data.'); } catch (err) { console.error('Connection failed.', err); } finally { sql.end(); } } setup(); ``` The above code does the following: - Loads the connection string from the `.env` file. - Connects to the Neon database. - Drops the `books` table if it already exists to ensure a clean slate. - Creates a table named `books` with columns for `id`, `title`, `author`, `publication_year`, and `in_stock`. - Inserts a single book record and then multiple book records. Run the script using the command for your runtime: ```bash node create_table.js ``` When the code runs successfully, it produces the following output: ```text Connection established Finished dropping table (if it existed). Finished creating table. Inserted a single book. Inserted 3 rows of data. ``` ### Read data In your project directory, create a file named `read_data.js`. This script connects to your Neon database and retrieves all rows from the `books` table. Tab: node-postgres (pg) ```javascript import 'dotenv/config'; import { Pool } from 'pg'; const pool = new Pool({ connectionString: process.env.DATABASE_URL, ssl: { require: true, }, }); async function readData() { const client = await pool.connect(); try { console.log('Connection established'); // Fetch all rows from the books table const { rows } = await client.query('SELECT * FROM books ORDER BY publication_year;'); console.log('\n--- Book Library ---'); rows.forEach((row) => { console.log( `ID: ${row.id}, Title: ${row.title}, Author: ${row.author}, Year: ${row.publication_year}, In Stock: ${row.in_stock}` ); }); console.log('--------------------\n'); } catch (err) { console.error('Connection failed.', err.stack); } finally { client.release(); pool.end(); } } readData(); ``` Tab: Neon serverless driver ```javascript import 'dotenv/config'; import { neon } from '@neondatabase/serverless'; const sql = neon(process.env.DATABASE_URL); async function readData() { try { console.log('Connection established'); // Fetch all rows from the books table const books = await sql`SELECT * FROM books ORDER BY publication_year;`; console.log('\n--- Book Library ---'); books.forEach((book) => { console.log( `ID: ${book.id}, Title: ${book.title}, Author: ${book.author}, Year: ${book.publication_year}, In Stock: ${book.in_stock}` ); }); console.log('--------------------\n'); } catch (err) { console.error('Connection failed.', err); } } readData(); ``` Tab: postgres.js ```javascript import 'dotenv/config'; import postgres from 'postgres'; const sql = postgres(process.env.DATABASE_URL, { ssl: 'require', }); async function readData() { try { console.log('Connection established'); // Fetch all rows from the books table const books = await sql`SELECT * FROM books ORDER BY publication_year;`; console.log('\n--- Book Library ---'); books.forEach((book) => { console.log( `ID: ${book.id}, Title: ${book.title}, Author: ${book.author}, Year: ${book.publication_year}, In Stock: ${book.in_stock}` ); }); console.log('--------------------\n'); } catch (err) { console.error('Connection failed.', err); } finally { sql.end(); } } readData(); ``` Run the script using the command for your runtime: ```bash node read_data.js ``` 
When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: false -------------------- ``` ### Update data In your project directory, create a file named `update_data.js`. This script connects to your Neon database and updates the stock status of the book 'Dune' to `true`. Tab: node-postgres (pg) ```javascript import 'dotenv/config'; import { Pool } from 'pg'; const pool = new Pool({ connectionString: process.env.DATABASE_URL, ssl: { require: true, }, }); async function updateData() { const client = await pool.connect(); try { console.log('Connection established'); // Update a data row in the table await client.query('UPDATE books SET in_stock = $1 WHERE title = $2;', [true, 'Dune']); console.log("Updated stock status for 'Dune'."); } catch (err) { console.error('Connection failed.', err.stack); } finally { client.release(); pool.end(); } } updateData(); ``` Tab: Neon serverless driver ```javascript import 'dotenv/config'; import { neon } from '@neondatabase/serverless'; const sql = neon(process.env.DATABASE_URL); async function updateData() { try { console.log('Connection established'); // Update a data row in the table await sql`UPDATE books SET in_stock = ${true} WHERE title = ${'Dune'}`; console.log("Updated stock status for 'Dune'."); } catch (err) { console.error('Connection failed.', err); } } updateData(); ``` Tab: postgres.js ```javascript import 'dotenv/config'; import postgres from 'postgres'; const sql = postgres(process.env.DATABASE_URL, { ssl: 'require', }); async function updateData() { try { console.log('Connection established'); // Update a data row in the table await sql`UPDATE books SET in_stock = ${true} WHERE title = ${'Dune'}`; console.log("Updated stock status for 'Dune'."); } catch (err) { console.error('Connection failed.', err); } finally { sql.end(); } } updateData(); ``` Run the script using the command for your runtime: ```bash node update_data.js ``` After running this script, you can run `read_data.js` again to verify the change. ```bash node read_data.js ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true -------------------- ``` > You can see that the stock status for 'Dune' has been updated to `true`. ### Delete data In your project directory, create a file named `delete_data.js`. This script connects to your Neon database and deletes the book '1984' from the `books` table. 
Tab: node-postgres (pg) ```javascript import 'dotenv/config'; import { Pool } from 'pg'; const pool = new Pool({ connectionString: process.env.DATABASE_URL, ssl: { require: true, }, }); async function deleteData() { const client = await pool.connect(); try { console.log('Connection established'); // Delete a data row from the table await client.query('DELETE FROM books WHERE title = $1;', ['1984']); console.log("Deleted the book '1984' from the table."); } catch (err) { console.error('Connection failed.', err.stack); } finally { client.release(); pool.end(); } } deleteData(); ``` Tab: Neon serverless driver ```javascript import 'dotenv/config'; import { neon } from '@neondatabase/serverless'; const sql = neon(process.env.DATABASE_URL); async function deleteData() { try { console.log('Connection established'); // Delete a data row from the table await sql`DELETE FROM books WHERE title = ${'1984'}`; console.log("Deleted the book '1984' from the table."); } catch (err) { console.error('Connection failed.', err); } } deleteData(); ``` Tab: postgres.js ```javascript import 'dotenv/config'; import postgres from 'postgres'; const sql = postgres(process.env.DATABASE_URL, { ssl: 'require', }); async function deleteData() { try { console.log('Connection established'); // Delete a data row from the table await sql`DELETE FROM books WHERE title = ${'1984'}`; console.log("Deleted the book '1984' from the table."); } catch (err) { console.error('Connection failed.', err); } finally { sql.end(); } } deleteData(); ``` Run the script using the command for your runtime: ```bash node delete_data.js ``` After running this script, you can run `read_data.js` again to verify that the row has been deleted. ```bash node read_data.js ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true -------------------- ``` > You can see that the book '1984' has been successfully deleted from the `books` table. ## Next steps: Using an ORM or framework While this guide demonstrates raw SQL queries, for more advanced and maintainable data interactions, consider using an Object-Relational Mapper (ORM) or query builder. ORMs let you work with your data as objects and manage schema changes through migrations, keeping your database structure in sync with your application models. Explore these guides to integrate popular data tools with Neon: - [Connect with Prisma](https://neon.com/docs/guides/prisma) - [Connect with Drizzle ORM](https://neon.com/docs/guides/drizzle) - [Connect with TypeORM](https://neon.com/docs/guides/typeorm) - [Connect with Sequelize](https://neon.com/docs/guides/sequelize) ## Using Bun or Deno If you are using Bun or Deno, you can also connect to Neon databases using the Neon serverless driver or other Postgres clients. Follow these guides for more information: - [Connect with Bun](https://neon.com/docs/guides/bun) - [Connect with Deno](https://neon.com/docs/guides/deno) ## Source code You can find the source code for the applications described in this guide on GitHub. 
- [Get started with node-postgres (pg)](https://github.com/neondatabase/examples/tree/main/with-node-postgres): Get started with Node.js and Neon using node-postgres (pg)
- [Get started with the Neon Serverless Driver](https://github.com/neondatabase/examples/tree/main/with-neon-serverless): Get started with Node.js and the Neon Serverless Driver
- [Get started with postgres.js](https://github.com/neondatabase/examples/tree/main/with-node-postgres-js): Get started with Node.js and Neon using postgres.js

## Resources

- [Neon Serverless Driver Documentation](https://github.com/neondatabase/serverless)
- [node-postgres (pg) documentation](https://node-postgres.com/)
- [Postgres.js documentation](https://github.com/porsager/postgres)

---

# Source: https://neon.com/llms/guides-knex.txt

# Connect from Knex to Neon

> The document outlines the steps required to establish a connection between Knex.js, a SQL query builder for Node.js, and Neon, detailing configuration settings and connection parameters specific to Neon's database environment.

## Source

- [Connect from Knex to Neon HTML](https://neon.com/docs/guides/knex): The original HTML version of this documentation

Knex is an open-source SQL query builder for Postgres. This guide covers the following topics:

- [Connect to Neon from Knex](https://neon.com/docs/guides/knex#connect-to-neon-from-knex)
- [Use connection pooling with Knex](https://neon.com/docs/guides/knex#use-connection-pooling-with-knex)
- [Performance tips](https://neon.com/docs/guides/knex#performance-tips)

## Connect to Neon from Knex

To establish a basic connection from Knex to Neon, perform the following steps:

1. Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select a branch, a user, and the database you want to connect to. A connection string is constructed for you. The connection string includes the user name, password, hostname, and database name.

2. Update the Knex initialization in your application to the following:

```typescript {2-5}
export const client = knex({
  client: 'pg',
  connection: {
    connectionString: process.env.DATABASE_URL,
  },
});
```

3. Add a `DATABASE_URL` variable to your `.env` file and set it to the Neon connection string that you copied in the previous step. We also recommend adding `?sslmode=require&channel_binding=require` to the end of the connection string to ensure a [secure connection](https://neon.com/docs/connect/connect-securely).

Your setting will appear similar to the following:

```text
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Use connection pooling with Knex

Serverless functions can require a large number of database connections as demand increases. If you use serverless functions in your application, we recommend that you use a pooled Neon connection string, as shown:

```ini
# Pooled Neon connection string
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

A pooled Neon connection string adds `-pooler` to the endpoint ID, which tells Neon to use a pooled connection. You can add `-pooler` to your connection string manually or copy a pooled connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Enable the **Connection pooling** toggle to add the `-pooler` suffix.
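As a minimal sketch, assuming your `DATABASE_URL` environment variable is set to the pooled connection string above, the Knex initialization itself stays the same (only the hostname changes); you may also want to cap the client-side pool, since PgBouncer handles connection fan-out on the Neon side:

```typescript
import knex from 'knex';

// Assumes DATABASE_URL holds the pooled (-pooler) Neon connection string.
export const client = knex({
  client: 'pg',
  connection: {
    connectionString: process.env.DATABASE_URL,
  },
  // Keep the local pool small; the -pooler endpoint multiplexes many
  // client connections through PgBouncer on the Neon side.
  pool: { min: 0, max: 5 },
});
```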
## Performance tips

This section outlines performance optimizations you can try when using Knex with Neon.

### Enabling NODE_PG_FORCE_NATIVE

Knex leverages a [node-postgres](https://node-postgres.com) Pool instance to connect to your Postgres database. Installing [pg-native](https://npmjs.com/package/pg-native) and setting the `NODE_PG_FORCE_NATIVE` environment variable to `true` [switches the `pg` driver to `pg-native`](https://github.com/brianc/node-postgres/blob/master/packages/pg/lib/index.js#L31-L34), which can produce noticeably faster response times according to some users.

### Replacing query parameters

You may be able to achieve better performance with Knex by replacing any parameters you've defined in your queries, as performed by the following function, for example:

```tsx
// Function to replace query parameters in a query
function replaceQueryParams(query, values) {
  let replacedQuery = query;
  values.forEach((tmpParameter) => {
    if (typeof tmpParameter === 'string') {
      replacedQuery = replacedQuery.replace('?', `'${tmpParameter}'`);
    } else {
      replacedQuery = replacedQuery.replace('?', tmpParameter);
    }
  });
  return replacedQuery;
}

// So instead of this
await client.raw(text, values);

// Do this to get better performance
await client.raw(replaceQueryParams(text, values));
```

You can try this optimization yourself by downloading our [Get started with Knex example](https://neon.com/docs/guides/knex#examples) and running `npm run test`.

## Examples

- [Get started with Knex and Neon](https://github.com/neondatabase/examples/tree/main/with-knex)

---

# Source: https://neon.com/llms/guides-koyeb.txt

# Use Neon with Koyeb

> The document outlines the steps for integrating Neon with Koyeb, detailing how to deploy a Neon database and connect it to applications hosted on the Koyeb platform.

## Source

- [Use Neon with Koyeb HTML](https://neon.com/docs/guides/koyeb): The original HTML version of this documentation

[Koyeb](https://www.koyeb.com/) is a developer-friendly, serverless platform designed to easily deploy reliable and scalable applications globally. Koyeb offers native autoscaling, automatic HTTPS (SSL), auto-healing, and global load-balancing across its edge network with zero configuration.

This guide describes how to connect a Neon Postgres database to an application deployed with Koyeb. To follow the instructions in this guide, you need:

- A [Koyeb account](https://app.koyeb.com/) to deploy the application. Alternatively, you can install the [Koyeb CLI](https://www.koyeb.com/docs/quickstart/koyeb-cli) if you prefer to deploy the application from your terminal.
- A Neon account to deploy the Postgres database. If you do not have one, see [Sign up](https://neon.com/docs/get-started/signing-up).

The example application connects to your Neon Postgres database using [Prisma](https://www.prisma.io/) as an ORM. Prisma synchronizes the database schema with the Prisma schema included with the application and seeds the database.

## Create a Neon project

1. Navigate to the [Neon Console](https://console.neon.tech/).
1. Select **Create a project**.
1. Enter a name for the project (`neon-koyeb`, for example), and select a Postgres version and region.
1. Click **Create project**.

A dialog pops up with your Neon connection string, which appears similar to the following:

```bash
postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

Store this value in a safe place. It is required later.

The connection string specifies `neondb` as the database.
This is the database created with your Neon project if you did not specify a different database name. You will use this database with the example application.

## Deploy the application on Koyeb

You can deploy on Koyeb using the control panel or the Koyeb CLI.

### From the Koyeb control panel

To deploy the application from the Koyeb [control panel](https://app.koyeb.com/), follow these steps:

1. Navigate to the `Apps` tab and select **Create App**.
1. Select GitHub as the deployment method.
1. When asked to select the repository to deploy, enter `https://github.com/koyeb/example-express-prisma` in the **Public GitHub repository** field.
1. Keep `example-express-prisma` as the name and `main` as the branch.
1. In **Build and deployment settings**, enable the **Override** setting and add the following **Build command**: `npm run postgres:init`
1. Select the region closest to your Neon database.
1. Under **Advanced** > **Environment variables**, add a `DATABASE_URL` environment variable to enable the application to connect to your Neon Postgres database. Set the value to the Neon connection string provided to you when you created the Neon project.
1. Enter a name for your app. For example, `express-neon`.
1. Click **Deploy**.

Koyeb builds the application. After the build and deployment have finished, you can access your application running on Koyeb by clicking the URL ending with `.koyeb.app`.

The example application exposes a `/planets` endpoint that you can use to list planets from the database. After your deployment is live, you should see the following results when navigating to `https://<your-app>.koyeb.app/planets`:

```json
[
  { "id": 1, "name": "Mercury" },
  { "id": 2, "name": "Venus" },
  { "id": 3, "name": "Mars" }
]
```

### From the Koyeb CLI

You can also deploy your application using the Koyeb CLI. To install it, follow the instructions in the [Koyeb CLI documentation](https://www.koyeb.com/docs/quickstart/koyeb-cli).

Using the CLI requires an API access token, which you can generate in the Koyeb [control panel](https://app.koyeb.com/), under **Organization Settings** > **API**. Once generated, run the command `koyeb login` and enter the token when prompted.

To deploy the example application, run the following command in your terminal. Make sure to replace `<YOUR_NEON_CONNECTION_STRING>` with your Neon connection string.

```bash
koyeb apps init express-neon \
  --instance-type free \
  --git github.com/koyeb/example-express-prisma \
  --git-branch main \
  --git-build-command "npm run postgres:init" \
  --ports 8080:http \
  --routes /:8080 \
  --env PORT=8080 \
  --env DATABASE_URL="<YOUR_NEON_CONNECTION_STRING>"
```

#### Access Koyeb deployment logs

To track the app deployment and visualize build logs, execute the following command:

```bash
koyeb service logs express-neon/express-neon -t build
```

#### Access your app

After the build and deployment have finished, you can retrieve the public domain to access your application by running the following command:

```bash
$ koyeb app get express-neon
ID        NAME          STATUS   DOMAINS                            CREATED AT
b8611a1d  express-neon  HEALTHY  ["express-neon-myorg.koyeb.app"]   16 Feb 23 18:13 UTC
```

The example application exposes a `/planets` endpoint that you can use to list planets from the database.
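For example, you can query the endpoint with `curl`, substituting your own app's domain for the illustrative one returned by `koyeb app get` above:

```bash
curl https://express-neon-myorg.koyeb.app/planets
```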
After your deployment is live, you should see the following results when navigating to `https://<your-app>.koyeb.app/planets`:

```json
[
  { "id": 1, "name": "Mercury" },
  { "id": 2, "name": "Venus" },
  { "id": 3, "name": "Mars" }
]
```

## Delete the example application and Neon project

To delete the example application on Koyeb to avoid incurring any charges, follow these steps:

1. From the Koyeb [control panel](https://app.koyeb.com/), select the **App** to delete.
1. On the **Settings** tab, select **Danger Zone** and click **Delete**.

To delete your Neon project, refer to [Delete a project](https://neon.com/docs/manage/projects#delete-a-project).

---

# Source: https://neon.com/llms/guides-laravel-migrations.txt

# Schema migration with Neon Postgres and Laravel

> The document guides users on performing schema migrations using Neon Postgres with Laravel, detailing the setup and execution of database migrations within the Laravel framework in the context of Neon's cloud-based PostgreSQL service.

## Source

- [Schema migration with Neon Postgres and Laravel HTML](https://neon.com/docs/guides/laravel-migrations): The original HTML version of this documentation

[Laravel](https://laravel.com/) is a popular PHP web application framework that provides an expressive and elegant syntax for building web applications. It includes an ORM (Object-Relational Mapping) called Eloquent, which allows you to interact with databases using a fluent API. Laravel also provides a powerful migration system to manage database schema changes over time.

This guide demonstrates how to use Laravel with the Neon Postgres database. We'll create a simple Laravel application and walk through the process of setting up the database, defining models, and generating and running migrations to manage schema changes.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- [PHP](https://www.php.net/) installed on your local machine. This guide uses PHP 8.1, but you can use any recent version compatible with Laravel.
- [Composer](https://getcomposer.org/) installed on your local machine for managing PHP dependencies.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select a project or click the **New Project** button to create a new one.

### Retrieve your Neon database connection string

Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

**Important** Always use a direct connection for migrations: Neon supports both direct and pooled database connection strings, which you can find by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can be prone to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations, as illustrated below.
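The two connection string forms differ only by the `-pooler` suffix on the endpoint ID. A sketch using the illustrative hostname from this guide (use the first form for `php artisan migrate`):

```bash
# Direct connection (use this for schema migrations)
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"

# Pooled connection via PgBouncer (suited to regular application traffic)
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```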
For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

Keep your connection string handy for later use.

## Setting up the Laravel project

### Create a new Laravel project

Open your terminal and navigate to the directory where you want to create your Laravel project. Run the following command to create a new Laravel project:

```bash
composer create-project --prefer-dist laravel/laravel guide-neon-laravel
```

This command creates a new Laravel project named `guide-neon-laravel` in the current directory.

### Set up the database configuration

Open the `.env` file in the project root directory and update the following database connection variables:

```bash
DB_CONNECTION=pgsql
DB_PORT=5432
DATABASE_URL=NEON_POSTGRES_CONNECTION_STRING
```

Replace `NEON_POSTGRES_CONNECTION_STRING` with the connection string you retrieved from the Neon Console earlier. The `DB_CONNECTION` should be set to `pgsql` to indicate that we are using a Postgres database.

## Defining data models and running migrations

### Specify the data model

Data models are defined using the Eloquent ORM in Laravel. Our application is a simple catalog of authors and books, where each author can have multiple books. We'll create two models, `Author` and `Book`, to represent the data.

Create a new file `Author.php` in the `app/Models` directory with the following code:

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Author extends Model
{
    protected $fillable = ['name', 'bio'];

    public function books()
    {
        return $this->hasMany(Book::class);
    }
}
```

Create another file `Book.php` in the `app/Models` directory with the following code:

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Book extends Model
{
    protected $fillable = ['title', 'author_id'];

    public function author()
    {
        return $this->belongsTo(Author::class);
    }
}
```

The `Author` model represents an author with fields for name and bio. The `Book` model represents a book with fields for title and author (as a foreign key to the `Author` model). Laravel automatically creates an `id` field for each model as the primary key and manages the `created_at` and `updated_at` timestamps.

### Generate migration files

To generate migration files for creating the `authors` and `books` tables, run the following commands in the terminal:

```bash
php artisan make:migration create_authors_table
php artisan make:migration create_books_table
```

These commands generate empty migration files in the `database/migrations` directory. Unlike frameworks such as Django, Laravel does not generate the schema automatically based on the model definitions. Instead, you define the schema in the migration files.

Open the `create_authors_table` migration file and update the `up` method to define the table schema:

```php
public function up()
{
    Schema::create('authors', function (Blueprint $table) {
        $table->id();
        $table->string('name');
        $table->text('bio')->nullable();
        $table->timestamps();
    });
}
```

Similarly, open the `create_books_table` migration file and update the `up` method:

```php
public function up()
{
    Schema::create('books', function (Blueprint $table) {
        $table->id();
        $table->string('title');
        $table->unsignedBigInteger('author_id');
        $table->timestamps();

        $table->foreign('author_id')->references('id')->on('authors')->onDelete('cascade');
    });
}
```

### Apply the migration

To apply the migration and create the corresponding tables in the Neon Postgres database, run the following command:

```bash
php artisan migrate
```

This command executes the migration files and creates the `authors` and `books` tables in the database.

### Seed the database

To populate the database with some initial data, we use Laravel's database seeding feature.
Open the file `DatabaseSeeder.php` in the `database/seeders` directory and replace its contents with the following code:

```php
<?php

namespace Database\Seeders;

use App\Models\Author;
use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    public function run(): void
    {
        $authors = [
            [
                'name' => 'J.R.R. Tolkien',
                'bio' => 'The creator of Middle-earth and author of The Lord of the Rings.',
                'books' => [
                    ['title' => 'The Fellowship of the Ring'],
                    ['title' => 'The Two Towers'],
                    ['title' => 'The Return of the King'],
                ],
            ],
            [
                'name' => 'George R.R. Martin',
                'bio' => 'The author of the epic fantasy series A Song of Ice and Fire.',
                'books' => [
                    ['title' => 'A Game of Thrones'],
                    ['title' => 'A Clash of Kings'],
                    ['title' => 'A Storm of Swords'],
                ],
            ],
            [
                'name' => 'J.K. Rowling',
                'bio' => 'The creator of the Harry Potter series.',
                'books' => [
                    ['title' => 'Harry Potter and the Philosopher\'s Stone'],
                    ['title' => 'Harry Potter and the Chamber of Secrets'],
                ],
            ],
        ];

        foreach ($authors as $authorData) {
            $author = Author::create([
                'name' => $authorData['name'],
                'bio' => $authorData['bio'],
            ]);

            foreach ($authorData['books'] as $bookData) {
                $author->books()->create($bookData);
            }
        }
    }
}
```

This seeder creates three authors and associates them with their corresponding books. To run this script and populate the database, run the following command in the terminal:

```bash
php artisan db:seed
```

## Implement the application

### Create routes and controllers

We'll create two routes and corresponding controllers to display the authors and books in our application.

Open the `routes/web.php` file and add the following routes:

```php
...
use App\Http\Controllers\AuthorController;
use App\Http\Controllers\BookController;
...

Route::get('/authors', [AuthorController::class, 'index'])->name('authors.index');
Route::get('/books/{author}', [BookController::class, 'index'])->name('books.index');
```

We define two routes: `/authors` to list all authors and `/books/{author}` to list books by a specific author.

Now, create a new file `AuthorController.php` in the `app/Http/Controllers` directory with the following code:

```php
<?php

namespace App\Http\Controllers;

use App\Models\Author;

class AuthorController extends Controller
{
    public function index()
    {
        $authors = Author::all();

        return response()->json($authors);
    }
}
```

Similarly, create another file `BookController.php` in the `app/Http/Controllers` directory with the following code:

```php
<?php

namespace App\Http\Controllers;

use App\Models\Author;

class BookController extends Controller
{
    public function index(Author $author)
    {
        $books = $author->books;

        return response()->json($books);
    }
}
```

These controllers define the `index` action to retrieve all authors and books by a specific author, respectively. The data is returned as JSON responses.

### Run the Laravel development server

To start the Laravel development server and test the application, run the following command:

```bash
php artisan serve
```

Navigate to the URL `http://localhost:8000/authors` in your browser to view the list of authors. You can also view the books by a specific author by visiting `http://localhost:8000/books/{author_id}`.

## Applying schema changes

We will demonstrate how to handle schema changes by adding a new field `country` to the `Author` model, which will store the author's country of origin.

### Update the data model

Open the `Author.php` file in the `app/Models` directory and add the `country` field to the `$fillable` property:

```php
protected $fillable = ['name', 'bio', 'country'];
```

### Generate and run the migration

To generate a new migration file for the schema change, run the following command:

```bash
php artisan make:migration add_country_to_authors_table
```

This command generates a new migration file in the `database/migrations` directory.
Open the generated migration file and update the `up` method to add the new `country` column:

```php
public function up()
{
    Schema::table('authors', function (Blueprint $table) {
        $table->string('country')->nullable()->after('bio');
    });
}
```

Now, to apply the migration, run the following command:

```bash
php artisan migrate
```

### Test the schema change

Restart the Laravel development server:

```bash
php artisan serve
```

Navigate to the URL `http://localhost:8000/authors` to view the list of authors. Each author entry now includes the `country` field set to `null`, reflecting the schema change.

## Conclusion

In this guide, we demonstrated how to set up a Laravel project with Neon Postgres, define database models using Eloquent, generate migrations, and run them. Laravel's Eloquent ORM and migration system make it easy to interact with the database and manage schema evolution over time.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Migrations with Neon and Laravel](https://github.com/neondatabase/guide-neon-laravel): Run Neon database migrations in a Laravel project

## Resources

For more information on the tools and concepts used in this guide, refer to the following resources:

- [Laravel Documentation](https://laravel.com/docs)
- [Neon Postgres](https://neon.com/docs/introduction)

---

# Source: https://neon.com/llms/guides-laravel.txt

# Connect from Laravel to Neon

> This document guides Neon users on configuring and connecting a Laravel application to a Neon database, detailing the necessary steps and settings for seamless integration.

## Source

- [Connect from Laravel to Neon HTML](https://neon.com/docs/guides/laravel): The original HTML version of this documentation

Laravel is a web application framework with expressive, elegant syntax. Connecting to Neon from Laravel is the same as connecting to a standalone Postgres installation from Laravel. Only the connection details differ.

To connect to Neon from Laravel:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Configure the connection

Open the `.env` file in your Laravel app, and replace all the database credentials.

```shell
DB_CONNECTION=pgsql
DB_HOST=[neon_hostname]
DB_PORT=5432
DB_DATABASE=[dbname]
DB_USERNAME=[user]
DB_PASSWORD=[password]
```

You can find your database connection details by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Connection issues

With older Postgres clients/drivers, including older PDO_PGSQL drivers, you may receive the following error when attempting to connect to Neon:

```txt
ERROR: The endpoint ID is not specified. Either upgrade the Postgres client library (libpq) for SNI support or pass the endpoint ID (the first part of the domain name) as a parameter: '&options=endpoint%3D[endpoint-id]'. See https://neon.com/sni for more information.
```

If you run into this error, please see the following documentation for an explanation of the issue and workarounds: [The endpoint ID is not specified](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified).
- If using a connection string to connect to your database, try [Workaround A. Pass the endpoint ID as an option](https://neon.com/docs/connect/connection-errors#a-pass-the-endpoint-id-as-an-option). For example:

  ```text
  postgresql://[user]:[password]@[neon_hostname]/[dbname]?options=endpoint%3D[endpoint-id]
  ```

  Replace `[endpoint-id]` with your compute's endpoint ID, which you can find in your Neon connection string. It looks similar to this: `ep-cool-darkness-123456`.

- If using database connection parameters, as shown above, try [Workaround D. Specify the endpoint ID in the password field](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field). For example:

  ```text
  DB_PASSWORD=endpoint=[endpoint-id]$[password]
  ```

## Schema migration with Laravel

For schema migration with Laravel, see our guide:

- [Laravel Migrations](https://neon.com/docs/guides/laravel-migrations): Schema migration with Neon Postgres and Laravel

---

# Source: https://neon.com/llms/guides-liquibase-workflow.txt

# Liquibase developer workflow with Neon

> The document outlines the process for integrating Liquibase into a developer workflow with Neon, detailing steps for managing database schema changes and version control within the Neon environment.

## Source

- [Liquibase developer workflow with Neon HTML](https://neon.com/docs/guides/liquibase-workflow): The original HTML version of this documentation

Liquibase is an open-source database-independent library for tracking, managing, and applying database schema changes. To learn more about Liquibase, refer to the [Liquibase documentation](https://docs.liquibase.com/home.html).

This guide shows how to set up a developer workflow using Liquibase with Neon's branching feature. The workflow involves making schema changes to a database on a development branch and applying those changes back to the source database on the production branch of your Neon project.

The instructions in this guide are based on the workflow described in the [Liquibase Developer Workflow](https://www.liquibase.org/get-started/developer-workflow) tutorial.

## Prerequisites

- A Neon account. See [Sign up](https://neon.com/docs/get-started/signing-up).
- A Neon project. See [Create your first project](https://neon.com/docs/get-started/setting-up-a-project).
- Liquibase requires Java. For Liquibase Java requirements, see [Requirements](https://docs.liquibase.com/start/install/liquibase-requirements.html). To check if you have Java installed, run `java --version`, or `java -version` on macOS.
- An installation of Liquibase. For instructions, refer to [Get started with Liquibase and Neon](https://neon.com/docs/guides/liquibase).

## Initialize a new Liquibase project

Run the [init project](https://docs.liquibase.com/commands/init/project.html) command to initialize a Liquibase project in the specified directory. The project directory is created if it does not exist. Initializing a Liquibase project in this way provides you with a pre-populated Liquibase properties file, which we'll modify in a later step.

```bash
liquibase init project --project-dir ~/blogdb
```

Enter `Y` to accept the defaults.

## Prepare a source database

For demonstration purposes, create a `blog` database in Neon with two tables, `posts` and `authors`.

1. Open the [Neon Console](https://console.neon.tech/app/projects).
1. Select your project.
1. Select **Databases** from the sidebar and create a database named `blog`. For instructions, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).
1.
Using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), add the following tables: ```sql -- Creating the `authors` table CREATE TABLE authors ( author_id SERIAL PRIMARY KEY, first_name VARCHAR(100), last_name VARCHAR(100), email VARCHAR(255) UNIQUE NOT NULL, bio TEXT ); -- Creating the `posts` table CREATE TABLE posts ( post_id SERIAL PRIMARY KEY, author_id INTEGER REFERENCES authors(author_id), title VARCHAR(255) NOT NULL, content TEXT, published_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); ``` ## Prepare a development database Now, let's prepare a development database in Neon by creating a development branch, where you can safely make changes to your database schema without affecting the source database on your `production` branch. A branch is a copy-on-write clone of the data in your Neon project, so it will include a copy of the `blog` database with the `authors` and `posts` tables that you just created. To create a branch: 1. In the Neon Console, select **Branches**. You will see your `production` branch, where you just created your `blog` database and tables. 2. Click **New Branch** to open the branch creation dialog. 3. Enter a name for the branch. Let's call it `feature/blog-schema`. 4. Leave `production` selected as the parent branch. This is where you created the `blog` database. 5. Leave the remaining default settings. Creating a branch from **Head** creates a branch with the latest data, and a compute is required to connect to the database on the branch. 6. Click **Create Branch** to create your branch. ## Retrieve your Neon database connection strings From the [Neon Console](https://console.neon.tech/app/projects), retrieve connection strings for your target and source databases by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. **Note**: The target database is the database on your `feature/blog-schema` branch where you will do your development work. Your source database is where you will apply your schema changes later, once you are satisfied with the changes on your development branch. 1. Select the `feature/blog-schema` branch, the `blog` database, and copy the connection string. ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/blog?sslmode=require&channel_binding=require ``` 2. Select the `production` branch, the `blog` database, and copy the connection string. ```bash postgresql://alex:AbC123dEf@ep-silent-hill-85675036.us-east-2.aws.neon.tech/blog?sslmode=require&channel_binding=require ``` Be careful not to mix up your connection strings. You'll see that the hostname (the part starting with `ep-` and ending in `neon.tech`) differs. This is because the `feature/blog-schema` branch is a separate instance of Postgres, hosted on its own compute. ## Update your liquibase.properties file The `liquibase.properties` file defines the location of the Liquibase changelog file and your target and source databases. 1. From your Liquibase project directory, open the `liquibase.properties` file, which comes pre-populated with example settings. 2. Change the `changeLogFile` setting as shown: ```bash changeLogFile=dbchangelog.xml ``` The [changelog file](https://docs.liquibase.com/parameters/changelog-file.html) is where you define database schema changes (changesets). 3. Change the target database `url`, `username`, and `password` settings to the correct values for the `blog` database on your `feature/blog-schema` branch. 
You can obtain the required details from the connection string you copied previously. You will need to replace the hostname (`ep-cool-darkness-123456.us-east-2.aws.neon.tech`), username, and password with your own. ```bash liquibase.command.url=jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432/blog liquibase.command.username=alex liquibase.command.password=AbC123dEf ``` 4. Change the source database settings to the correct values for the `blog` database on your `production` branch. The username and password will be the same as your `feature/blog-schema` branch, but make sure to use the right hostname. Copy the snippet below and replace the hostname (`ep-silent-hill-85675036.us-east-2.aws.neon.tech`), username, and password with your own. ```bash liquibase.command.referenceUrl=jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog liquibase.command.referenceUsername=alex liquibase.command.referencePassword=AbC123dEf ``` ## Take a snapshot of your target database Capture the current state of your target database. The following command creates a Liquibase changelog file named `mydatabase_changelog.xml`. ```bash liquibase --changeLogFile=mydatabase_changelog.xml generateChangeLog ``` If the command was successful, you'll see output similar to the following: ```bash Starting Liquibase at 09:23:33 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase BEST PRACTICE: The changelog generated by diffChangeLog/generateChangeLog should be inspected for correctness and completeness before being deployed. Some database objects and their dependencies cannot be represented automatically, and they may need to be manually updated before being deployed. Generated changelog written to mydatabase_changelog.xml Liquibase command 'generateChangelog' was executed successfully. ``` Check for the `mydatabase_changelog.xml` file in your Liquibase project directory. It should look something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
    <changeSet author="alex (generated)" id="1696789156265-1">
        <createTable tableName="authors">
            <column autoIncrement="true" name="author_id" type="INTEGER">
                <constraints nullable="false" primaryKey="true" primaryKeyName="authors_pkey"/>
            </column>
            <column name="first_name" type="VARCHAR(100)"/>
            <column name="last_name" type="VARCHAR(100)"/>
            <column name="email" type="VARCHAR(255)">
                <constraints nullable="false" unique="true"/>
            </column>
            <column name="bio" type="TEXT"/>
        </createTable>
    </changeSet>
    <changeSet author="alex (generated)" id="1696789156265-2">
        <createTable tableName="posts">
            <column autoIncrement="true" name="post_id" type="INTEGER">
                <constraints nullable="false" primaryKey="true" primaryKeyName="posts_pkey"/>
            </column>
            <column name="author_id" type="INTEGER"/>
            <column name="title" type="VARCHAR(255)">
                <constraints nullable="false"/>
            </column>
            <column name="content" type="TEXT"/>
            <column name="published_date" type="TIMESTAMP WITHOUT TIME ZONE" defaultValueComputed="CURRENT_TIMESTAMP"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```

## Create a schema change Now, you can start making database schema changes by creating [changesets](https://docs.liquibase.com/concepts/changelogs/changeset.html) and adding them to the changelog file you defined in your `liquibase.properties` file. A changeset is the basic unit of change in Liquibase. 1. Create the changelog file where you will add your schema changes: ```bash cd ~/blogdb touch dbchangelog.xml ``` 2. Add the following changeset to the `dbchangelog.xml` file, which adds a `comments` table to your database:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
    <changeSet author="alex" id="myIDNumber1234">
        <createTable tableName="comments">
            <column autoIncrement="true" name="comment_id" type="INTEGER">
                <constraints nullable="false" primaryKey="true" primaryKeyName="comments_pkey"/>
            </column>
            <column name="post_id" type="INTEGER">
                <constraints nullable="false" foreignKeyName="fk_comments_post_id" references="posts(post_id)"/>
            </column>
            <column name="author_id" type="INTEGER">
                <constraints nullable="false" foreignKeyName="fk_comments_author_id" references="authors(author_id)"/>
            </column>
            <column name="comment" type="TEXT"/>
            <column name="commented_date" type="TIMESTAMP WITHOUT TIME ZONE" defaultValueComputed="NOW()"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```

### Deploy the schema change Run the [update](https://docs.liquibase.com/commands/update/update.html) command to deploy the schema change to your target database (your development database on the `feature/blog-schema` branch). ```bash liquibase update ``` If the command was successful, you'll see output similar to the following: ```bash Starting Liquibase at 10:11:35 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase Running Changeset: dbchangelog.xml::myIDNumber1234::alex UPDATE SUMMARY Run: 1 Previously run: 0 Filtered out: 0 ------------------------------- Total change sets: 1 Liquibase: Update has been successful. Rows affected: 1 Liquibase command 'update' was executed successfully.
``` **Info**: When you run a changeset for the first time, Liquibase automatically creates two tracking tables in your database: - [databasechangelog](https://docs.liquibase.com/concepts/tracking-tables/databasechangelog-table.html): Tracks which changesets have been run. - [databasechangeloglock](https://docs.liquibase.com/concepts/tracking-tables/databasechangeloglock-table.html): Ensures only one instance of Liquibase runs at a time. You can verify these tables were created by viewing the `blog` database on your `feature/blog-schema` branch on the **Tables** page in the Neon Console. Select **Tables** from the sidebar. At this point, you can continue to iterate, applying schema changes to your database, until you are satisfied with the modified schema. ### Review schema changes It is a best practice to review schema changes before saving and applying them to your source database. You can run the [status](https://docs.liquibase.com/commands/change-tracking/status.html) command to see if there are any changesets that haven't been applied to the source database. Notice that the command specifies the hostname of the source database: ```bash liquibase --url=jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog status --verbose ``` Details: Command output If the command was successful, you'll see output similar to the following indicating that there is one changeset that has not been applied to the source database. This is your `comments` table changeset. ```bash Starting Liquibase at 12:30:51 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase 1 changesets have not been applied to alex@jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog dbchangelog.xml::myIDNumber1234::alex Liquibase command 'status' was executed successfully. ``` ### Check your SQL Before applying the update, you can run the [updateSQL](https://docs.liquibase.com/commands/update/update-sql.html) command to inspect the SQL Liquibase will apply when running the update command: ```bash liquibase --url=jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog updateSQL ``` Details: Command output If the command was successful, you'll see output similar to the following, which confirms that the changeset will create a `comments` table. ```bash Starting Liquibase at 12:32:55 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase SET SEARCH_PATH TO public, "$user","public"; -- Lock Database UPDATE public.databasechangeloglock SET LOCKED = TRUE, LOCKEDBY = 'dot-VBox (10.0.2.15)', LOCKGRANTED = NOW() WHERE ID = 1 AND LOCKED = FALSE; SET SEARCH_PATH TO public, "$user","public"; SET SEARCH_PATH TO public, "$user","public"; -- ********************************************************************* -- Update Database Script -- ********************************************************************* -- Change Log: dbchangelog.xml -- Ran at: 2023-10-08, 12:32 p.m. 
-- Against: alex@jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog -- Liquibase version: 4.24.0 -- ********************************************************************* SET SEARCH_PATH TO public, "$user","public"; -- Changeset dbchangelog.xml::myIDNumber1234::alex SET SEARCH_PATH TO public, "$user","public"; CREATE TABLE public.comments (comment_id INTEGER GENERATED BY DEFAULT AS IDENTITY NOT NULL, post_id INTEGER NOT NULL, author_id INTEGER NOT NULL, comment TEXT, commented_date TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW(), CONSTRAINT comments_pkey PRIMARY KEY (comment_id), CONSTRAINT fk_comments_author_id FOREIGN KEY (author_id) REFERENCES public.authors(author_id), CONSTRAINT fk_comments_post_id FOREIGN KEY (post_id) REFERENCES public.posts(post_id)); INSERT INTO public.databasechangelog (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, DESCRIPTION, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('myIDNumber1234', 'alex', 'dbchangelog.xml', NOW(), 1, '9:788a502d77d56330d53b6b356ee205ce', 'createTable tableName=comments', '', 'EXECUTED', NULL, NULL, '4.24.0', NULL); -- Release Database Lock SET SEARCH_PATH TO public, "$user","public"; UPDATE public.databasechangeloglock SET LOCKED = FALSE, LOCKEDBY = NULL, LOCKGRANTED = NULL WHERE ID = 1; SET SEARCH_PATH TO public, "$user","public"; Liquibase command 'updateSql' was executed successfully. ``` ### Run a diff command You can also run a `diff` command to compare your source and target databases. ```bash liquibase --referenceUrl=jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432/blog --referenceUsername alex --referencePassword AbC123dEf diff ``` Details: Command output If the command was successful, you'll see output similar to the following: ```bash Starting Liquibase at 12:34:20 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase Diff Results: Reference Database: alex @ jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432/blog (Default Schema: public) Comparison Database: alex @ jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog (Default Schema: public) Compared Schemas: public Product Name: EQUAL Product Version: EQUAL Missing Catalog(s): NONE Unexpected Catalog(s): NONE Changed Catalog(s): NONE Missing Column(s): NONE Unexpected Column(s): public.comments.author_id public.comments.comment public.comments.comment_id public.comments.commented_date public.comments.post_id Changed Column(s): NONE Missing Foreign Key(s): NONE Unexpected Foreign Key(s): fk_comments_author_id(comments[author_id] -> authors[author_id]) fk_comments_post_id(comments[post_id] -> posts[post_id]) Changed Foreign Key(s): NONE Missing Index(s): NONE Unexpected Index(s): comments_pkey UNIQUE ON public.comments(comment_id) Changed Index(s): NONE Missing Primary Key(s): NONE Unexpected Primary Key(s): comments_pkey on public.comments(comment_id) Changed Primary Key(s): NONE Missing Schema(s): NONE Unexpected Schema(s): NONE Changed Schema(s): NONE Missing Sequence(s): NONE Unexpected Sequence(s): NONE Changed Sequence(s): NONE Missing Table(s): NONE Unexpected Table(s): comments Changed Table(s): NONE Missing Unique Constraint(s): NONE Unexpected Unique Constraint(s): NONE Changed Unique Constraint(s): NONE Missing View(s): NONE Unexpected View(s): NONE Changed View(s): NONE Liquibase command 'diff' was executed successfully.
``` ### Save your changelog to source control When you are satisfied with the changes that will be applied, save your changelog to source control, such as a GitHub repository where you or your team stores your changelogs. ### Apply the changeset to your source database Apply the new changesets to the source database on your `production` branch: ```bash liquibase --url=jdbc:postgresql://ep-silent-hill-85675036.us-east-2.aws.neon.tech:5432/blog update ``` Details: Command output If the command was successful, you'll see output similar to the following: ```bash Starting Liquibase at 12:36:56 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase Running Changeset: dbchangelog.xml::myIDNumber1234::alex UPDATE SUMMARY Run: 1 Previously run: 0 Filtered out: 0 ------------------------------- Total change sets: 1 Liquibase: Update has been successful. Rows affected: 1 Liquibase command 'update' was executed successfully. ``` To ensure that all changes have been applied to the production database, you can rerun the `status`, `updateSQL`, and `diff` commands you ran above. After applying the change, there should be no differences. You can also check your databases in the **Tables** view in the Neon Console to verify that the source database now has a `comments` table. **Note**: When you run a changeset for the first time on the source database, you will find that Liquibase automatically creates the same [databasechangelog](https://docs.liquibase.com/concepts/tracking-tables/databasechangelog-table.html) and [databasechangeloglock](https://docs.liquibase.com/concepts/tracking-tables/databasechangeloglock-table.html) tracking tables that it created in your development database. These tracking tables are created on any database where you apply changesets. ## References - [Get started with Liquibase](https://www.liquibase.org/get-started/quickstart) - [Setting up your Liquibase Workspace](https://www.liquibase.org/get-started/setting-up-your-workspace) - [Liquibase Developer Workflow](https://www.liquibase.org/get-started/developer-workflow) --- # Source: https://neon.com/llms/guides-liquibase.txt # Get started with Liquibase and Neon > This document guides Neon users through integrating Liquibase for database schema management, detailing setup, configuration, and execution of schema changes within the Neon environment. ## Source - [Get started with Liquibase and Neon HTML](https://neon.com/docs/guides/liquibase): The original HTML version of this documentation Liquibase is an open-source library for tracking, managing, and applying database schema changes. To learn more about Liquibase, refer to the [Liquibase documentation](https://docs.liquibase.com/home.html). This guide steps you through installing the Liquibase CLI, configuring Liquibase to connect to a Neon database, deploying a database schema change, and rolling back the schema change. The guide follows the setup described in the [Liquibase Get Started](https://www.liquibase.org/get-started/quickstart). ## Prerequisites - A Neon account. See [Sign up](https://neon.com/docs/get-started/signing-up). - A Neon project. See [Create your first project](https://neon.com/docs/get-started/setting-up-a-project). - Liquibase requires Java. For Liquibase Java requirements, see [Requirements](https://docs.liquibase.com/start/install/liquibase-requirements.html). To check if you have Java installed, run `java --version`, or `java -version` on macOS. ## Download and extract Liquibase 1.
Download the Liquibase CLI from [https://www.liquibase.com/download](https://www.liquibase.com/download). 2. Extract the Liquibase files. For example: ```bash cd ~/Downloads mkdir ~/liquibase tar -xzvf liquibase-x.yy.z.tar.gz -C ~/liquibase/ ``` 3. Open a command prompt to view the contents of your Liquibase installation: ```bash cd ~/liquibase ls ABOUT.txt GETTING_STARTED.txt licenses liquibase.bat changelog.txt internal LICENSE.txt README.txt examples lib liquibase UNINSTALL.txt ``` ## Set your path variable Add the Liquibase directory to your `PATH` so that you can run Liquibase commands from any location. Tab: bashrc ```bash echo 'export PATH=$PATH:/path/to/liquibase' >> ~/.bashrc source ~/.bashrc ``` Tab: profile ```bash echo 'export PATH=$PATH:/path/to/liquibase' >> ~/.profile source ~/.profile ``` Tab: zsh ```bash echo 'export PATH=$PATH:/path/to/liquibase' >> ~/.zshrc source ~/.zshrc ``` ## Verify your installation Verify that the Liquibase installation was successful by running the following command: ```bash liquibase --version ... Liquibase Version: x.yy.z Liquibase Open Source x.yy.z by Liquibase ``` ## Prepare a Neon database For demonstration purposes, create a `blog` database in Neon with two tables, `posts` and `authors`. 1. Open the [Neon Console](https://console.neon.tech/app/projects). 1. Select your project. 1. Select **Databases** from the sidebar and create a database named `blog`. For instructions, see [Create a database](https://neon.com/docs/manage/databases#create-a-database). 1. Using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), add the following tables: ```sql -- Creating the `authors` table CREATE TABLE authors ( author_id SERIAL PRIMARY KEY, first_name VARCHAR(100), last_name VARCHAR(100), email VARCHAR(255) UNIQUE NOT NULL, bio TEXT ); -- Creating the `posts` table CREATE TABLE posts ( post_id SERIAL PRIMARY KEY, author_id INTEGER REFERENCES authors(author_id), title VARCHAR(255) NOT NULL, content TEXT, published_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); ``` ## Retrieve your Neon database connection string Find your database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Select **Java** from the connection string drop-down menu. Your Java connection string should look something like the one shown below. ```bash jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech/blog?user=alex&password=AbC123dEf ``` ## Connect from Liquibase to your Neon database 1. Create a directory for your Liquibase project. For example: ```bash mkdir blogdb ``` 2. Change to your project directory and create a `liquibase.properties` file. ```bash cd blogdb touch liquibase.properties ``` 3. Open the `liquibase.properties` file in an editor and add entries for a [liquibase changelog file](https://docs.liquibase.com/concepts/changelogs/home.html) and your database `url`. We'll call the changelog file `dbchangelog.xml`. You will use this file to define schema changes. For the `url`, specify the Neon connection string you retrieved previously.
```bash changeLogFile: dbchangelog.xml url: jdbc:postgresql://ep-cool-darkness-123456.us-east-2.aws.neon.tech/blog?user=alex&password=AbC123dEf&sslmode=require&channel_binding=require ``` ## Take a snapshot of your database In this step, you will run the [generateChangelog](https://docs.liquibase.com/commands/inspection/generate-changelog.html) command in your project directory to create a changelog file with the current state of your database. We'll call this file `mydatabase_changelog.xml`. ```bash liquibase --changeLogFile=mydatabase_changelog.xml generateChangeLog ``` You'll get a changelog file for your database that looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
    <changeSet author="alex (generated)" id="1696789156265-1">
        <createTable tableName="authors">
            <column autoIncrement="true" name="author_id" type="INTEGER">
                <constraints nullable="false" primaryKey="true" primaryKeyName="authors_pkey"/>
            </column>
            <column name="first_name" type="VARCHAR(100)"/>
            <column name="last_name" type="VARCHAR(100)"/>
            <column name="email" type="VARCHAR(255)">
                <constraints nullable="false" unique="true"/>
            </column>
            <column name="bio" type="TEXT"/>
        </createTable>
    </changeSet>
    <changeSet author="alex (generated)" id="1696789156265-2">
        <createTable tableName="posts">
            <column autoIncrement="true" name="post_id" type="INTEGER">
                <constraints nullable="false" primaryKey="true" primaryKeyName="posts_pkey"/>
            </column>
            <column name="author_id" type="INTEGER"/>
            <column name="title" type="VARCHAR(255)">
                <constraints nullable="false"/>
            </column>
            <column name="content" type="TEXT"/>
            <column name="published_date" type="TIMESTAMP WITHOUT TIME ZONE" defaultValueComputed="CURRENT_TIMESTAMP"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```

## Create a schema change Now, you can start making database schema changes by creating [changesets](https://docs.liquibase.com/concepts/changelogs/changeset.html) and adding them to the database changelog file you defined in your `liquibase.properties` file. A changeset is the basic unit of change in Liquibase. 1. Create the changelog file where you will add your schema changes: ```bash cd ~/blogdb touch dbchangelog.xml ``` 2. Add the following changeset, which adds a `comments` table to your database. Replace `author="alex" id="myIDNumber1234"` with your author name and id, which you can retrieve from your changelog file, described in the previous step.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
    <changeSet author="alex" id="myIDNumber1234">
        <createTable tableName="comments">
            <column autoIncrement="true" name="comment_id" type="INTEGER">
                <constraints nullable="false" primaryKey="true" primaryKeyName="comments_pkey"/>
            </column>
            <column name="post_id" type="INTEGER">
                <constraints nullable="false" foreignKeyName="fk_comments_post_id" references="posts(post_id)"/>
            </column>
            <column name="author_id" type="INTEGER">
                <constraints nullable="false" foreignKeyName="fk_comments_author_id" references="authors(author_id)"/>
            </column>
            <column name="comment" type="TEXT"/>
            <column name="commented_date" type="TIMESTAMP WITHOUT TIME ZONE" defaultValueComputed="NOW()"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```

## Deploy your change Deploy your database schema change by running the [update](https://docs.liquibase.com/commands/update/update.html) command: ```bash liquibase update ``` Details: Command output If the command was successful, you'll see output similar to the following: ```bash Starting Liquibase at 07:33:53 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase Running Changeset: dbchangelog.xml::myIDNumber1234::alex UPDATE SUMMARY Run: 1 Previously run: 0 Filtered out: 0 ------------------------------- Total change sets: 1 Liquibase: Update has been successful. Rows affected: 1 Liquibase command 'update' was executed successfully. ``` **Info**: When you run a changeset for the first time, Liquibase automatically creates two tracking tables in your database: - [databasechangelog](https://docs.liquibase.com/concepts/tracking-tables/databasechangelog-table.html): Tracks which changesets have been run. - [databasechangeloglock](https://docs.liquibase.com/concepts/tracking-tables/databasechangeloglock-table.html): Ensures only one instance of Liquibase runs at a time. You can verify these tables were created by viewing the `blog` database on the **Tables** page in the Neon Console. Select **Tables** from the sidebar. ## Rollback a change Try rolling back your last change by running the Liquibase [rollbackCount](https://docs.liquibase.com/commands/rollback/rollback-count.html) command: ```bash liquibase rollbackCount 1 ``` Details: Command output If the command was successful, you'll see output similar to the following: ```bash Starting Liquibase at 07:36:22 (version 4.24.0 #14062 built at 2023-09-28 12:18+0000) Liquibase Version: 4.24.0 Liquibase Open Source 4.24.0 by Liquibase Rolling Back Changeset: dbchangelog.xml::myIDNumber1234::alex Liquibase command 'rollbackCount' was executed successfully. ``` You can verify that creation of the `comments` table was rolled back by viewing the `blog` database on the **Tables** page in the Neon Console. Select **Tables** from the sidebar. ## Next steps Learn how to use Liquibase with Neon's database branching feature to set up a developer workflow.
See [Set up a developer workflow with Liquibase and Neon](https://neon.com/docs/guides/liquibase-workflow). ## References - [Get started with Liquibase](https://www.liquibase.org/get-started/quickstart) - [Setting up your Liquibase Workspace](https://www.liquibase.org/get-started/setting-up-your-workspace) - [Liquibase Developer Workflow](https://www.liquibase.org/get-started/developer-workflow) --- # Source: https://neon.com/llms/guides-logical-replication-airbyte-snowflake.txt # Replicate data to Snowflake with Airbyte > The document outlines the process for Neon users to replicate data from Neon to Snowflake using Airbyte, detailing the setup and configuration steps necessary for establishing a data pipeline between these platforms. ## Source - [Replicate data to Snowflake with Airbyte HTML](https://neon.com/docs/guides/logical-replication-airbyte-snowflake): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations. In this guide, you will learn how to define your Neon Postgres database as a data source in Airbyte so that you can stream data to Snowflake. [Airbyte](https://airbyte.com/) is an open-source data integration platform that moves data from a source to a destination system. Airbyte offers a large library of connectors for various data sources and destinations. [Snowflake](https://www.snowflake.com/) is a cloud-based data warehousing and analytics platform designed to handle large volumes of data. Snowflake allows businesses to store, process, and analyze data from various sources. ## Prerequisites - A source [Neon project](https://neon.com/docs/manage/projects#create-a-project) with a database containing the data you want to replicate. If you're just testing this out and need some data to play with, you can run the following statements from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or an SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) to create a table with sample data: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` - An [Airbyte account](https://airbyte.com/) - A [Snowflake account](https://www.snowflake.com/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Prepare your source Neon database This section describes how to prepare your source Neon database (the publisher) for replicating data. ### Enable logical replication in Neon **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication.
You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or an SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ### Create a Postgres role for replication It's recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Tab: CLI The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://neon.com/docs/reference/cli-roles) ```bash neon roles create --name replication_user ``` Tab: Console To create a role in the Neon Console: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. Select the **Roles & Databases** tab. 6. Click **Add Role**. 7. In the role creation dialog, specify a role name. 8. Click **Create**. The role is created, and you are provided with the password for the role. Tab: API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranchrole). ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "replication_user" } }' | jq ``` ### Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ### Create a replication slot Airbyte requires a dedicated replication slot. Only one source should be configured to use this replication slot. Airbyte uses the `pgoutput` plugin in Postgres for decoding WAL changes into a logical replication stream. To create a replication slot called `airbyte_slot` that uses the `pgoutput` plugin, run the following command on your database using your replication role: ```sql SELECT pg_create_logical_replication_slot('airbyte_slot', 'pgoutput'); ``` `airbyte_slot` is the name assigned to the replication slot. You will need to provide this name when you set up your Airbyte source. ### Create a publication Perform the following steps for each table you want to replicate data from: 1.
Add the replication identity (the method of distinguishing between rows) for each table you want to replicate: ```sql ALTER TABLE <table_name> REPLICA IDENTITY DEFAULT; ``` In rare cases, if your tables use data types that support [TOAST](https://www.postgresql.org/docs/current/storage-toast.html) or have very large field values, consider using `REPLICA IDENTITY FULL` instead: ```sql ALTER TABLE <table_name> REPLICA IDENTITY FULL; ``` 2. Create the Postgres publication. Include all tables you want to replicate as part of the publication: ```sql CREATE PUBLICATION airbyte_publication FOR TABLE <table_name_1>, <table_name_2>; ``` The publication name is customizable. Refer to the [Postgres docs](https://www.postgresql.org/docs/current/logical-replication-publication.html) if you need to add or remove tables from your publication. **Note**: The Airbyte UI currently allows selecting any table for Change Data Capture (CDC). If a table is selected that is not part of the publication, it will not be replicated even though it is selected. If a table is part of the publication but does not have a replication identity, the replication identity will be created automatically on the first run if the Postgres role you use with Airbyte has the necessary permissions. ## Create a Postgres source in Airbyte 1. From your Airbyte Cloud account, select **Sources** from the left navigation bar, search for **Postgres**, and then create a new Postgres source. 2. Enter the connection details for your Neon database. You can find your database connection details by clicking the **Connect** button on your **Project Dashboard**. For example, given a connection string like this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Enter the details in the Airbyte **Create a source** dialog as shown below. Your values will differ. - **Host**: ep-cool-darkness-123456.us-east-2.aws.neon.tech - **Port**: 5432 - **Database Name**: dbname - **Username**: replication_user - **Password**: AbC123dEf 3. Under **Optional fields**, list the schemas you want to sync. Schema names are case-sensitive, and multiple schemas may be specified. By default, `public` is the only selected schema. 4. Select an SSL mode. You will most frequently choose `require` or `verify-ca`. Both of these options always require encryption. The `verify-ca` mode requires a certificate. Refer to [Connect securely](https://neon.com/docs/connect/connect-securely) for information about the location of certificate files you can use with Neon. 5. Under **Advanced**: - Select **Read Changes using Write-Ahead Log (CDC)** from available replication methods. - In the **Replication Slot** field, enter the name of the replication slot you created previously: `airbyte_slot`. - In the **Publication** field, enter the name of the publication you created previously: `airbyte_publication`. ### Allow inbound traffic If you are on Airbyte Cloud, and you are using Neon's **IP Allow** feature to limit IP addresses that can connect to Neon, you will need to allow inbound traffic from Airbyte's IP addresses. You can find a list of IPs that need to be allowlisted in the [Airbyte Security docs](https://docs.airbyte.com/operating-airbyte/security). For information about configuring allowed IPs in Neon, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). ### Complete the source setup To complete your source setup, click **Set up source** in the Airbyte UI. Airbyte will test the connection to your database.
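If the connection test fails, it can help to verify on the Neon side that the replication slot and publication are in place before retrying. A minimal check, assuming the `airbyte_slot` and `airbyte_publication` names used in this guide, run from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor):

```sql
-- Confirm the replication slot exists and uses the pgoutput plugin
SELECT slot_name, plugin, active
FROM pg_replication_slots
WHERE slot_name = 'airbyte_slot';

-- Confirm the publication exists and list the tables it includes
SELECT * FROM pg_publication_tables
WHERE pubname = 'airbyte_publication';
```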
Once the connection test succeeds, you've successfully configured an Airbyte Postgres source for your Neon database. ## Configure Snowflake as a destination To complete your data integration setup, you can now add Snowflake as your destination. ### Prerequisites - A Snowflake account with the `ACCOUNTADMIN` role. If you're using a company account, you may need to contact your Snowflake administrator to set one up for you. ### Set up Airbyte entities in Snowflake To set up the Snowflake destination connector, you first need to create Airbyte entities in Snowflake (a warehouse, database, schema, user, and role) with the `OWNERSHIP` permission to write data to Snowflake. You can use the following script in a new [Snowflake worksheet](https://docs.snowflake.com/en/user-guide/ui-worksheet) to create the entities. This script is provided as part of [Airbyte's Snowflake connector setup guide](https://docs.airbyte.com/integrations/destinations/snowflake#setup-guide). **Note**: If you want, you can edit the script to change the password to a more secure password and to change the names of other resources. If you do rename entities, make sure to follow [Snowflake identifier requirements](https://docs.snowflake.com/en/sql-reference/identifiers-syntax). ```sql -- set variables (these need to be uppercase) set airbyte_role = 'AIRBYTE_ROLE'; set airbyte_username = 'AIRBYTE_USER'; set airbyte_warehouse = 'AIRBYTE_WAREHOUSE'; set airbyte_database = 'AIRBYTE_DATABASE'; set airbyte_schema = 'AIRBYTE_SCHEMA'; -- set user password set airbyte_password = 'password'; begin; -- create Airbyte role use role securityadmin; create role if not exists identifier($airbyte_role); grant role identifier($airbyte_role) to role SYSADMIN; -- create Airbyte user create user if not exists identifier($airbyte_username) password = $airbyte_password default_role = $airbyte_role default_warehouse = $airbyte_warehouse; grant role identifier($airbyte_role) to user identifier($airbyte_username); -- change role to sysadmin for warehouse / database steps use role sysadmin; -- create Airbyte warehouse create warehouse if not exists identifier($airbyte_warehouse) warehouse_size = xsmall warehouse_type = standard auto_suspend = 60 auto_resume = true initially_suspended = true; -- create Airbyte database create database if not exists identifier($airbyte_database); -- grant Airbyte warehouse access grant USAGE on warehouse identifier($airbyte_warehouse) to role identifier($airbyte_role); -- grant Airbyte database access grant OWNERSHIP on database identifier($airbyte_database) to role identifier($airbyte_role); commit; begin; USE DATABASE identifier($airbyte_database); -- create schema for Airbyte data CREATE SCHEMA IF NOT EXISTS identifier($airbyte_schema); commit; begin; -- grant Airbyte schema access grant OWNERSHIP on schema identifier($airbyte_schema) to role identifier($airbyte_role); commit; ``` ### Set up Snowflake as a destination To set up a new destination: 1. Navigate to Airbyte. 2. Select **New destination**. 3. Select the Snowflake connector. 4. Create the destination by filling in the required fields. You can authenticate using username/password or key pair authentication. We'll authenticate via username/password.
| Field | Description | Example |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| **Host** | The host domain of the Snowflake instance (must include the account, region, cloud environment, and end with `snowflakecomputing.com`). | `<account_name>.us-east-2.aws.snowflakecomputing.com` |
| **Role** | The role you created for Airbyte to access Snowflake. | `AIRBYTE_ROLE` |
| **Warehouse** | The warehouse you created for Airbyte to sync data into. | `AIRBYTE_WAREHOUSE` |
| **Database** | The database you created for Airbyte to sync data into. | `AIRBYTE_DATABASE` |
| **Schema** | The default schema used as the target schema for all statements issued from the connection that do not explicitly specify a schema name. | - |
| **Username** | The username you created to allow Airbyte to access the database. | `AIRBYTE_USER` |
| **Password** | The password associated with the username. | - |

When you're finished filling in the required fields, click **Set up destination**. ## Set up a connection In this step, you'll set up a connection between your Neon Postgres source and your Snowflake destination. To set up a new connection: 1. Navigate to Airbyte. 2. Select **New connection**. 3. Select the existing Postgres source you created earlier. 4. Select the existing Snowflake destination you created earlier. 5. Select **Replicate source** as the sync mode. 6. Click **Next**. 7. On the **Configure connection** dialog, you can accept the defaults or modify the settings according to your requirements. 8. Click **Finish & sync** to complete the setup. Your first sync may take a few moments. ## Verify the replication After the sync operation is complete, you can verify the replication by navigating to Snowflake, opening your Snowflake project, navigating to a worksheet, and querying your database to view the replicated data. For example, if you've replicated the `playing_with_neon` example table, you can run a `SELECT * FROM PLAYING_WITH_NEON;` query to view the replicated data. ## References - [Setting up the Airbyte destination connector](https://docs.airbyte.com/integrations/destinations/snowflake) - [Airbyte: Add a destination](https://docs.airbyte.com/using-airbyte/getting-started/add-a-destination) - [Airbyte: Set up a connection](https://docs.airbyte.com/using-airbyte/getting-started/set-up-a-connection) - [Airbyte: How to load data from Postgres to Snowflake destination](https://airbyte.com/how-to-sync/postgresql-to-snowflake-data-cloud) - [What is an ELT data pipeline?](https://airbyte.com/blog/elt-pipeline) - [Logical replication - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html) - [Publications - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-publication.html) --- # Source: https://neon.com/llms/guides-logical-replication-airbyte.txt # Replicate data with Airbyte > The document guides Neon users on setting up data replication from Neon to other destinations using Airbyte, detailing the configuration of sources, destinations, and connections for seamless data transfer. ## Source - [Replicate data with Airbyte HTML](https://neon.com/docs/guides/logical-replication-airbyte): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations.
[Airbyte](https://airbyte.com/) is an open-source data integration platform that moves data from a source to a destination system. Airbyte offers a large library of connectors for various data sources and destinations. In this guide, you will learn how to define your Neon Postgres database as a data source in Airbyte so that you can stream data to one or more of Airbyte's supported destinations. ## Prerequisites - An [Airbyte account](https://airbyte.com/) - A [Neon account](https://console.neon.tech/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Prepare your source Neon database This section describes how to prepare your source Neon database (the publisher) for replicating data to your chosen destination. ### Enable logical replication in Neon **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ### Create a Postgres role for replication It's recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Tab: CLI The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://neon.com/docs/reference/cli-roles) ```bash neon roles create --name replication_user ``` Tab: Console To create a role in the Neon Console: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. Select the **Roles & Databases** tab. 6. Click **Add Role**. 7. In the role creation dialog, specify a role name. 8. Click **Create**. The role is created, and you are provided with the password for the role. Tab: API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranchrole). ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "replication_user" } }' | jq ``` ### Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access.
For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ### Create a replication slot Airbyte requires a dedicated replication slot. Only one source should be configured to use this replication slot. Airbyte uses the `pgoutput` plugin in Postgres for decoding WAL changes into a logical replication stream. To create a replication slot called `airbyte_slot` that uses the `pgoutput` plugin, run the following command on your database using your replication role: ```sql SELECT pg_create_logical_replication_slot('airbyte_slot', 'pgoutput'); ``` `airbyte_slot` is the name assigned to the replication slot. You will need to provide this name when you set up your Airbyte source. ### Create a publication Perform the following steps for each table you want to replicate data from: 1. Add the replication identity (the method of distinguishing between rows) for each table you want to replicate: ```sql ALTER TABLE <table_name> REPLICA IDENTITY DEFAULT; ``` In rare cases, if your tables use data types that support [TOAST](https://www.postgresql.org/docs/current/storage-toast.html) or have very large field values, consider using `REPLICA IDENTITY FULL` instead: ```sql ALTER TABLE <table_name> REPLICA IDENTITY FULL; ``` 2. Create the Postgres publication. Include all tables you want to replicate as part of the publication: ```sql CREATE PUBLICATION airbyte_publication FOR TABLE <table_name_1>, <table_name_2>; ``` The publication name is customizable. Refer to the [Postgres docs](https://www.postgresql.org/docs/current/logical-replication-publication.html) if you need to add or remove tables from your publication. **Note**: The Airbyte UI currently allows selecting any table for Change Data Capture (CDC). If a table is selected that is not part of the publication, it will not be replicated even though it is selected. If a table is part of the publication but does not have a replication identity, the replication identity will be created automatically on the first run if the Postgres role you use with Airbyte has the necessary permissions. ## Create a Postgres source in Airbyte 1. From your Airbyte Cloud account, select **Sources** from the left navigation bar, search for **Postgres**, and then create a new Postgres source. 2. Enter the connection details for your Neon database. You can find your database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. **Important**: Use a **direct connection** to your compute endpoint, not a pooled connection. Logical replication requires a persistent connection and is not compatible with connection poolers. When copying your connection string from Neon, make sure it does not include `-pooler` in the hostname. For more information about connection pooling and when to use direct connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). For example, given a connection string like this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Enter the details in the Airbyte **Create a source** dialog as shown below.
Your values will differ. - **Host**: ep-cool-darkness-123456.us-east-2.aws.neon.tech - **Port**: 5432 - **Database Name**: dbname - **Username**: replication_user - **Password**: AbC123dEf 3. Under **Optional fields**, list the schemas you want to sync. Schema names are case-sensitive, and multiple schemas may be specified. By default, `public` is the only selected schema. 4. Select an SSL mode. You will most frequently choose `require` or `verify-ca`. Both of these options always require encryption. The `verify-ca` mode requires a certificate. Refer to [Connect securely](https://neon.com/docs/connect/connect-securely) for information about the location of certificate files you can use with Neon. 5. Under **Advanced**: - Select **Read Changes using Write-Ahead Log (CDC)** from available replication methods. - In the **Replication Slot** field, enter the name of the replication slot you created previously: `airbyte_slot`. - In the **Publication** field, enter the name of the publication you created previously: `airbyte_publication`. ## Allow inbound traffic If you are on Airbyte Cloud, and you are using Neon's **IP Allow** feature to limit IP addresses that can connect to Neon, you will need to allow inbound traffic from Airbyte's IP addresses. You can find a list of IPs that need to be allowlisted in the [Airbyte Security docs](https://docs.airbyte.com/operating-airbyte/security). For information about configuring allowed IPs in Neon, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). ## Complete the source setup To complete your source setup, click **Set up source** in the Airbyte UI. Airbyte will test the connection to your database. Once this succeeds, you've successfully configured an Airbyte Postgres source for your Neon database. ## Configure a destination To complete your data integration setup, you can now add one of Airbyte's many supported destinations, such as [Snowflake](https://neon.com/docs/guides/logical-replication-airbyte-snowflake), BigQuery, or Kafka, to name a few. After configuring a destination, you'll need to set up a connection between your Neon source database and your chosen destination. Refer to the Airbyte documentation for instructions: - [Add a destination](https://docs.airbyte.com/using-airbyte/getting-started/add-a-destination) - [Set up a connection](https://docs.airbyte.com/using-airbyte/getting-started/set-up-a-connection) ## References - [What is an ELT data pipeline?](https://airbyte.com/blog/elt-pipeline) - [Logical replication - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html) - [Publications - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-publication.html) --- # Source: https://neon.com/llms/guides-logical-replication-alloydb.txt # Replicate data from AlloyDB > The document outlines the process for setting up logical replication from AlloyDB to Neon, detailing the necessary configurations and steps to synchronize data between the two databases. ## Source - [Replicate data from AlloyDB HTML](https://neon.com/docs/guides/logical-replication-alloydb): The original HTML version of this documentation This guide describes how to replicate data from AlloyDB Postgres to Neon using native Postgres logical replication. 
The steps in this guide follow those described in [Set up native PostgreSQL logical replication](https://cloud.google.com/sql/docs/postgres/replication/configure-logical-replication#set-up-native-postgresql-logical-replication), in the _Google AlloyDB documentation_. ## Prerequisites - An AlloyDB Postgres instance containing the data you want to replicate. If you're just testing this out and need some data to play with, you can use the following statements to create a table with sample data. ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` - A Neon project with a Postgres database to receive the replicated data. For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin. - Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences. ## Prepare your AlloyDB source database This section describes how to prepare your source AlloyDB Postgres instance (the publisher) for replicating data to Neon. ### Enable logical replication Your first step is to enable logical replication at the source Postgres instance. In AlloyDB, you can enable logical replication by setting the `alloydb.enable_pglogical` and `alloydb.logical_decoding` flags to `on`. This sets the Postgres `wal_level` parameter to `logical`. To enable these flags: 1. In the Google Cloud console, navigate to your [AlloyDB Clusters](https://console.cloud.google.com/alloydb/clusters) page. 2. From the **Actions** menu for your Primary instance, select **Edit**. 3. Scroll down to the **Advanced Configuration Options** > **Flags** section. 4. If the flags have not been set on the instance before, click **Add a Database Flag**, and set the `alloydb.enable_pglogical` and `alloydb.logical_decoding` flags to `on`. 5. Click **Update instance** to save your changes and confirm your selections. Afterward, you can verify that logical replication is enabled by running `SHOW wal_level;` from **AlloyDB Studio** or your terminal. ### Allow connections from Neon You need to allow connections to your AlloyDB Postgres instance from Neon. To do this in your AlloyDB instance: 1. In the Google Cloud console, navigate to your [AlloyDB Clusters](https://console.cloud.google.com/alloydb/clusters) page and select your **Primary instance** to open the **Overview** page. 2. Scroll down to the **Instances in your cluster** section. 3. Click **Edit Primary**. 4. Select the **Enable public IP** checkbox to allow connections over the public internet. 5. Under **Authorized external networks**, enter the Neon IP addresses you want to allow. Add an entry for each of the NAT gateway IP addresses associated with your Neon project's region. Neon has 3 to 6 IP addresses per region, corresponding to each availability zone. See [NAT Gateway IP addresses](https://neon.com/docs/introduction/regions#nat-gateway-ip-addresses) for the IP addresses. **Note**: AlloyDB requires addresses to be specified in CIDR notation.
You can do so by appending `/32` to the NAT Gateway IP address; for example: `18.217.181.229/32`. 6. Under **Network Security**, select **Require SSL Encryption (default)** if it's not already selected. 7. Click **Update Instance** when you are finished. ### Note your public IP address Record the public IP address of your AlloyDB Postgres instance. You'll need this value later when you set up a subscription from your Neon database. You can find the public IP address on your AlloyDB instance's **Overview** page, under **Instances in your cluster** > **Connectivity**. **Note**: If you do not use a public IP address, you'll need to configure access via a private IP. See [Private IP overview](https://cloud.google.com/alloydb/docs/private-ip), in the AlloyDB documentation. ### Create a Postgres role for replication It is recommended that you create a dedicated Postgres role for replicating data from your AlloyDB Postgres instance. The role must have the `REPLICATION` privilege. On your AlloyDB Postgres instance, log in as your `postgres` user or an administrative user you use to create roles, and run the following command to create a replication role. You can replace the name `replication_user` with whatever name you want to use. ```sql CREATE USER replication_user WITH REPLICATION IN ROLE alloydbsuperuser LOGIN PASSWORD 'replication_user_password'; ``` ### Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to a Postgres role named `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ### Create a publication on the source database Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. To create a publication for a specific table: ```sql CREATE PUBLICATION my_publication FOR TABLE playing_with_neon; ``` To create a publication for multiple tables, provide a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` **Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`. For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation. ## Prepare your Neon destination database This section describes how to prepare your destination Neon Postgres database (the subscriber) to receive replicated data from your AlloyDB Postgres instance. ### Prepare your database schema When configuring logical replication in Postgres, the tables defined in your publication on the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database.
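However you create them, you can check that the source and destination tables line up by running the same query against both databases and comparing the output. A minimal sketch using the standard `information_schema` catalog, assuming your tables live in the `public` schema:

```sql
-- Run on both the source and destination databases; the results should match.
-- Assumes the replicated tables are in the public schema.
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```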
**Note**: If you're just using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); ``` #### Dump the schema To dump only the schema from a database, you can run a `pg_dump` command similar to the following to create an `.sql` dump file with the schema only: ```bash pg_dump --schema-only \ --no-privileges \ "postgresql://role:password@hostname:5432/dbname" \ > schema_dump.sql ``` - With the `--schema-only` option, only object definitions are dumped. Data is excluded. - The `--no-privileges` option prevents dumping privileges. Neon may not support the privileges you've defined elsewhere, or if dumping a schema from Neon, there may be Neon-specific privileges that cannot be restored to another database. #### Review and modify the dumped schema After dumping a schema to an `.sql` file, review it for statements that you don't want to replicate or that won't be supported on your destination database, and comment them out. For example, when dumping a schema from AlloyDB, you'll see the statements shown below, which you'll need to comment out because they won't be supported in Neon. Generally, you should remove any parameters configured on another Postgres provider and rely on Neon's default Postgres settings. If you are replicating a large dataset, also consider removing any `CREATE INDEX` statements from the resulting dump file to avoid creating indexes when loading the schema on the destination database (the subscriber). Taking indexes out of the equation can substantially reduce the time required for initial data load performed when starting logical replication. Save the `CREATE INDEX` statements that you remove. You can add the indexes back after the initial data copy is completed. **Note**: To comment out a single line, you can use `--` at the beginning of the line. ```sql -- SET statement_timeout = 0; -- SET lock_timeout = 0; -- SET idle_in_transaction_session_timeout = 0; -- SET client_encoding = 'UTF8'; -- SET standard_conforming_strings = on; -- SELECT pg_catalog.set_config('search_path', '', false); -- SET check_function_bodies = false; -- SET xmloption = content; -- SET client_min_messages = warning; -- SET row_security = off; -- ALTER SCHEMA public OWNER TO alloydbsuperuser; -- CREATE EXTENSION IF NOT EXISTS google_columnar_engine WITH SCHEMA public; -- CREATE EXTENSION IF NOT EXISTS google_db_advisor WITH SCHEMA public; ``` #### Load the schema After making any necessary modifications to the dump file, load the dumped schema using `psql`. **Tip**: When you're restoring on Neon, you can input your Neon connection string in place of `postgresql://role:password@hostname:5432/dbname`. You can find your database connection string by clicking the **Connect** button on your **Project Dashboard**. ```bash psql \ "postgresql://role:password@hostname:5432/dbname" \ < schema_dump.sql ``` After you've loaded the schema, you can view the result with this `psql` command: ```sql \dt ``` ### Create a subscription After creating a publication on the source database, you need to create a subscription on your Neon destination database. 1.
Create the subscription using a `CREATE SUBSCRIPTION` statement: ```sql CREATE SUBSCRIPTION my_subscription CONNECTION 'host=<primary_ip> port=5432 dbname=postgres user=replication_user password=replication_user_password' PUBLICATION my_publication; ``` - `subscription_name`: A name you chose for the subscription. - `connection_string`: The connection string for the source AlloyDB database where you defined the publication. For the `<primary_ip>` placeholder, use the IP address of your AlloyDB Postgres instance that you noted earlier, and specify the name and password of your replication role. If you're replicating from a database other than `postgres`, be sure to specify that database name. - `publication_name`: The name of the publication you created on the source AlloyDB database. 2. Verify the subscription was created by running the following command: ```sql SELECT * FROM pg_stat_subscription; ``` The subscription (`my_subscription`) should be listed, confirming that your subscription has been created successfully. ## Test the replication Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database. 1. Run some data-modifying queries on the source database (inserts, updates, or deletes). If you're using the `playing_with_neon` table, you can use this statement to insert 10 rows: ```sql INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` 2. Perform a row count on the source and destination databases to make sure the result matches. ```sql SELECT COUNT(*) FROM playing_with_neon; count ------- 30 (1 row) ``` Alternatively, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is as expected. For example, if you just ran an insert operation on the publisher, the `last_msg_receipt_time` should reflect the time of that operation. ```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` ## Switch over your application After the replication operation is complete, you can switch your application over to the destination database by swapping out your AlloyDB source database connection details for your Neon destination database connection details. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For details, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). --- # Source: https://neon.com/llms/guides-logical-replication-cloud-sql.txt # Replicate data from Cloud SQL Postgres > This document guides Neon users on setting up logical replication to replicate data from Cloud SQL Postgres to Neon, detailing the necessary configurations and steps involved in the process. ## Source - [Replicate data from Cloud SQL Postgres HTML](https://neon.com/docs/guides/logical-replication-cloud-sql): The original HTML version of this documentation This guide describes how to replicate data from Cloud SQL Postgres using native Postgres logical replication, as described in [Set up native PostgreSQL logical replication](https://cloud.google.com/sql/docs/postgres/replication/configure-logical-replication#set-up-native-postgresql-logical-replication), in the Google Cloud SQL documentation. ## Prerequisites - A Cloud SQL Postgres instance containing the data you want to replicate.
If you're just testing this out and need some data to play with, you can use the following statements to create a table with sample data. Your database and schema may differ. ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` - A Neon project with a Postgres database to receive the replicated data. For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin. - Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences. ## Prepare your Cloud SQL source database This section describes how to prepare your source Cloud SQL Postgres instance (the publisher) for replicating data to Neon. ### Enable logical replication The first step is to enable logical replication on the source Postgres instance. In Cloud SQL, you can enable logical replication for your Postgres instance by setting the `cloudsql.logical_decoding` flag to `on`. This action will set the Postgres `wal_level` parameter to `logical`. To enable this flag: 1. In the Google Cloud console, select the project that contains the Cloud SQL instance for which you want to set a database flag. 2. Open the instance and click **Edit**. 3. Scroll down to the **Flags** section. 4. If this flag has not been set on the instance before, click **Add item**, choose the flag from the drop-down menu, and set its value to `On`. 5. Click **Save** to save your changes. 6. Confirm your changes under **Flags** on the **Overview** page. The change requires restarting the instance. Afterward, you can verify that logical replication is enabled by running `SHOW wal_level;` from **Cloud SQL Studio** or your terminal. ### Allow connections from Neon You need to allow connections to your Cloud SQL Postgres instance from Neon. To do this in Google Cloud: 1. In the Google Cloud console, go to the Cloud SQL Instances page. 1. Open the **Overview** page of your instance by clicking the instance name. 1. From the SQL navigation menu, select **Connections**. 1. Click the **Networking** tab. 1. Select the **Public IP** checkbox. 1. Click **Add network**. 1. Optionally, in the **Name** field, enter a name for this network. 1. In the **Network** field, enter the IP address from which you want to allow connections. You will need to perform this step for each of the NAT gateway IP addresses associated with your Neon project's region. Neon uses 3 to 6 IP addresses per region for this outbound communication, corresponding to each availability zone in the region. See [NAT Gateway IP addresses](https://neon.com/docs/introduction/regions#nat-gateway-ip-addresses) for Neon's NAT gateway IP addresses. **Note**: Cloud SQL requires addresses to be specified in CIDR notation. You can do so by appending `/32` to the NAT Gateway IP address; for example: `18.217.181.229/32`. In the example shown below, you can see that three addresses were added, named `Neon1`, `Neon2`, and `Neon3`. You can name them whatever you like. The addresses were added in CIDR format by adding `/32`. 1. Click **Done** after adding a Network entry. 1.
Click **Save** when you are finished adding Network entries for all of your Neon project's NAT Gateway IP addresses. **Note**: You can specify a single Network entry using `0.0.0.0/0` to allow traffic from any IP address. However, this configuration is not considered secure and will trigger a warning. ### Note your public IP address Record the public IP address of your Cloud SQL Postgres instance. You'll need this value later when you set up a subscription from your Neon database. You can find the public IP address on your Cloud SQL instance's **Overview** page. **Note**: If you do not use a public IP address, you'll need to configure access via a private IP. Refer to the [Cloud SQL documentation](https://cloud.google.com/sql/docs/postgres/private-ip). ### Create a Postgres role for replication It is recommended that you create a dedicated Postgres role for replicating data from your Cloud SQL Postgres instance. The role must have the `REPLICATION` privilege. On your Cloud SQL Postgres instance, log in as your `postgres` user or an administrative user you use to create roles and run the following command to create a replication role. You can replace the name `replication_user` with whatever role name you want to use. ```sql CREATE USER replication_user WITH REPLICATION IN ROLE cloudsqlsuperuser LOGIN PASSWORD 'replication_user_password'; ``` ### Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to a Postgres role named `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ### Create a publication on the source database Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. To create a publication for a specific table: ```sql CREATE PUBLICATION my_publication FOR TABLE playing_with_neon; ``` To create a publication for multiple tables, provide a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` **Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`. For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation. ## Prepare your Neon destination database This section describes how to prepare your Neon Postgres destination database (the subscriber) to receive replicated data from your Cloud SQL Postgres instance. ### Prepare your database schema When configuring logical replication in Postgres, the tables in the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database. See [Import a database schema](https://neon.com/docs/import/import-schema-only) for instructions.
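As a quick sanity check after creating or loading the schema on the destination, you can confirm that every table you plan to replicate exists there. A minimal query, assuming your tables are in the `public` schema:

```sql
-- List the tables present on the destination database
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;
```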
If you're using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); ``` ### Create a subscription After creating a publication on the source database, you need to create a subscription on your Neon destination database. 1. Create the subscription using a `CREATE SUBSCRIPTION` statement: ```sql CREATE SUBSCRIPTION my_subscription CONNECTION 'host=<primary_ip> port=5432 dbname=postgres user=replication_user password=replication_user_password' PUBLICATION my_publication; ``` - `subscription_name`: A name you chose for the subscription. - `connection_string`: The connection string for the source Cloud SQL database where you defined the publication. For the `<primary_ip>` placeholder, use the IP address of your Cloud SQL Postgres instance that you noted earlier, and specify the name and password of your replication role. If you're replicating from a database other than `postgres`, be sure to specify that database name. - `publication_name`: The name of the publication you created on the source Cloud SQL database. 2. Verify the subscription was created by running the following command: ```sql SELECT * FROM pg_stat_subscription; ``` The subscription (`my_subscription`) should be listed, confirming that your subscription has been created successfully. ## Test the replication Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database. 1. Run some data-modifying queries on the source database (inserts, updates, or deletes). If you're using the `playing_with_neon` table, you can use this statement to insert some rows: ```sql INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` 2. Perform a row count on the source and destination databases to make sure the result matches. ```sql SELECT COUNT(*) FROM playing_with_neon; count ------- 30 (1 row) ``` Alternatively, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is as expected. For example, if you just ran an insert operation on the publisher, the `last_msg_receipt_time` should reflect the time of that operation. ```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` ## Switch over your application After the replication operation is complete, you can switch your application over to the destination database by swapping out your Cloud SQL source database connection details for your Neon destination database connection details. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For details, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). --- # Source: https://neon.com/llms/guides-logical-replication-concepts.txt # Postgres logical replication concepts > The document outlines the concepts of Postgres logical replication, detailing its architecture, components, and processes, specifically for Neon users to understand and implement logical replication within their database systems.
## Source - [Postgres logical replication concepts HTML](https://neon.com/docs/guides/logical-replication-concepts): The original HTML version of this documentation Logical Replication is a method of replicating data between databases or between your database and other data services or platforms. It differs from physical replication in that it replicates transactional changes rather than copying the entire database byte-for-byte. This approach allows for selective replication, where users can choose specific tables or rows for replication. It works by capturing DML operations in the source database and applying these changes to the target, which could be another Postgres database or data platform. With logical replication, you can copy some or all of your data to a different location and continue sending updates from your source database in real-time, allowing you to maintain up-to-date copies of your data in different locations. **Note**: For step-by-step setup instructions, refer to our [logical replication guides](https://neon.com/docs/guides/logical-replication-guide). ## Publisher subscriber model The Postgres logical replication architecture is very simple. It uses a _publisher and subscriber_ model for data replication. The primary data source is the _publisher_, and the database or platform receiving the data is the _subscriber_. On the initial connection from a subscriber, all the data is copied from the publisher to the subscriber. After the initial copy operation, any changes made on the publisher are sent to the subscriber. You can read more about this model in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html). ## Enabling logical replication In Neon, you can enable logical replication from the Neon Console. This is only necessary if your Neon Postgres instance is acting as a publisher, replicating data to another Postgres instance, data service, or platform. To enable logical replication: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Replication**. 4. Click **Enable**. You can verify that logical replication is enabled by running the following query: ```sql SHOW wal_level; wal_level ----------- logical ``` Enabling logical replication turns on detailed logging, which is required to support the replication process. ## Publications The Postgres documentation describes a [publication](https://www.postgresql.org/docs/current/logical-replication-publication.html) as a group of tables whose data changes are intended to be replicated through logical replication. It also describes a publication as a set of changes generated from a table or a group of tables. It's indeed both of these things. A particular table can be included in multiple publications if necessary. Currently, publications can only include tables within a single schema. This is a Postgres limitation. Publications can specify the types of changes they replicate, which can include `INSERT`, `UPDATE`, `DELETE`, and `TRUNCATE` operations. By default, publications replicate all of these operation types. You can create a publication for one or more tables on the "publisher" database using [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html) syntax. For example, this command creates a publication named `users_publication` that tracks changes made to a `users` table.
```sql CREATE PUBLICATION users_publication FOR TABLE users; ``` ## Subscriptions A subscription represents the downstream side of logical replication. Data is replicated _to_ a subscriber. A subscription establishes a connection to the publisher and identifies the publication it intends to subscribe to. A single subscriber can maintain multiple subscriptions, including multiple subscriptions to the same publisher. You can create a subscription on a "subscriber" database or platform using [CREATE SUBSCRIPTION](https://www.postgresql.org/docs/current/sql-createsubscription.html) syntax. Building on the `users_publication` example above, here's how you would create a subscription: ```sql CREATE SUBSCRIPTION users_subscription CONNECTION 'postgresql://username:password@host:port/dbname' PUBLICATION users_publication; ``` A subscription requires a unique name, a database connection string, the name and password of your replication role, and the name of the publication it subscribes to. ## How does it work under the hood? While the publisher and subscriber model forms the surface of Postgres logical replication, the underlying mechanism is driven by a few key components, described below. ### Write-Ahead Log (WAL) The WAL is central to Postgres's data durability and crash recovery mechanisms. In the context of logical replication, the WAL records all changes to your data. For logical replication, the WAL serves as the primary source of data that needs to be replicated. It's the transaction data captured in the WAL that's processed and then relayed from a publisher to a subscriber. ### Replication slots Replication slots on the publisher database track replication progress, ensuring that no data in the WAL is purged before the subscriber has successfully replicated it. This mechanism helps maintain data consistency and prevent data loss in cases of network interruption or subscriber downtime. Replication slots are typically created automatically with new subscriptions, but they can be created manually using the `pg_create_logical_replication_slot` function. Some "subscriber" data services and platforms require that you create a dedicated replication slot. This is accomplished using the following syntax: ```sql SELECT pg_create_logical_replication_slot('my_replication_slot', 'pgoutput'); ``` The first value, `my_replication_slot`, is the name given to the replication slot. The second value is the decoder plugin the slot should use. Decoder plugins are discussed below. The `max_replication_slots` configuration parameter defines the maximum number of replication slots that can be used to manage database replication connections. Each replication slot tracks changes in the publisher database to ensure that the connected subscriber stays up to date. You'll want a replication slot for each replication connection. For example, if you expect to have 10 separate subscribers replicating from your database, you would set `max_replication_slots` to 10 to accommodate each connection. The `max_replication_slots` configuration parameter on Neon is set to `10` by default. ```ini max_replication_slots = 10 ``` **Important**: To prevent storage bloat, **Neon automatically removes _inactive_ replication slots after a period of time if there are other _active_ replication slots**. If you have or intend on having more than one replication slot, please see [Unused replication slots](https://neon.com/docs/guides/logical-replication-neon#unused-replication-slots) to learn more.
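To see which replication slots exist on the publisher and whether each one is active, you can query the `pg_replication_slots` system view. A minimal example:

```sql
-- Inactive slots (active = false) hold back WAL removal,
-- which is the storage bloat scenario described above
SELECT slot_name, plugin, slot_type, active, restart_lsn
FROM pg_replication_slots;
```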
### Decoder plugins The Postgres replication architecture uses decoder plugins to decode WAL entries into a logical replication stream, making the data understandable for the subscriber. The default decoder plugin for PostgreSQL logical replication is `pgoutput`, and it's included in Postgres by default. You don't need to install it. Neon supports an alternative decoder plugin called `wal2json`. This decoder plugin differs from `pgoutput` in that it converts WAL data into `JSON` format, which is useful for integrating Postgres with systems and applications that work with `JSON` data. To use this decoder plugin, you'll need to create a dedicated replication slot for it, as shown here: ```sql SELECT pg_create_logical_replication_slot('my_replication_slot', 'wal2json'); ``` For more information about this alternative decoder plugin and how to use it, see [wal2json](https://github.com/eulerto/wal2json). ### WAL senders WAL senders are processes on the publisher database that read the WAL and send the relevant data to the subscriber. The `max_wal_senders` parameter defines the maximum number of concurrent WAL sender processes that are responsible for streaming WAL data to subscribers. In most cases, you should have one WAL sender process for each subscriber or replication slot to ensure efficient and consistent data replication. The `max_wal_senders` configuration parameter on Neon is set to `10` by default, which matches the maximum number of replication slots defined by the `max_replication_slots` setting. ```ini max_wal_senders = 10 ``` ### WAL receivers On the subscriber side, WAL receivers receive the replication stream (the decoded WAL data), and apply these changes to the subscriber. The number of WAL receivers is determined by the number of connections made by subscribers. ## References - [Logical replication - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html) - [Publications - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-publication.html) - [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html) - [CREATE SUBSCRIPTION](https://www.postgresql.org/docs/current/sql-createsubscription.html) - [wal2json](https://github.com/eulerto/wal2json) --- # Source: https://neon.com/llms/guides-logical-replication-decodable.txt # Replicate data with Decodable > The document outlines the process for using Decodable to replicate data from Neon databases, detailing the steps required to set up logical replication and integrate with Decodable's streaming platform. ## Source - [Replicate data with Decodable HTML](https://neon.com/docs/guides/logical-replication-decodable): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations. [Decodable](https://www.decodable.co/) is a fully managed platform for ETL, ELT, and stream processing, powered by Apache Flink® and Debezium. In this guide, you will learn how to configure a Postgres source connector in Decodable for ingesting changes from your Neon database so that you can replicate data from Neon to any of Decodable's [supported data sinks](https://docs.decodable.co/connections.html#sinks), optionally processing the data with SQL or custom Flink jobs.
## Prerequisites - A [Decodable account](https://www.decodable.co/) ([start free](https://app.decodable.co/-/accounts/create), no credit card required) - A [Neon account](https://console.neon.tech/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Enable logical replication in Neon **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ## Create a Postgres role for replication It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Tab: CLI The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://api-docs.neon.tech/reference/createprojectbranchrole) ```bash neon roles create --name replication_user ``` Tab: Console To create a role in the Neon Console: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. Select the **Roles & Databases** tab. 6. Click **Add Role**. 7. In the role creation dialog, specify a role name. 8. Click **Create**. The role is created, and you are provided with the password for the role. Tab: API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://neon.com/docs/reference/cli-roles). ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "replication_user" } }' | jq ``` ## Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. 
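If you want to verify that the grants took effect, you can query `information_schema.table_privileges` for the role. A minimal check, assuming the `replication_user` role and `public` schema used above:

```sql
-- Each table should show a SELECT privilege for the replication role
SELECT grantee, table_name, privilege_type
FROM information_schema.table_privileges
WHERE grantee = 'replication_user'
  AND table_schema = 'public';
```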
## Create a publication For each table you would like to ingest into Decodable, set its [replica identity](https://www.postgresql.org/docs/current/logical-replication-publication.html) to `FULL`. To do so, issue the following statement in the **Neon SQL Editor**: ```sql ALTER TABLE <table_name> REPLICA IDENTITY FULL; ``` Next, create a [publication](https://www.postgresql.org/docs/current/sql-createpublication.html) with the name `dbz_publication`. Include all the tables you would like to ingest into Decodable. ```sql CREATE PUBLICATION dbz_publication FOR TABLE <table1>, <table2>; ``` Refer to the [Postgres docs](https://www.postgresql.org/docs/current/sql-alterpublication.html) if you need to add or remove tables from your publication. Upon start-up, the Decodable connector for Postgres will automatically create the [replication slot](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS) required for ingesting data change events from Postgres. The slot's name will be prefixed with `decodable_`, followed by a unique identifier. ## Allow inbound traffic If you are using Neon's **IP Allow** feature to limit the IP addresses that can connect to Neon, you will need to allow inbound traffic from Decodable's IP addresses. Refer to the [Decodable documentation](https://docs.decodable.co/reference/regions-and-ip-addresses.html#ip-addresses) for the list of IPs that need to be allowlisted for the Decodable region of your account. For information about configuring allowed IPs in Neon, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). ## Create a Postgres source connector in Decodable 1. In the Decodable web UI, select **Connections** from the left navigation bar and click **New Connection**. 2. In the connector catalog, choose **Postgres CDC** and click **Connect**. 3. Enter the connection details for your Neon database. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. **Important**: Use a **direct connection** to your compute endpoint, not a pooled connection. Logical replication requires a persistent connection and is not compatible with connection poolers. When copying your connection string from Neon, make sure it does not include `-pooler` in the hostname. For more information about connection pooling and when to use direct connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). Your connection string will look like this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Enter the details for **your connection string** into the source connector fields. Based on the sample connection string above, the values would be specified as shown below. Your values will differ. - **Connection Type**: Source (the default) - **Host**: ep-cool-darkness-123456.us-east-2.aws.neon.tech - **Port**: 5432 - **Database**: dbname - **Username**: alex - **Password**: Click **Add a new secret...**, then specify a name for that secret and `AbC123dEf` as its value - **Decoding Plugin Name**: pgoutput (the default) 4. Click **Next**. Decodable will now scan the source database for all the tables that can be replicated. Select one or more table(s) by checking the **Sync** box next to their name. Optionally, you can change the name of the destination stream for each table, which by default will be in the form of `____`.
You can also take a look at the schema of each stream by clicking **View Schema**. 5. Click **Next** and specify a name for your connection, for instance: `neon-source`. 6. Click **Create and start**. The default start options in the following dialog don't require any changes, so click **Start** to launch the connector. ## Previewing the data Once the connector is in the **Running** state, navigate to the connected Decodable stream, via **Outbound to...** on the connector's overview tab. By clicking **Run Preview**, you can examine the change events ingested by the connector. ## Next steps At this point, you have a running connector, which continuously ingests changes from a Neon database into Decodable with low latency. Next, you could set up one of the supported Decodable **sink connectors** which will propagate the data to a wide range of data stores and systems, such as Snowflake, Elasticsearch, Apache Kafka, Apache Iceberg, Amazon S3, and many more. If needed, you can also add a **processing step**, either using SQL or by deploying your own Apache Flink job, for instance, to filter and transform the data before propagating it to an external system. Of course, you can also take your processed data back to another Neon database, using the Decodable sink connector for Postgres. ## References - [Decodable: The Pragmatic Approach to Data Movement](https://www.decodable.co/blog/pragmatic-approach-to-data-movement) - [Getting Started With Decodable](https://docs.decodable.co/welcome.html) - [Connecting Decodable to Sources and Destinations](https://docs.decodable.co/connections.html) - [About Decodable Pipelines](https://docs.decodable.co/pipelines.html) - [Postgres Documentation: Logical Replication](https://www.postgresql.org/docs/current/logical-replication.html) --- # Source: https://neon.com/llms/guides-logical-replication-estuary-flow.txt # Replicate Data with Estuary Flow > The document guides Neon users on setting up data replication using Estuary Flow, detailing the steps to configure logical replication from a Neon database to other data systems. ## Source - [Replicate Data with Estuary Flow HTML](https://neon.com/docs/guides/logical-replication-estuary-flow): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations. [Estuary Flow](https://estuary.dev/) is a real-time data streaming platform that allows you to connect, transform, and move data from various sources to destinations with sub-100ms latency. In this guide, you will learn how to configure a Postgres source connector in Estuary Flow for ingesting changes from your Neon database, enabling you to replicate data from Neon to any of Estuary Flow's [supported destinations](https://docs.estuary.dev/reference/Connectors/materialization-connectors/#available-materialization-connectors), with optional transformations along the way. ## Prerequisites - An [Estuary Flow account](https://dashboard.estuary.dev/register) (start free, no credit card required) - A [Neon account](https://console.neon.tech/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Enable Logical Replication in Neon **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project.
Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://docs.neon.tech/docs/query-with-neon-sql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ## Create a Postgres Role for Replication It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon Console, CLI, or API are granted membership in the [neon_superuser](https://docs.neon.tech/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Tab: CLI The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://api-docs.neon.tech/reference/createprojectbranchrole) ```bash neon roles create --name cdc_role ``` Tab: Console To create a role in the Neon Console: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. Select the **Roles & Databases** tab. 6. Click **Add Role**. 7. In the role creation dialog, specify a role name (e.g., `cdc_role`). 8. Click **Create**. The role is created, and you are provided with the password for the role. Tab: API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://neon.com/docs/reference/cli-roles). ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "cdc_role" } }' | jq ``` ## Grant Schema Access to Your Postgres Role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. Run these commands for each schema: ```sql GRANT USAGE ON SCHEMA public TO cdc_role; GRANT SELECT ON ALL TABLES IN SCHEMA public TO cdc_role; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO cdc_role; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ## Create a Publication Create a [publication](https://www.postgresql.org/docs/current/sql-createpublication.html) with the name `estuary_publication`. Include all the tables you would like to ingest into Estuary Flow. ```sql CREATE PUBLICATION estuary_publication FOR TABLE <table1>, <table2>; ``` Refer to the [Postgres docs](https://www.postgresql.org/docs/current/sql-alterpublication.html) if you need to add or remove tables from your publication. Upon startup, the Estuary Flow connector for Postgres will automatically create the [replication slot](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS) required for ingesting data change events from Postgres.
The slot's name will be prefixed with `estuary_`, followed by a unique identifier. ## Allow Inbound Traffic If you are using Neon's **IP Allow** feature to limit the IP addresses that can connect to Neon, you will need to allow inbound traffic from Estuary Flow's IP addresses. Refer to the [Estuary Flow documentation](https://docs.estuary.dev/reference/allow-ip-addresses/#ip-addresses-to-allowlist) for the list of IPs that need to be allowlisted for the Estuary Flow region of your account. For information about configuring allowed IPs in Neon, see [Configure IP Allow](https://docs.neon.tech/docs/manage/projects#configure-ip-allow). ## Create a Postgres Source Connector in Estuary Flow 1. In the Estuary Flow web UI, select **Sources** from the left navigation bar and click **New Capture**. 2. In the connector catalog, choose **Neon PostgreSQL** and click **Connect**. 3. Enter the connection details for your Neon database. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. **Important**: Use a **direct connection** to your compute endpoint, not a pooled connection. Logical replication requires a persistent connection and is not compatible with connection poolers. When copying your connection string from Neon, make sure it does not include `-pooler` in the hostname. For more information about connection pooling and when to use direct connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). Your connection string will look like this: ```bash postgres://cdc_role:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Enter the details for **your connection string** into the source connector fields. Based on the sample connection string above, the values would be specified as shown below. Your values will differ. - **Name**: Name of the Capture connector - **Server Address**: ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432 - **User**: cdc_role - **Password**: Click **Add a new secret...**, then specify a name for that secret and `AbC123dEf` as its value - **Database**: dbname 4. Click **Next**. Estuary Flow will now scan the source database for all the tables that can be replicated. Select one or more tables by checking the checkbox next to their name. Optionally, you can change the destination name for each table. You can also take a look at the schema of each stream by clicking on the **Collection** tab. 5. Click **Save and Publish** to provision the connector and kick off the automated backfill process. ## Previewing the Data Once the connector is up and running, navigate to the Collections page in the Estuary Flow dashboard and click on the collection being filled by your capture. --- # Source: https://neon.com/llms/guides-logical-replication-fivetran.txt # Replicate data with Fivetran > The document outlines the process for setting up logical replication of data from Neon to Fivetran, detailing the necessary configurations and steps to enable seamless data integration between the two platforms. ## Source - [Replicate data with Fivetran HTML](https://neon.com/docs/guides/logical-replication-fivetran): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations.
[Fivetran](https://fivetran.com/) is an automated data movement platform that helps you centralize data from disparate sources, which you can then manage directly from your browser. Fivetran extracts your data and loads it into your data destination. In this guide, you will learn how to define a Neon Postgres database as a data source in Fivetran so that you can replicate data to one or more of Fivetran's supported destinations. ## Prerequisites - A [Fivetran account](https://fivetran.com/) - A [Neon account](https://console.neon.tech/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Enable logical replication in Neon **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be temporarily dropped before automatically reconnecting. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ## Create a Postgres role for replication It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Tab: CLI The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://api-docs.neon.tech/reference/createprojectbranchrole) ```bash neon roles create --name replication_user ``` Tab: Console To create a role in the Neon Console: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. Select the **Roles & Databases** tab. 6. Click **Add Role**. 7. In the role creation dialog, specify a role name. 8. Click **Create**. The role is created, and you are provided with the password for the role. Tab: API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://neon.com/docs/reference/cli-roles). ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "replication_user" } }' | jq ``` ## Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. 
For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ## Create a publication Create the Postgres publication. Include all tables you want to replicate as part of the publication: ```sql CREATE PUBLICATION fivetran_pub FOR TABLE <table1>, <table2>; ``` The publication name is customizable. Refer to the [Postgres docs](https://www.postgresql.org/docs/current/logical-replication-publication.html) if you need to add or remove tables from your publication. ## Create a replication slot Fivetran requires a dedicated replication slot. Only one source should be configured to use this replication slot. Fivetran uses the `pgoutput` plugin in Postgres for decoding WAL changes into a logical replication stream. To create a replication slot called `fivetran_pgoutput_slot` that uses the `pgoutput` plugin, run the following command on your database using your replication role: ```sql SELECT pg_create_logical_replication_slot('fivetran_pgoutput_slot', 'pgoutput'); ``` The name assigned to the replication slot is `fivetran_pgoutput_slot`. You will need to provide this name when you set up your Fivetran source. ## Create a Postgres source in Fivetran 1. Log in to your [Fivetran](https://fivetran.com/) account. 1. On the **Select your datasource** page, search for the **PostgreSQL** source and click **Set up**. 1. In your connector setup form, enter a value for **Destination Schema Prefix**. This prefix applies to each replicated schema and cannot be changed once your connector is created. In this example, we'll use `neon` as the prefix. 1. Enter the connection details for your Neon database. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. **Important**: Use a **direct connection** to your compute endpoint, not a pooled connection. Logical replication requires a persistent connection and is not compatible with connection poolers. When copying your connection string from Neon, make sure it does not include `-pooler` in the hostname. For more information about connection pooling and when to use direct connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). For example, let's say this is your connection string: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` From this string, the values in the Fivetran **Create a source** dialog would show as below. Your actual values will differ, with the exception of the port number. - **Host**: ep-cool-darkness-123456.us-east-2.aws.neon.tech - **Port**: 5432 - **Username**: alex - **Password**: AbC123dEf - **Database Name**: dbname 1. For **Connection Method**, select **Logical replication of the WAL using the pgoutput plugin** and enter values for the **Replication Slot** and **Publication Name**. You defined these values earlier (`fivetran_pgoutput_slot` and `fivetran_pub`, respectively). 1. If you are using Neon's **IP Allow** feature to limit IP addresses that can connect to Neon, add Fivetran's IPs to your allowlist in Neon.
For instructions, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). You'll need to do this before you can validate your connection in the next step. If you are not using Neon's **IP Allow** feature, you can skip this step. 1. Click **Save & Test**. Fivetran tests and validates the connection to your database. Upon successful completion of the setup tests, you can sync your data using Fivetran. During the test, Fivetran asks you to confirm the certificate chain by selecting the certificate to use as the trust anchor. Select the `CN=ISRG Root X1, O=Internet Security Research Group, C=US` option. This certificate is valid until 2035-06-04. When the connection test is completed, you should see an **All connection tests passed!** message in Fivetran, as shown below: 1. Click **Continue**. 1. On the **Select Data to Sync** page, review the connector schema and select any columns you want to block or hash. 1. Click **Save & Continue**. 1. On the **How would you like to handle changes?** page, specify how you would like to handle future schema changes. For this example, we'll select **We will allow all new schemas, tables and columns**. Choose the option that best fits your organization's requirements. 1. Click **Continue**. Your data is now ready to sync. 1. Click **Start Initial Sync** to enable syncing. ## References - [Fivetran Generic PostgreSQL Setup Guide](https://fivetran.com/docs/databases/postgresql/setup-guide) --- # Source: https://neon.com/llms/guides-logical-replication-guide.txt # Get started with logical replication > The document outlines the steps for setting up logical replication in Neon, detailing how to configure a publisher and subscriber to replicate data changes between databases. ## Source - [Get started with logical replication HTML](https://neon.com/docs/guides/logical-replication-guide): The original HTML version of this documentation Neon's logical replication feature, available to all Neon users, allows you to replicate data to and from your Neon Postgres database: - Stream data from your Neon database to external destinations, enabling Change Data Capture (CDC) and real-time analytics. External destinations might include data warehouses, analytical database services, real-time stream processing systems, messaging and event-streaming platforms, and external Postgres databases, among others. See [Replicate data from Neon](https://neon.com/docs/guides/logical-replication-guide#replicate-data-from-neon). - Perform live migrations to Neon from external sources such as AWS RDS and Google Cloud SQL — or any platform that runs Postgres. See [Replicate data to Neon](https://neon.com/docs/guides/logical-replication-guide#replicate-data-to-neon). - Replicate data from one Neon project to another for Neon project, account, Postgres version, or region migration. See [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon). Logical replication in Neon works like it does on any standard Postgres installation. It uses a publisher-subscriber model to replicate data from the source database to the destination database. Neon can act as a publisher or subscriber. Replication starts by copying a snapshot of the data from the publisher to the subscriber. Once this is done, subsequent changes are sent to the subscriber as they occur in real-time.
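At its core, the model comes down to two statements: a `CREATE PUBLICATION` on the source and a `CREATE SUBSCRIPTION` on the destination. Here's a minimal sketch; the table name `orders`, the role, and the connection details are placeholders, not values from your project:

```sql
-- On the publisher (source) database: define what to replicate
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- On the subscriber (destination) database: connect to the publisher
-- and start replicating; the initial data copy happens automatically
CREATE SUBSCRIPTION orders_sub
  CONNECTION 'postgresql://replicator:password@source-host:5432/dbname'
  PUBLICATION orders_pub;
```

To learn more about Postgres logical replication, see the following topics.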
## Learn about logical replication - [Logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts): Learn about Postgres logical replication concepts - [Logical replication commands](https://neon.com/docs/guides/logical-replication-manage): Commands for managing your logical replication configuration - [Logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon): Information about logical replication specific to Neon - [Managing schema changes](https://neon.com/docs/guides/logical-replication-schema-changes): Learn about managing schema changes in a logical replication setup To get started, jump into one of our step-by-step logical replication guides. ## Replicate data from Neon - [Airbyte](https://neon.com/docs/guides/logical-replication-airbyte): Replicate data from Neon with Airbyte - [Bemi](https://neon.com/docs/guides/bemi): Create an automatic audit trail with Bemi - [ClickHouse](https://docs.peerdb.io/mirror/cdc-neon-clickhouse): Change Data Capture from Neon to ClickHouse with PeerDB (PeerDB docs) - [Confluent (Kafka)](https://neon.com/docs/guides/logical-replication-kafka-confluent): Replicate data from Neon with Confluent (Kafka) - [Decodable](https://neon.com/docs/guides/logical-replication-decodable): Replicate data from Neon with Decodable - [Estuary Flow](https://neon.com/docs/guides/logical-replication-estuary-flow): Replicate data from Neon with Estuary Flow - [Fivetran](https://neon.com/docs/guides/logical-replication-fivetran): Replicate data from Neon with Fivetran - [Materialize](https://neon.com/docs/guides/logical-replication-materialize): Replicate data from Neon to Materialize - [Neon to Neon](https://neon.com/docs/guides/logical-replication-neon-to-neon): Replicate data from Neon to Neon - [Neon to PostgreSQL](https://neon.com/docs/guides/logical-replication-postgres): Replicate data from Neon to PostgreSQL - [Prisma Pulse](https://neon.com/docs/guides/logical-replication-prisma-pulse): Stream database changes in real-time with Prisma Pulse - [Sequin](https://neon.com/docs/guides/sequin): Stream data from platforms like Stripe, Linear, and GitHub to Neon - [Snowflake](https://neon.com/docs/guides/logical-replication-airbyte-snowflake): Replicate data from Neon to Snowflake with Airbyte - [Inngest](https://neon.com/docs/guides/logical-replication-inngest): Replicate data from Neon to Inngest ## Replicate data to Neon - [AlloyDB](https://neon.com/docs/guides/logical-replication-alloydb): Replicate data from AlloyDB to Neon - [Azure PostgreSQL](https://neon.com/docs/import/migrate-from-azure-postgres): Replicate data from Azure PostgreSQL to Neon - [Cloud SQL](https://neon.com/docs/guides/logical-replication-cloud-sql): Replicate data from Cloud SQL to Neon - [Neon to Neon](https://neon.com/docs/guides/logical-replication-neon-to-neon): Replicate data from Neon to Neon - [PostgreSQL to Neon](https://neon.com/docs/guides/logical-replication-postgres-to-neon): Replicate data from PostgreSQL to Neon - [RDS](https://neon.com/docs/guides/logical-replication-rds-to-neon): Replicate data from AWS RDS PostgreSQL to Neon - [Supabase](https://neon.com/docs/guides/logical-replication-supabase-to-neon): Replicate data from Supabase to Neon --- # Source: https://neon.com/llms/guides-logical-replication-inngest.txt # Replicate data with Inngest > The document outlines the process for using Inngest to replicate data in Neon, detailing the steps for setting up logical replication to ensure data consistency and availability across 
different environments. ## Source - [Replicate data with Inngest HTML](https://neon.com/docs/guides/logical-replication-inngest): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations. [Inngest](https://www.inngest.com?utm_source=neon&utm_medium=logical-replication-guide) is a durable workflow platform that allows you to trigger workflows based on Neon database changes. With its native Neon integration, it is the easiest way to set up data replication with custom transformations or third-party API destinations (for example, Neon to Amplitude, Neon to S3). In this guide, you will learn how to configure your Inngest account for ingesting changes from your Neon database, enabling you to replicate data from Neon to Inngest workflows. ## Prerequisites - An [Inngest account](https://www.inngest.com?utm_source=neon&utm_medium=logical-replication-guide) - A [Neon account](https://console.neon.tech/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Enabling Logical Replication on your database The Inngest Integration relies on Neon's Logical Replication feature to get notified upon database changes. Navigate to your Neon Project using the Neon Console and open the **Settings** > **Logical Replication** page. From here, follow the instructions to enable Logical Replication. ## Configuring the Inngest integration Your Neon database is now ready to work with Inngest. To configure the Inngest Neon Integration, navigate to the Inngest Platform, open the [Integrations page](https://app.inngest.com/settings/integrations?utm_source=neon&utm_medium=trigger-serverless-functions-guide), and follow the instructions of the [Neon Integration installation wizard](https://app.inngest.com/settings/integrations/neon/connect?utm_source=neon&utm_medium=trigger-serverless-functions-guide). The Inngest Integration requires Postgres admin credentials to complete its setup. _These credentials are not stored and are only used during the installation process_. You can find your admin Neon database connection credentials by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For details, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ## Example: Replicating data to Amplitude The example below demonstrates how to replicate new `users` table rows to Amplitude using Amplitude's API. Once the Inngest integration is installed, a flow of `"db/*"` events will be created when updates are made to your database.
For example, if you create a new user in your database, a `"db/users.updated"` [event](https://www.inngest.com/docs/features/events-triggers?utm_source=neon&utm_medium=logical-replication-guide) will be created:

```json
{
  "name": "db/users.updated",
  "data": {
    "new": {
      "id": { "data": 2, "encoding": "i" },
      "name": { "data": "Charly", "encoding": "t" },
      "email": { "data": "charly@inngest.com", "encoding": "t" }
    },
    "table": "users",
    "txn_commit_time": "2024-09-24T14:41:19.75149Z",
    "txn_id": 36530520
  },
  "ts": 1727146545006
}
```

Such events can be used to trigger Inngest functions that transform and replicate data to external destinations like Amplitude:

```typescript
// inngest/functions/users-replication.ts
import { inngest } from './client';

export const updateAmplitudeUserMapping = inngest.createFunction(
  { id: 'update-amplitude-user-mapping' },
  { event: 'db/users.updated' },
  async ({ event, step }) => {
    // Extract the user data from the event.
    // Each column arrives as a { data, encoding } object, so unwrap the values.
    const { id, email } = event.data.new;
    const mapping = JSON.stringify([{ user_id: String(id.data), global_user_id: email.data }]);

    // Update the user mapping in Amplitude
    await step.run('update-amplitude-user-mapping', async () => {
      const response = await fetch(
        `https://api.amplitude.com/usermap?mapping=${encodeURIComponent(mapping)}&api_key=${process.env.AMPLITUDE_API_KEY}`
      );
      if (!response.ok) {
        throw new Error(`Failed to send user data to Amplitude: ${response.statusText}`);
      }
      return response.json();
    });

    return { success: true };
  }
);
```

---

# Source: https://neon.com/llms/guides-logical-replication-kafka-confluent.txt

# Replicate data with Kafka (Confluent) and Debezium

> This document guides Neon users on setting up data replication using Kafka (Confluent) and Debezium, detailing the configuration and integration processes necessary for implementing logical replication in a Neon environment.

## Source

- [Replicate data with Kafka (Confluent) and Debezium HTML](https://neon.com/docs/guides/logical-replication-kafka-confluent): The original HTML version of this documentation

Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations.

Confluent Cloud is a fully managed, cloud-native real-time data streaming service, built on Apache Kafka. It allows you to stream data from various sources, including Postgres, and build apps that consume messages from an Apache Kafka cluster.

In this guide, you will learn how to stream data from a Neon Postgres database to a Kafka cluster in Confluent Cloud. You will use the [PostgreSQL CDC Source Connector (Debezium) for Confluent Cloud](https://docs.confluent.io/cloud/current/connectors/cc-postgresql-cdc-source-debezium.html) to read Change Data Capture (CDC) events from the Write-Ahead Log (WAL) of your Neon database in real time. The connector will write events to a Kafka stream and auto-generate a Kafka topic. The connector performs an initial snapshot of the table and then streams any future change events.

**Note**: Confluent Cloud Connectors can be set up using the [Confluent Cloud UI](https://confluent.cloud/home) or the [Confluent command-line interface (CLI)](https://docs.confluent.io/confluent-cli/current/overview.html). This guide uses the Confluent Cloud UI.
## Prerequisites

- A [Confluent Cloud](https://www.confluent.io/confluent-cloud) account
- A [Neon account](https://console.neon.tech/)
- Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin

## Enable logical replication in Neon

**Important**: Enabling logical replication modifies the PostgreSQL `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, which means that active connections will be dropped and have to reconnect.

To enable logical replication in Neon:

1. Select your project in the Neon Console.
2. On the Neon **Dashboard**, select **Settings**.
3. Select **Logical Replication**.
4. Click **Enable** to enable logical replication.

You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor):

```sql
SHOW wal_level;

 wal_level
-----------
 logical
```

## Create a publication

In this example, we'll create a publication for a `users` table in the `public` schema of your Neon database.

1. Create the `users` table in your Neon database. You can do this via the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or by connecting to your Neon database from an SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor).

   ```sql
   CREATE TABLE users (
       id SERIAL PRIMARY KEY,
       username VARCHAR(50) NOT NULL,
       email VARCHAR(100) NOT NULL
   );
   ```

2. Create a publication for the `users` table:

   ```sql
   CREATE PUBLICATION users_publication FOR TABLE users;
   ```

   This command creates a publication named `users_publication`, which will include all changes to the `users` table in your replication stream.

## Create a Postgres role for replication

It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege.

Tab: CLI

The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://api-docs.neon.tech/reference/createprojectbranchrole).

```bash
neon roles create --name replication_user
```

Tab: Console

To create a role in the Neon Console:

1. Navigate to the [Neon Console](https://console.neon.tech).
2. Select a project.
3. Select **Branches**.
4. Select the branch where you want to create the role.
5. Select the **Roles & Databases** tab.
6. Click **Add Role**.
7. In the role creation dialog, specify a role name.
8. Click **Create**. The role is created, and you are provided with the password for the role.

Tab: API

The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://neon.com/docs/reference/cli-roles).
```bash
curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "role": {
    "name": "replication_user"
  }
}' | jq
```

## Grant schema access to your Postgres role

If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`:

```sql
GRANT USAGE ON SCHEMA public TO replication_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user;
```

Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication.

## Create a replication slot

The Debezium connector requires a dedicated replication slot. Only one source should be configured to use this replication slot.

To create a replication slot called `debezium`, run the following command on your database using your replication role:

```sql
SELECT pg_create_logical_replication_slot('debezium', 'pgoutput');
```

- `debezium` is the name assigned to the replication slot. You will need to provide the slot name when you set up your source connector in Confluent.
- `pgoutput` is the logical decoder plugin used in this example. Neon supports both `pgoutput` and `wal2json` decoder plugins.

## Set up a Kafka cluster in Confluent Cloud

1. Sign in to Confluent Cloud at [https://confluent.cloud](https://confluent.cloud).
2. Click **Add cluster**.
3. On the **Create cluster** page, for the **Basic cluster**, select **Begin configuration**.
4. On the **Region/zones** page, choose a cloud provider, a region, and select a single availability zone.
5. Select **Continue**.
6. Specify your payment details. You can select **Skip payment** for now if you're just trying out the setup.
7. Specify a cluster name, review the configuration and cost information, and select **Launch cluster**. In this example, we use `cluster_neon` as the cluster name.

It may take a few minutes to provision your cluster. After the cluster has been provisioned, the **Cluster Overview** page displays.

## Set up a source connector

To set up a Postgres CDC source connector for Confluent Cloud:

1. On the **Cluster Overview** page, under **Set up connector**, select **Get started**.
2. On the **Connector Plugins** page, enter `Postgres` into the search field.
3. Select the **Postgres CDC Source** connector. This is the [PostgreSQL CDC Source Connector (Debezium) for Confluent Cloud](https://docs.confluent.io/cloud/current/connectors/cc-postgresql-cdc-source-debezium.html). This connector will take a snapshot of the existing data and then monitor and record all subsequent row-level changes to that data.
4. On the **Add Postgres CDC Source connector** page:
   - Select the type of access you want to grant the connector. For the purpose of this guide, we'll select **Global access**, but if you are configuring a production pipeline, Confluent recommends **Granular access**.
   - Click the **Generate API key & download** button to generate an API key and secret that your connector can use to communicate with your Kafka cluster. Your applications will need this API key and secret to make requests to your Kafka cluster. Store the API key and secret somewhere safe.
This is the only time you'll see the secret. Click **Continue**.

5. On the **Add Postgres CDC Source connector** page:
   - Add the connection details for your Neon database. You can find your admin Neon database connection credentials by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.

     **Important**: Use a **direct connection** to your compute endpoint, not a pooled connection. Logical replication requires a persistent connection and is not compatible with connection poolers. When copying your connection string from Neon, make sure it does not include `-pooler` in the hostname. For more information about connection pooling and when to use direct connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

     Your connection string will look something like this:

     ```text
     postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
     ```

     Enter the details for **your connection string** into the source connector fields. Based on the sample connection string above, the values would be specified as shown below. Your values will differ.

     - **Database name**: `dbname`
     - **Database server name**: `neon_server` (This is a user-specified value that will represent the logical name of your Postgres server. Confluent uses this name as a namespace in all Kafka topic and schema names. It is also used for Avro schema namespaces if the Avro data format is used. The Kafka topic will be created with the prefix `database.server.name`. Only alphanumeric characters, underscores, hyphens, and dots are allowed.)
     - **SSL mode**: `require`
     - **Database hostname**: `ep-cool-darkness-123456.us-east-2.aws.neon.tech` (This example shows the portion of a Neon connection string that forms the database hostname.)
     - **Database port**: `5432` (Neon uses port `5432`)
     - **Database username**: `alex`
     - **Database password**: `AbC123dEf`
   - If you use Neon's **IP Allow** feature to limit IP addresses that can connect to Neon, you will need to add the Confluent cluster static IP addresses to your allowlist. For information about configuring allowed IPs in Neon, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). If you do not use Neon's **IP Allow** feature, you can skip this step.

   Click **Continue**.

6. Under **Output Kafka record value format**, select an output format for Kafka record values. The default is `JSON`, so we'll use that format in this guide. Other supported values include `AVRO`, `JSON_SR`, and `PROTOBUF`, which are schema-based message formats. If you use any of these, you must also configure a [Confluent Cloud Schema Registry](https://docs.confluent.io/cloud/current/sr/index.html).

   Expand the **Show advanced configurations** drop-down and set the following values:

   - Under **Advanced configuration**:
     - Ensure **Slot name** is set to `debezium`. This is the name of the replication slot you created earlier.
     - Set the **Publication name** to `users_publication`, which is the name of the publication you created earlier.
     - Set **Publication auto-create** mode to `disabled`. You've already created your publication.
   - Under **Database details**, set **Tables included** to `public.users`, which is the name of the Neon database table you are replicating from.

   Click **Continue**.

7. For **Connector sizing**, accept the default for the maximum number of [Tasks](https://docs.confluent.io/platform/current/connect/index.html#tasks).
Tasks can be scaled up at a later time for additional throughput capacity.

   Click **Continue**.

8. Adjust your **Connector name** if desired, and review your **Connector configuration**, which is provided in `JSON` format, as shown below. We'll use the default connector name in this guide.

   ```json
   {
     "connector.class": "PostgresCdcSource",
     "name": "PostgresCdcSourceConnector_0",
     "kafka.auth.mode": "KAFKA_API_KEY",
     "kafka.api.key": "2WY3UABFDN7DDFIV",
     "kafka.api.secret": "****************************************************************",
     "schema.context.name": "default",
     "database.hostname": "ep-cool-darkness-123456.us-east-2.aws.neon.tech",
     "database.port": "5432",
     "database.user": "alex",
     "database.password": "************",
     "database.dbname": "dbname",
     "database.server.name": "neon_server",
     "database.sslmode": "require",
     "publication.name": "users_publication",
     "publication.autocreate.mode": "all_tables",
     "snapshot.mode": "initial",
     "tombstones.on.delete": "true",
     "plugin.name": "pgoutput",
     "slot.name": "debezium",
     "poll.interval.ms": "1000",
     "max.batch.size": "1000",
     "event.processing.failure.handling.mode": "fail",
     "heartbeat.interval.ms": "0",
     "provide.transaction.metadata": "false",
     "decimal.handling.mode": "precise",
     "binary.handling.mode": "bytes",
     "time.precision.mode": "adaptive",
     "cleanup.policy": "delete",
     "hstore.handling.mode": "json",
     "interval.handling.mode": "numeric",
     "schema.refresh.mode": "columns_diff",
     "output.data.format": "JSON",
     "after.state.only": "true",
     "output.key.format": "JSON",
     "json.output.decimal.format": "BASE64",
     "tasks.max": "1"
   }
   ```

   Click **Continue** to provision the connector, which may take a few moments to complete.

## Verify your Kafka stream

To verify that events are now being published to a Kafka stream in Confluent:

1. Insert a row into your `users` table from the Neon SQL Editor or a `psql` client connected to your Neon database. For example:

   ```sql
   -- Insert a new user
   INSERT INTO users (username, email) VALUES ('Zhang', 'zhang@example.com');
   ```

2. In Confluent Cloud, navigate to your cluster (`cluster_neon` in this guide) and select **Topics** > **neon_server.public.users** > **Messages**. Your newly inserted data should appear at the top of the list of messages.

## Next steps

With events now being published to a Kafka stream, you can set up a connection between Confluent and a supported consumer. This is quite simple using a Confluent Connector. For example, you can stream data to [Databricks](https://docs.confluent.io/cloud/current/connectors/cc-databricks-delta-lake-sink/databricks-aws-setup.html#), [Snowflake](https://docs.confluent.io/cloud/current/connectors/cc-snowflake-sink.html), or one of the other supported consumers. Refer to the Confluent documentation for connector-specific instructions.

## References

- [Quick Start for Confluent Cloud](https://docs.confluent.io/cloud/current/get-started/index.html#cloud-quickstart)
- [Publications - PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-publication.html)

---

# Source: https://neon.com/llms/guides-logical-replication-manage.txt

# Logical replication commands

> The document outlines commands for managing logical replication in Neon, detailing how to set up, monitor, and control replication slots and subscriptions within the platform.
## Source

- [Logical replication commands HTML](https://neon.com/docs/guides/logical-replication-manage): The original HTML version of this documentation

This topic provides commands for managing publications, subscriptions, and replication slots. For step-by-step setup instructions, refer to our [logical replication guides](https://neon.com/docs/guides/logical-replication-guide).

## Publications

This section outlines how to manage **publications** in your replication setup.

### Create a publication

This command creates a publication named `my_publication` that will track changes made to the `users` table:

```sql
CREATE PUBLICATION my_publication FOR TABLE users;
```

This command creates a publication that publishes all changes in two tables:

```sql
CREATE PUBLICATION my_publication FOR TABLE users, departments;
```

This command creates a publication that only publishes `INSERT` and `UPDATE` operations. `DELETE` operations will not be published.

```sql
CREATE PUBLICATION my_publication FOR TABLE users
WITH (publish = 'insert,update');
```

### Add a table to a publication

This command adds a table to a publication:

```sql
ALTER PUBLICATION my_publication ADD TABLE sales;
```

### Remove a table from a publication

This command removes a table from a publication:

```sql
ALTER PUBLICATION my_publication DROP TABLE sales;
```

### Remove a publication

This command removes a publication:

```sql
DROP PUBLICATION IF EXISTS my_publication;
```

### Recreate a publication

This command recreates a publication within a single transaction:

```sql
BEGIN;
  -- drop the publication
  DROP PUBLICATION IF EXISTS my_publication;
  -- re-create the publication
  CREATE PUBLICATION my_publication;
COMMIT;
```

## Subscriptions

This section outlines how to manage **subscriptions** in your replication setup.

### Create a subscription

Building on the `my_publication` example in the preceding section, here's how you can create a subscription:

```sql
CREATE SUBSCRIPTION my_subscription
CONNECTION 'postgresql://username:password@host:port/dbname'
PUBLICATION my_publication;
```

A subscription requires a unique name, a database connection string, the name and password of your replication role, and the name of the publication that it subscribes to. In the example above, `my_subscription` is the name of the subscription that connects to a publication named `my_publication`. You would replace the connection details with your Neon database connection string. You can find your Neon connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.

### Create a subscription with two publications

This command creates a subscription that receives data from two publications:

```sql
CREATE SUBSCRIPTION my_subscription
CONNECTION 'postgresql://username:password@host:port/dbname'
PUBLICATION my_publication, sales_publication;
```

A single subscriber can maintain multiple subscriptions, including multiple subscriptions to the same publisher.
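As a quick sanity check (an illustrative query, not one of the original commands), you can list existing subscriptions on the subscriber and see which publications each one is attached to by querying the `pg_subscription` catalog:

```sql
-- List each subscription, whether it is enabled, and its publications
SELECT subname, subenabled, subpublications FROM pg_subscription;
```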
### Create a subscription to be enabled later

This command creates a subscription with `enabled = false` so that you can enable the subscription at a later time:

```sql
CREATE SUBSCRIPTION my_subscription
CONNECTION 'postgresql://username:password@host:port/dbname'
PUBLICATION my_publication
WITH (enabled = false);
```

### Change the publication subscribed to

This command modifies an existing subscription to set it to a different publication:

```sql
ALTER SUBSCRIPTION my_subscription SET PUBLICATION new_publication;
```

### Change the subscription connection

This command updates the connection details for a subscription:

```sql
ALTER SUBSCRIPTION my_subscription CONNECTION 'new_connection_string';
```

### Disable a subscription

This command disables an existing subscription:

```sql
ALTER SUBSCRIPTION my_subscription DISABLE;
```

### Drop a subscription

This command drops an existing subscription:

```sql
DROP SUBSCRIPTION my_subscription;
```

## Replication slots

Replication slots are created on the publisher database to track replication progress, ensuring that no data in the WAL is purged before the subscriber has successfully replicated it. This mechanism serves to maintain data consistency and prevent data loss in cases of network interruption or subscriber downtime.

**Important**: To prevent storage bloat, **Neon automatically removes _inactive_ replication slots after a period of time if there are other _active_ replication slots**. If you have or intend on having more than one replication slot, please see [Unused replication slots](https://neon.com/docs/guides/logical-replication-neon#unused-replication-slots) to learn more.

### Create a replication slot

Replication slots are typically created automatically with new subscriptions, but they can be created manually using the `pg_create_logical_replication_slot` function. Some "subscriber" data services and platforms require that you create a dedicated replication slot. This is accomplished using the following syntax:

```sql
SELECT pg_create_logical_replication_slot('my_replication_slot', 'pgoutput');
```

The first value, `my_replication_slot`, is the name given to the replication slot. The second value is the [decoder plugin](https://neon.com/docs/guides/logical-replication-manage#decoder-plugins) the slot should use.

The `max_replication_slots` configuration parameter defines the maximum number of replication slots that can be used to manage database replication connections. Each replication slot tracks changes in the publisher database to ensure that the connected subscriber stays up to date. You'll want a replication slot for each replication connection. For example, if you expect to have 10 separate subscribers replicating from your database, you would set `max_replication_slots` to 10 to accommodate each connection. The `max_replication_slots` configuration parameter on Neon is set to `10` by default.

```ini
max_replication_slots = 10
```

### Remove a replication slot

To drop a logical replication slot that you created, you can use the `pg_drop_replication_slot()` function. For example, if you've already created a replication slot named `my_replication_slot` using `pg_create_logical_replication_slot()`, you can drop it by executing the following SQL command:

```sql
SELECT pg_drop_replication_slot('my_replication_slot');
```

This command removes the specified replication slot (`my_replication_slot` in this case) from your database.
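Before dropping a slot, you may want to confirm that nothing is currently consuming from it. A quick check along these lines (illustrative, not part of the original steps):

```sql
-- An active slot has active = true and an active_pid for the consuming process
SELECT slot_name, active, active_pid
FROM pg_replication_slots
WHERE slot_name = 'my_replication_slot';
```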
It's important to ensure that the replication slot is no longer in use or required before dropping it, as this action is irreversible and could affect replication processes relying on this slot.

## Data Definition Language (DDL) operations

Logical replication in Postgres primarily handles Data Manipulation Language (DML) operations like `INSERT`, `UPDATE`, and `DELETE`. However, it does not automatically replicate Data Definition Language (DDL) operations such as `CREATE TABLE`, `ALTER TABLE`, or `DROP TABLE`. This means that schema changes in the publisher database are not directly replicated to the subscriber database.

Manual intervention is required to replicate DDL changes. This can be done by applying the DDL changes separately in both the publisher and subscriber databases or by using third-party tools that can handle DDL replication.

## Monitoring replication

To ensure that your logical replication setup is running as expected, you should monitor replication processes regularly. The [pg_stat_replication](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW) view displays information about each active replication connection to the publisher.

```sql
SELECT * FROM pg_stat_replication;
```

The view provides details like the state of the replication, the last received WAL location, sent location, write location, and the delay between the publisher and subscriber.

Additionally, the [pg_replication_slots](https://www.postgresql.org/docs/current/view-pg-replication-slots.html) view shows information about the current replication slots on the publisher, including their size.

```sql
SELECT * FROM pg_replication_slots;
```

It's important to keep an eye on replication lag, which indicates how far behind the subscriber is from the publisher. A significant replication lag could mean that the subscriber isn't receiving updates in a timely manner, which could lead to data inconsistencies.

## References

- [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html)
- [ALTER PUBLICATION](https://www.postgresql.org/docs/current/sql-alterpublication.html)
- [DROP PUBLICATION](https://www.postgresql.org/docs/current/sql-droppublication.html)
- [CREATE SUBSCRIPTION](https://www.postgresql.org/docs/current/sql-createsubscription.html)
- [ALTER SUBSCRIPTION](https://www.postgresql.org/docs/current/sql-altersubscription.html)
- [DROP SUBSCRIPTION](https://www.postgresql.org/docs/current/sql-dropsubscription.html)
- [wal2json](https://github.com/eulerto/wal2json)
- [pg_stat_replication](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW)
- [pg_replication_slots](https://www.postgresql.org/docs/current/view-pg-replication-slots.html)

---

# Source: https://neon.com/llms/guides-logical-replication-materialize.txt

# Replicate data to Materialize

> This document guides Neon users on configuring logical replication to stream data from Neon to Materialize, detailing the necessary steps and configurations for seamless data integration between the two platforms.

## Source

- [Replicate data to Materialize HTML](https://neon.com/docs/guides/logical-replication-materialize): The original HTML version of this documentation

Neon's logical replication feature allows you to replicate data from your Neon Postgres database to external destinations.

[Materialize](https://materialize.com/) is a data warehouse for operational workloads, purpose-built for low-latency applications.
You can use it to process data at speeds and scales not possible in traditional databases, but without the cost, complexity, or development time of most streaming engines.

In this guide, you will learn how to stream data from your Neon Postgres database to Materialize using the Materialize [PostgreSQL source](https://materialize.com/docs/sql/create-source/postgres/).

## Prerequisites

- A [Materialize account](https://materialize.com/register/).
- A [Neon account](https://console.neon.tech/).
- Optionally, you can install the [psql](https://www.postgresql.org/docs/current/app-psql.html) command line utility for running commands in both Neon and Materialize. Alternatively, you can run commands from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) and Materialize **SQL Shell**, which require no installation or setup.
- Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin.

## Enable logical replication

**Important**: Enabling logical replication modifies the PostgreSQL `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning that active connections will be dropped and have to reconnect.

To enable logical replication in Neon:

1. Select your project in the Neon Console.
2. On the Neon **Dashboard**, select **Settings**.
3. Select **Logical Replication**.
4. Click **Enable** to enable logical replication.

You can verify that logical replication is enabled by running the following query:

```sql
SHOW wal_level;

 wal_level
-----------
 logical
```

## Create a publication

After logical replication is enabled in Neon, the next step is to create a publication for the tables that you want to replicate to Materialize.

1. From a `psql` client connected to your Neon database or from the **Neon SQL Editor**, set the [replica identity](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY) to `FULL` for each table that you want to replicate to Materialize:

   ```sql
   ALTER TABLE <table_name> REPLICA IDENTITY FULL;
   ```

   `REPLICA IDENTITY FULL` ensures that the replication stream includes the previous data of changed rows, in the case of `UPDATE` and `DELETE` operations. This setting allows Materialize to ingest Postgres data with minimal in-memory state.

2. Create a [publication](https://www.postgresql.org/docs/current/logical-replication-publication.html) with the tables you want to replicate. For specific tables:

   ```sql
   CREATE PUBLICATION mz_source FOR TABLE <table1>, <table2>;
   ```

   The `mz_source` publication will contain the set of change events generated from the specified tables and will later be used to ingest the replication stream. Be sure to include only the tables you need. If the publication includes additional tables, Materialize wastes resources on ingesting and then immediately discarding the data from those tables.

## Create a Postgres role for replication

It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege.
The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege.

Tab: CLI

The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://api-docs.neon.tech/reference/createprojectbranchrole).

```bash
neon roles create --name replication_user
```

Tab: Console

To create a role in the Neon Console:

1. Navigate to the [Neon Console](https://console.neon.tech).
2. Select a project.
3. Select **Branches**.
4. Select the branch where you want to create the role.
5. Select the **Roles & Databases** tab.
6. Click **Add Role**.
7. In the role creation dialog, specify a role name.
8. Click **Create**. The role is created, and you are provided with the password for the role.

Tab: API

The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://neon.com/docs/reference/cli-roles).

```bash
curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "role": {
    "name": "replication_user"
  }
}' | jq
```

## Grant schema access to your Postgres role

If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to Postgres role `replication_user`:

```sql
GRANT USAGE ON SCHEMA public TO replication_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user;
```

Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication.

## Allow inbound traffic

If you use Neon's **IP Allow** feature to limit IP addresses that can connect to Neon, you will need to allow inbound traffic from Materialize IP addresses. If you are currently not limiting IP address access in Neon, you can skip this step.

1. From a `psql` client connected to Materialize or from the Materialize **SQL Shell**, run this command to find the static egress IP addresses for the Materialize region you are running in:

   ```sql
   SELECT * FROM mz_egress_ips;
   ```

2. In your Neon project, add the IPs to your **IP Allow** list, which you can find in your project's settings. For instructions, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow).

## Create an ingestion cluster

In Materialize, a [cluster](https://materialize.com/docs/get-started/key-concepts/#clusters) is an isolated environment, similar to a virtual warehouse in Snowflake. When you create a cluster, you choose the size of its compute resource allocation based on the work you need the cluster to do, whether ingesting data from a source, computing always-up-to-date query results, serving results to clients, or a combination. In this case, you'll create a new cluster containing one medium replica for ingesting source data from your Neon Postgres database.
From a `psql` client connected to Materialize or from the Materialize **SQL Shell**, run the `CREATE CLUSTER` command to create the new cluster:

```sql
CREATE CLUSTER ingest_postgres SIZE = 'medium';
```

Materialize recommends starting with a medium [size](https://materialize.com/docs/sql/create-cluster/#size) replica or larger. This helps Materialize quickly process the initial snapshot of the tables in your publication. Once the snapshot is finished, you can right-size the cluster.

## Start ingesting data

Now that you've configured your database network and created an ingestion cluster, you can connect Materialize to your Neon Postgres database and start ingesting data.

1. From a `psql` client connected to Materialize or from the Materialize **SQL Shell**, use the [CREATE SECRET](https://materialize.com/docs/sql/create-secret/) command to securely store the password for the Postgres role you created earlier:

   ```sql
   CREATE SECRET pgpass AS '<PASSWORD>';
   ```

   You can access the password for your Neon Postgres role from the **Connect to your database** modal. Click the **Connect** button on your **Project Dashboard** to open it.

2. Use the [CREATE CONNECTION](https://materialize.com/docs/sql/create-connection/) command to create a connection object with access and authentication details for Materialize to use:

   ```sql
   CREATE CONNECTION pg_connection TO POSTGRES (
     HOST '<host>',
     PORT 5432,
     USER '<role_name>',
     PASSWORD SECRET pgpass,
     SSL MODE 'require',
     DATABASE '<dbname>'
   );
   ```

   You can find the connection details for your replication role in the **Connect to your database** modal on your **Project Dashboard** — click the **Connect** button.

   **Important**: Use a **direct connection** to your compute endpoint, not a pooled connection. Logical replication requires a persistent connection and is not compatible with connection poolers. When copying your connection string from Neon, make sure it does not include `-pooler` in the hostname. For more information about connection pooling and when to use direct connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

   A Neon connection string looks like this:

   ```text
   postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
   ```

   - Replace `<host>` with your Neon hostname (e.g., `ep-cool-darkness-123456.us-east-2.aws.neon.tech`)
   - Replace `<role_name>` with the name of your Postgres role (e.g., `alex`)
   - Replace `<dbname>` with the name of the database containing the tables you want to replicate to Materialize (e.g., `dbname`)

3. Use the [CREATE SOURCE](https://materialize.com/docs/sql/create-source/) command to connect Materialize to your Neon Postgres database and start ingesting data from the publication you created earlier:

   ```sql
   CREATE SOURCE mz_source
     IN CLUSTER ingest_postgres
     FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
     FOR TABLES (<table1>, <table2>);
   ```

   **Tip**:
   - To ingest data from specific schemas, you can use `FOR SCHEMAS (<schema1>, <schema2>)`.
   - After creating a source, you can incorporate upstream schema changes for specific replicated tables using the `ALTER SOURCE...{ADD | DROP} SUBSOURCE` syntax.

## Check the ingestion status

Before Materialize starts consuming a replication stream, it takes a snapshot of the tables in your publication. Until this snapshot is complete, Materialize won't have the same view of your data as your Postgres database.

In this step, you'll verify that the source is running and then check the status of the snapshotting process.
1. From a `psql` client connected to Materialize or from the Materialize **SQL Shell**, use the [mz_source_statuses](https://materialize.com/docs/sql/system-catalog/mz_internal/#mz_source_statuses) table to check the overall status of your source:

   ```sql
   WITH source_ids AS
   (SELECT id FROM mz_sources WHERE name = 'mz_source')
   SELECT *
   FROM mz_internal.mz_source_statuses
   JOIN (
     SELECT referenced_object_id
     FROM mz_internal.mz_object_dependencies
     WHERE object_id IN (SELECT id FROM source_ids)
     UNION SELECT id FROM source_ids
   ) AS sources ON mz_source_statuses.id = sources.referenced_object_id;
   ```

   For each subsource, make sure the status is `running`. If you see `stalled` or `failed`, there's likely a configuration issue for you to fix. Check the `error` field for details and fix the issue before moving on. If the status of any subsource is `starting` for more than a few minutes, contact [Materialize support](https://materialize.com/docs/support/).

2. Once the source is `running`, use the [mz_source_statistics](https://materialize.com/docs/sql/system-catalog/mz_internal/#mz_source_statistics) table to check the status of the initial snapshot:

   ```sql
   WITH source_ids AS
   (SELECT id FROM mz_sources WHERE name = 'mz_source')
   SELECT sources.object_id, bool_and(snapshot_committed) AS snapshot_committed
   FROM mz_internal.mz_source_statistics
   JOIN (
     SELECT object_id, referenced_object_id
     FROM mz_internal.mz_object_dependencies
     WHERE object_id IN (SELECT id FROM source_ids)
     UNION SELECT id, id FROM source_ids
   ) AS sources ON mz_source_statistics.id = sources.referenced_object_id
   GROUP BY sources.object_id;

    object_id | snapshot_committed
   -----------+--------------------
    u144      | t
   (1 row)
   ```

   Once `snapshot_committed` is `t`, move on to the next step. Snapshotting can take anywhere from a few minutes to several hours, depending on the size of your dataset and the size of the cluster replica you chose for your `ingest_postgres` cluster.

## Right-size the cluster

After the snapshotting phase, Materialize starts ingesting change events from the Postgres replication stream. For this work, Materialize generally performs well with an `xsmall` replica, so you can resize the cluster accordingly.

1. From a `psql` client connected to Materialize or from the Materialize **SQL Shell**, use the [ALTER CLUSTER](https://materialize.com/docs/sql/alter-cluster/) command to downsize the cluster to `xsmall`:

   ```sql
   ALTER CLUSTER ingest_postgres SET (SIZE 'xsmall');
   ```

   Behind the scenes, this command adds a new `xsmall` replica and removes the `medium` replica.

2. Use the [SHOW CLUSTER REPLICAS](https://materialize.com/docs/sql/show-cluster-replicas/) command to check the status of the new replica:

   ```sql
   SHOW CLUSTER REPLICAS WHERE cluster = 'ingest_postgres';

        cluster     | replica |  size  | ready
   -----------------+---------+--------+-------
    ingest_postgres | r1      | xsmall | t
   (1 row)
   ```

3. Going forward, you can verify that your new replica size is sufficient as follows:

   a. From a `psql` client connected to Materialize or from the Materialize **SQL Shell**, get the replication slot name associated with your Postgres source from the [mz_internal.mz_postgres_sources](https://materialize.com/docs/sql/system-catalog/mz_internal/#mz_postgres_sources) table:

      ```sql
      SELECT
        d.name AS database_name,
        n.name AS schema_name,
        s.name AS source_name,
        pgs.replication_slot
      FROM mz_sources AS s
      JOIN mz_internal.mz_postgres_sources AS pgs ON s.id = pgs.id
      JOIN mz_schemas AS n ON n.id = s.schema_id
      JOIN mz_databases AS d ON d.id = n.database_id;
      ```
   b. From a `psql` client connected to your Neon database or from the **Neon SQL Editor**, check the replication slot lag, using the replication slot name from the previous step:

      ```sql
      SELECT
        pg_size_pretty(pg_current_wal_lsn() - confirmed_flush_lsn) AS replication_lag_bytes
      FROM pg_replication_slots
      WHERE slot_name = '<slot_name>';
      ```

      The result of this query is the amount of data your Postgres cluster must retain in its replication log because of this replication slot. Typically, this means Materialize has not yet communicated back to your Neon Postgres database that it has committed this data. A high value can indicate that the source has fallen behind and that you might need to scale up your ingestion cluster.

## Next steps

With Materialize ingesting your Postgres data into durable storage, you can start exploring the data, computing real-time results that stay up-to-date as new data arrives, and serving results efficiently.

- Explore your data with [SHOW SOURCES](https://materialize.com/docs/sql/show-sources) and [SELECT](https://materialize.com/docs/sql/select/).
- Compute real-time results in memory with [CREATE VIEW](https://materialize.com/docs/sql/create-view/) and [CREATE INDEX](https://materialize.com/docs/sql/create-index/) or in durable storage with [CREATE MATERIALIZED VIEW](https://materialize.com/docs/sql/create-materialized-view/).
- Serve results to a Postgres-compatible SQL client or driver with [SELECT](https://materialize.com/docs/sql/select/) or [SUBSCRIBE](https://materialize.com/docs/sql/subscribe/) or to an external message broker with [CREATE SINK](https://materialize.com/docs/sql/create-sink/).
- Check out the [tools and integrations](https://materialize.com/docs/integrations/) supported by Materialize.

---

# Source: https://neon.com/llms/guides-logical-replication-neon-to-neon.txt

# Replicate data from one Neon project to another

> The document outlines the process for setting up logical replication to transfer data between two Neon projects, detailing the necessary steps and configurations to achieve seamless data replication.

## Source

- [Replicate data from one Neon project to another HTML](https://neon.com/docs/guides/logical-replication-neon-to-neon): The original HTML version of this documentation

Neon's logical replication feature allows you to replicate data from one Neon project to another. This enables different usage scenarios, including:

- **Cross-region replication**: Replicating data from a Neon project in one region to a Neon project in another region to support regional failover scenarios.
- **Postgres version migration**: Moving data from one Postgres version to another; for example, from a Neon project that runs Postgres 16 to one that runs Postgres 17.
- **Region migration**: Moving data from one region to another; for example, from a Neon project in one region to a Neon project in a different region.

These are some common Neon-to-Neon replication scenarios. There may be others. You can follow the steps in this guide for any scenario that requires replicating data between different Neon projects.

**Info**: **The procedure in this guide does not work for replicating between databases on the same Neon project branch.** That setup requires a slightly different publication and subscription configuration. For details, see [Replicating between databases on the same Neon project branch](https://neon.com/docs/guides/logical-replication-neon#replicating-between-databases-on-the-same-neon-project-branch).
## Prerequisites

- A Neon project with a database containing the data you want to replicate. If you're just testing this out and need some data to play with, you can use the following statements to create a table with sample data:

  ```sql
  CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL);
  INSERT INTO playing_with_neon(name, value)
  SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i);
  ```

- A destination Neon project. For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project).
- Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin.

## Prepare your source Neon database

This section describes how to prepare your source Neon database (the publisher) for replicating data to your destination Neon database (the subscriber).

### Enable logical replication in the source Neon project

In the Neon project containing your source database, enable logical replication. You only need to perform this step on the source Neon project.

**Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication restarts all computes in your Neon project, meaning that active connections will be dropped and have to reconnect.

To enable logical replication:

1. Select your project in the Neon Console.
2. On the Neon **Dashboard**, select **Settings**.
3. Select **Logical Replication**.
4. Click **Enable** to enable logical replication.

You can verify that logical replication is enabled by running the following query:

```sql
SHOW wal_level;

 wal_level
-----------
 logical
```

### Create a publication on the source database

Publications are a fundamental part of logical replication in Postgres. They define what will be replicated.

To create a publication for a specific table:

```sql
CREATE PUBLICATION my_publication FOR TABLE playing_with_neon;
```

To create a publication for multiple tables, provide a comma-separated list of tables:

```sql
CREATE PUBLICATION my_publication FOR TABLE users, departments;
```

**Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`.

For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html) in the PostgreSQL documentation.

## Prepare your Neon destination database

This section explains how to prepare your destination Neon Postgres database (the subscriber) to receive replicated data. For cross-region replication, be sure to create the destination Neon project in a different region than your source database.

### Prepare your database schema

When configuring logical replication in Postgres, the tables in the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database. See [Import a database schema](https://neon.com/docs/import/import-schema-only) for instructions.
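For example, here is a minimal sketch of the `pg_dump` approach. The connection strings are placeholders, and your exact flags may vary:

```bash
# Dump only the schema from the source database and apply it to the destination.
# --no-owner and --no-privileges skip ownership and grant statements that may
# not apply on the destination.
pg_dump --schema-only --no-owner --no-privileges \
  "postgresql://<user>:<password>@<source_host>/<dbname>" \
  | psql "postgresql://<user>:<password>@<destination_host>/<dbname>"
```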
If you're using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement:

```sql
CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL);
```

### Create a subscription

After creating a publication on the source database, you need to create a subscription on the destination database.

1. Use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), `psql`, or another SQL client to connect to your destination database.
2. Create the subscription using a `CREATE SUBSCRIPTION` statement:

   ```sql
   CREATE SUBSCRIPTION my_subscription
   CONNECTION 'postgresql://neondb_owner:<password>@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require'
   PUBLICATION my_publication;
   ```

   - `my_subscription`: A name you choose for the subscription.
   - The connection string: The connection string for the source Neon database where you defined the publication.
   - `my_publication`: The name of the publication you created on the source Neon database.

3. Verify the subscription was created by running the following command:

   ```sql
   SELECT * FROM pg_stat_subscription;
   ```

   The subscription (`my_subscription`) should be listed, confirming that your subscription has been created successfully.

## Test the replication

Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database.

1. Run some data modifying queries on the source database (inserts, updates, or deletes). If you're using the `playing_with_neon` table, you can use this statement to insert some rows:

   ```sql
   INSERT INTO playing_with_neon(name, value)
   SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i);
   ```

2. Perform a row count on the source and destination databases to make sure the result matches.

   ```sql
   SELECT COUNT(*) FROM playing_with_neon;

    count
   -------
    10
   (1 row)
   ```

   Alternatively, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is as expected. For example, if you just ran an insert operation on the publisher, the `last_msg_receipt_time` should reflect the time of that operation.

   ```sql
   SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time
   FROM pg_catalog.pg_stat_subscription;
   ```

## Switch over your application

After the replication operation is complete or in a failover situation, you can switch your application over to the destination database by swapping out your source database connection details for your destination database connection details. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. See [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

---

# Source: https://neon.com/llms/guides-logical-replication-neon.txt

# Logical replication in Neon

> The document explains how to set up and use logical replication in Neon, detailing the steps to configure replication slots and manage data streaming between databases.

## Source

- [Logical replication in Neon HTML](https://neon.com/docs/guides/logical-replication-neon): The original HTML version of this documentation

This topic outlines information about logical replication specific to Neon, including important notices.
## Important notices

To avoid potential issues, please review the following notices carefully before using logical replication in Neon.

### Neon as a publisher

These notices apply when replicating data from Neon:

- **Scale to zero**: Neon does not scale to zero a compute that has an active connection from a logical replication subscriber. In other words, a Neon Postgres instance with an active subscriber will not scale to zero, which may result in increased compute usage. For more information, see [Logical replication and scale to zero](https://neon.com/docs/guides/logical-replication-neon#logical-replication-and-scale-to-zero).
- **Removal of inactive replication slots**: To prevent storage bloat, **Neon automatically removes _inactive_ replication slots after approximately 40 hours if there are other _active_ replication slots**. If you plan to have more than one subscriber, please read [Unused replication slots](https://neon.com/docs/guides/logical-replication-neon#unused-replication-slots) before you begin.
- **Branch restore removes replication slots**: [Restoring a branch](https://neon.com/docs/guides/branch-restore) will delete all replication slots on that branch. Replication slots are not automatically re-created during the restore process.

### Neon as a subscriber

- Before dropping a database in response to a user-issued `DROP DATABASE` command or operation, Neon will drop any logical replication subscriptions defined in the database.
- To prevent issues due to unintended duplication of logical replication subscriptions, subscriptions defined on a parent branch are not duplicated on child branches — they are dropped from child branches before the compute associated with the child branch starts. This applies to all branching contexts where logical replication subscriptions could be duplicated on a child branch, including creating a child branch, resetting a child branch, and restoring a child branch.

## Logical replication and scale to zero

Neon's [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature suspends a compute after 300 seconds (5 minutes) of inactivity. In a logical replication setup, Neon does not scale to zero a compute that has an active connection from a logical replication subscriber. In other words, a compute with an active subscriber remains active at all times.

Neon determines if there are active connections from a logical replication subscriber by checking for `walsender` processes on the Neon Postgres instance using the following query:

```sql
SELECT * FROM pg_stat_replication WHERE application_name != 'walproposer';
```

If this query returns one or more rows, the Neon compute where the publishing Postgres instance runs will not be suspended.

## Unused replication slots

To prevent storage bloat, **Neon automatically removes _inactive_ replication slots after approximately 40 hours if there are other _active_ replication slots**. If you have only one replication slot, and that slot becomes inactive, it will not be dropped because a single replication slot does not cause storage bloat.

An inactive replication slot is one that doesn't acknowledge `flush_lsn` progress for more than approximately 40 hours. This is the same `flush_lsn` value found in the `pg_stat_replication` view in your Neon database.

An _inactive_ replication slot can be the result of a dead subscriber, where the replication slot has not been removed after a subscriber is deactivated or becomes unavailable. An inactive replication slot can also result from a long replication delay configured on the subscriber. For example, subscribers like Fivetran or Airbyte let you configure the replication frequency or set a replication delay to minimize usage.
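One way to keep an eye on slot progress (an illustrative query, not from the original notice) is to check each slot's `active` flag and whether its `confirmed_flush_lsn` advances over time:

```sql
-- A slot whose confirmed_flush_lsn never advances is at risk of removal
SELECT slot_name, active, confirmed_flush_lsn
FROM pg_replication_slots;
```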
### How to avoid removal of replication slots

- If the replication frequency configured on the subscriber is more than 40 hours, you can prevent replication slots from being dropped by changing the replication frequency to less than 40 hours. This ensures that your subscriber reports `flush_lsn` progress more frequently than every 40 hours. If increasing the replication frequency is not possible, please contact [Neon Support](https://neon.com/docs/introduction/support) for alternatives.
- If using Debezium, set [flush.lsn.source](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-flush-lsn-source) to `true` to ensure that `flush_lsn` progress is being reported. For other subscriber platforms, check for an equivalent setting to make sure it's configured to acknowledge progress on the subscriber.

### What to do if your replication slot is removed

If you find that a replication slot was removed and you need to add it back, please see [Create a replication slot](https://neon.com/docs/guides/logical-replication-neon#create-a-replication-slot) for instructions or refer to the replication slot creation instructions for your subscriber.

## Replication roles

It is recommended that you create a dedicated Postgres role for replicating data from Neon to a subscriber. This role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon Console, CLI, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Roles created via SQL do not have this privilege, and the `REPLICATION` privilege cannot be granted.

You can verify that your role has the `REPLICATION` privilege by running the following query:

```sql
SELECT rolname, rolreplication FROM pg_roles WHERE rolname = '<role_name>';
```

## Subscriber access

A subscriber must be able to access the Neon database that is acting as a publisher. In Neon, no action is required unless you use Neon's **IP Allow** feature to limit IP addresses that can connect to Neon.

If you use Neon's **IP Allow** feature:

1. Determine the IP address or addresses of the subscriber.
2. In your Neon project, add the IPs to your **IP Allow** list, which you can find in your project's settings. For instructions, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow).

## Publisher access

When replicating data to Neon, you may need to allow connections from Neon on the publisher platform or service. Neon uses 3 to 6 IP addresses per region for outbound communication, corresponding to each availability zone in the region. See [NAT Gateway IP addresses](https://neon.com/docs/introduction/regions#nat-gateway-ip-addresses) for Neon's NAT gateway IP addresses. When configuring access, be sure to open access to all of the NAT gateway IP addresses for your Neon project's region.

## Decoder plugins

Neon supports both `pgoutput` and `wal2json` replication output decoder plugins.

- `pgoutput`: This is the default logical replication output plugin for Postgres.
Specifically, it's part of the Postgres built-in logical replication system, designed to read changes from the database's write-ahead log (WAL) and output them in a format suitable for logical replication. - `wal2json`: This is also a logical replication output plugin for Postgres, but it differs from `pgoutput` in that it converts WAL data into `JSON` format. This makes it useful for integrating Postgres with systems and applications that work with `JSON` data. For usage information, see [The wal2json plugin](https://neon.com/docs/extensions/wal2json). ## Dedicated replication slots Some data services and platforms require dedicated replication slots. You can create a dedicated replication slot using the standard PostgreSQL syntax. As mentioned above, Neon supports both `pgoutput` and `wal2json` replication output decoder plugins. With the `pgoutput` plugin: ```sql SELECT pg_create_logical_replication_slot('my_replication_slot', 'pgoutput'); ``` With the `wal2json` plugin: ```sql SELECT pg_create_logical_replication_slot('my_replication_slot', 'wal2json'); ``` ## Publisher settings The `max_wal_senders` and `max_replication_slots` configuration parameters on Neon are set to `10`. ```text max_wal_senders = 10 max_replication_slots = 10 ``` - The `max_wal_senders` parameter defines the maximum number of concurrent WAL sender processes, which are responsible for streaming WAL data to subscribers. In most cases, you should have one WAL sender process for each subscriber or replication slot to ensure efficient and consistent data replication. - The `max_replication_slots` parameter defines the maximum number of replication slots used to manage database replication connections. Each replication slot tracks changes in the publisher database to ensure that the connected subscriber stays up to date. You'll want a replication slot for each replication connection. For example, if you expect to have 10 separate subscribers replicating from your database, you would set `max_replication_slots` to 10 to accommodate each connection. If you require different values for these parameters, please contact Neon support. ## Replicating between databases on the same Neon project branch Each branch in a Neon project has its own Postgres instance, and a Postgres instance is a database cluster, capable of supporting multiple databases. If your use case requires replicating data between two databases in the same database cluster, i.e., on the same Neon project branch, the setup is slightly different from configuring replication between separate Postgres instances. As described in the official PostgreSQL [CREATE SUBSCRIPTION Notes documentation](https://www.postgresql.org/docs/current/sql-createsubscription.html): ```text Creating a subscription that connects to the same database cluster (for example, to replicate between databases in the same cluster or to replicate within the same database) will only succeed if the replication slot is not created as part of the same command. Otherwise, the `CREATE SUBSCRIPTION` call will hang. To make this work, create the replication slot separately (using the function `pg_create_logical_replication_slot` with the plugin name `pgoutput`) and create the subscription using the parameter `create_slot = false`. This is an implementation restriction that might be lifted in a future release.
``` For example, on the publisher database, you would create the publication and the replication slot, as shown (replace `<table1>` and `<table2>` with the names of the tables you want to replicate): ```sql CREATE PUBLICATION my_publication FOR TABLE <table1>, <table2>; SELECT pg_create_logical_replication_slot('my_replication_slot', 'pgoutput'); ``` Then, on the subscriber database, you would create a subscription that references the replication slot with the `create_slot` option set to `false` and `slot_name` set to the name of the slot you created. The `connection_string` should be the connection string for the Postgres role used to connect to the publisher database. This role must have the `REPLICATION` privilege. Any Postgres role created via the Neon Console, CLI, or API is a member of the `neon_superuser` role, which has the `REPLICATION` privilege by default. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. See [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). Be sure to select the correct role and database before copying the connection string. ```sql CREATE SUBSCRIPTION my_subscription CONNECTION 'connection_string' PUBLICATION my_publication WITH (create_slot = false, slot_name = 'my_replication_slot'); ``` --- # Source: https://neon.com/llms/guides-logical-replication-postgres-to-neon.txt # Replicate data from Postgres to Neon > The document outlines the process for setting up logical replication to transfer data from a PostgreSQL database to Neon, detailing configuration steps and commands required for successful data synchronization. ## Source - [Replicate data from Postgres to Neon HTML](https://neon.com/docs/guides/logical-replication-postgres-to-neon): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from a local Postgres instance or another Postgres provider to Neon. If you're looking to replicate data from one Neon Postgres instance to another, see [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon). ## Prerequisites - A local Postgres instance or Postgres instance hosted on another provider containing the data you want to replicate. If you're just testing this out and need some data to play with, you can use the following statements to create a table with sample data: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` - A destination Neon project. For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin. - Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences. ## Prepare your source Postgres database This section describes how to prepare your source Postgres database (the publisher) for replicating data to your destination Neon database (the subscriber). ### Enable logical replication in the source Postgres instance On your source database, enable logical replication. The typical steps for a local Postgres instance are shown below. If you run Postgres on a provider, the steps may differ.
Refer to your provider's documentation. Enabling logical replication requires changing the Postgres `wal_level` configuration parameter from `replica` to `logical`. 1. Locate your `postgresql.conf` file. This is usually found in the PostgreSQL data directory. The data directory path can be identified by running the following query in your PostgreSQL database: ```sql SHOW data_directory; ``` 2. Open the `postgresql.conf` file in a text editor. Find the `wal_level` setting in the file. If it is not present, you can add it manually. Set `wal_level` to `logical` as shown below: ```ini wal_level = logical ``` 3. After saving the changes to `postgresql.conf`, restart PostgreSQL for the change to take effect. Changing `wal_level` requires a full server restart; a configuration reload is not sufficient. 4. Confirm the change by running the following query in your PostgreSQL database: ```sql SHOW wal_level; wal_level ----------- logical ``` ### Create a Postgres role for replication It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. For example: ```sql CREATE ROLE replication_user WITH REPLICATION LOGIN PASSWORD 'your_secure_password'; ``` ### Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to the Postgres role `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ### Create a publication on the source database Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. To create a publication for a specific table: ```sql CREATE PUBLICATION my_publication FOR TABLE playing_with_neon; ``` To create a publication for multiple tables, provide a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` **Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`. For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation. ## Prepare your Neon destination database This section describes how to prepare your destination Neon Postgres database (the subscriber) to receive replicated data. ### Prepare your database schema When configuring logical replication in Postgres, the tables in the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database. See [Import a database schema](https://neon.com/docs/import/import-schema-only) for instructions.
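Whether you create the tables manually or dump the schema, you can later confirm that the source and destination definitions match by comparing column metadata from `information_schema` on both sides. This is a generic Postgres check (shown here with the sample table name), not a required step:

```sql
-- Run on both the source and destination databases; the results should match.
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'playing_with_neon'
ORDER BY ordinal_position;
```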
If you're using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); ``` ### Create a subscription After creating a publication on the source database, you need to create a subscription on the destination database. 1. Use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), `psql`, or another SQL client to connect to your destination database. 2. Create the subscription using a `CREATE SUBSCRIPTION` statement, replacing `<hostname>` with the host of your source Postgres database: ```sql CREATE SUBSCRIPTION my_subscription CONNECTION 'host=<hostname> port=5432 dbname=postgres user=replication_user password=replication_user_password' PUBLICATION my_publication; ``` - `subscription_name`: A name you chose for the subscription. - `connection_string`: The connection string for the source Postgres database where you defined the publication. - `publication_name`: The name of the publication you created on the source Postgres database. 3. Verify the subscription was created by running the following command: ```sql SELECT * FROM pg_stat_subscription; ``` The subscription (`my_subscription`) should be listed, confirming that your subscription has been created successfully. ## Test the replication Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database. 1. Run some data-modifying queries on the source database (inserts, updates, or deletes). If you're using the `playing_with_neon` table, you can use this statement to insert some rows: ```sql INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` 2. Perform a row count on the source and destination databases to make sure the result matches. ```sql SELECT COUNT(*) FROM playing_with_neon; count ------- 30 (1 row) ``` Alternatively, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is as expected. For example, if you just ran an insert operation on the publisher, the `last_msg_receipt_time` should reflect the time of that operation. ```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` ## Switch over your application After the replication operation is complete, you can switch your application over to the destination database by swapping out your source database connection details for your destination database connection details. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. See [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). --- # Source: https://neon.com/llms/guides-logical-replication-postgres.txt # Replicate data to an external Postgres instance > The document details the process for setting up logical replication from a Neon database to an external PostgreSQL instance, enabling data synchronization between the two systems. ## Source - [Replicate data to an external Postgres instance HTML](https://neon.com/docs/guides/logical-replication-postgres): The original HTML version of this documentation Neon's logical replication feature allows you to replicate data from Neon to external subscribers.
This guide shows you how to stream data from a Neon Postgres database to an external Postgres database (a Postgres destination other than Neon). If you're looking to replicate data from one Neon Postgres instance to another, see [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon). ## Prerequisites - A Neon project with a database containing the data you want to replicate. If you're just testing this out and need some data to play with, you can use the following statements to create a table with sample data: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). - A destination Postgres instance other than Neon. - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin. - Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences. ## Prepare your source Neon database This section describes how to prepare your source Neon database (the publisher) for replicating data to your destination Postgres database (the subscriber). ### Enable logical replication in the source Neon project In the Neon project containing your source database, enable logical replication. You only need to perform this step on the source Neon project. **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication restarts all computes in your Neon project, meaning that active connections will be dropped and have to reconnect. To enable logical replication: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query: ```sql SHOW wal_level; wal_level ----------- logical ``` ### Create a Postgres role for replication It is recommended that you create a dedicated Postgres role for replicating data. The role must have the `REPLICATION` privilege. The default Postgres role created with your Neon project and roles created using the Neon CLI, Console, or API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the required `REPLICATION` privilege. Tab: CLI The following CLI command creates a role. To view the CLI documentation for this command, see [Neon CLI commands — roles](https://neon.com/docs/reference/cli-roles). ```bash neon roles create --name replication_user ``` Tab: Console To create a role in the Neon Console: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. Select the **Roles & Databases** tab. 6. Click **Add Role**. 7. In the role creation dialog, specify a role name. 8. Click **Create**. The role is created, and you are provided with the password for the role.
Tab: API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranchrole). ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "replication_user" } }' | jq ``` ### Grant schema access to your Postgres role If your replication role does not own the schemas and tables you are replicating from, make sure to grant access. For example, the following commands grant access to all tables in the `public` schema to the Postgres role `replication_user`: ```sql GRANT USAGE ON SCHEMA public TO replication_user; GRANT SELECT ON ALL TABLES IN SCHEMA public TO replication_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replication_user; ``` Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication. ### Create a publication on the source database Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. To create a publication for a specific table: ```sql CREATE PUBLICATION my_publication FOR TABLE playing_with_neon; ``` To create a publication for multiple tables, provide a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation. ## Prepare your destination database This section describes how to prepare your destination Postgres database (the subscriber) to receive replicated data. ### Prepare your database schema When configuring logical replication in Postgres, the tables in the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database. See [Import a database schema](https://neon.com/docs/import/import-schema-only) for instructions. If you're using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); ``` ### Create a subscription After creating a publication on the source database, you need to create a subscription on the destination database. 1. Use `psql` or another SQL client to connect to your destination database. 2. Create the subscription using a `CREATE SUBSCRIPTION` statement, replacing `<password>` with the password for your replication role: ```sql CREATE SUBSCRIPTION my_subscription CONNECTION 'postgresql://neondb_owner:<password>@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require' PUBLICATION my_publication; ``` - `subscription_name`: A name you chose for the subscription. - `connection_string`: The connection string for the source Neon database where you defined the publication. - `publication_name`: The name of the publication you created on the source Neon database. 3.
Verify the subscription was created by running the following command: ```sql SELECT * FROM pg_stat_subscription; ``` The subscription (`my_subscription`) should be listed, confirming that your subscription has been created successfully. ## Test the replication Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database. 1. Run some data-modifying queries on the source database (inserts, updates, or deletes). If you're using the `playing_with_neon` table, you can use this statement to insert some rows: ```sql INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` 2. Perform a row count on the source and destination databases to make sure the result matches. ```sql SELECT COUNT(*) FROM playing_with_neon; count ------- 30 (1 row) ``` Alternatively, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is as expected. For example, if you just ran an insert operation on the publisher, the `last_msg_receipt_time` should reflect the time of that operation. ```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` ## Switch over your application After the replication operation is complete, you can switch your application over to the destination database by swapping out your source database connection details for your destination database connection details. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. See [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). --- # Source: https://neon.com/llms/guides-logical-replication-prisma-pulse.txt # Stream database changes in real-time with Prisma Pulse > This document explains how to use Prisma Pulse for streaming real-time database changes in Neon, detailing the setup and configuration of logical replication to enable seamless data synchronization. ## Source - [Stream database changes in real-time with Prisma Pulse HTML](https://neon.com/docs/guides/logical-replication-prisma-pulse): The original HTML version of this documentation Neon's Logical Replication feature enables you to subscribe to changes in your database, supporting use cases like replication and event-driven functionality. [Prisma Pulse](https://www.prisma.io/data-platform/pulse?utm_source=neon&utm_medium=pulse-guide) is a fully managed, production-ready service that connects to your Neon Postgres database and allows you to stream changes from your database in real time, integrated closely with [Prisma ORM](https://www.prisma.io/orm?utm_source=neon&utm_medium=pulse-guide). In this guide, you will learn how to set up Prisma Pulse with your Neon database and create your first event stream. **Tip**: What can you make with database event-driven architecture? Set up real-time triggers for your Inngest workflows, re-index your TypeSense search whenever data changes, and much more.
## Prerequisites - A [Neon account](https://console.neon.tech/) - A [Prisma Data Platform account](https://pris.ly/pdp?utm_source=neon&utm_medium=pulse-guide) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Enable logical replication in Neon **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Project settings**. 3. Select **Beta**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ## Connect Prisma Pulse 1. If you haven't already done so, create a new account or sign in on the [Prisma Data Platform](https://pris.ly/pdp?utm_source=neon&utm_medium=pulse-guide). 2. In the [Prisma Data Platform Console](https://console.prisma.io?utm_source=neon&utm_medium=pulse-guide), create a new project by clicking the **New project** button. 3. In the **New project** configuration, select **Pulse** as your starting point. 4. Copy your database connection string from Neon into the database connection input field on the Platform Console. 5. Choose a region that is closest to your Neon database. 6. Click **Create project**. 7. We recommend leaving **Event persistence** switched **on** (default). This means Prisma Pulse will automatically store events in case your server goes down, allowing you to resume with zero data loss. 8. Click **Enable Pulse**. 9. After Pulse has been enabled (this may take a moment), generate an API key by clicking **Generate API key**. Save this for later. ## Your first stream ### Set up your project Create a new TypeScript project with Prisma: ```bash npx try-prisma -t typescript/starter ``` If you already have a TypeScript project with Prisma Client installed, you can skip this. ### From the root of your project, install the Pulse extension ```bash npm install @prisma/extension-pulse@latest ``` ### Extend your Prisma Client instance with the Pulse extension Add the following to extend your existing Prisma Client instance with the Prisma Pulse extension. Don't forget to insert your own API key in place of the `<your-api-key>` placeholder. ```tsx import { PrismaClient } from '@prisma/client'; import { withPulse } from '@prisma/extension-pulse'; const prisma = new PrismaClient().$extends(withPulse({ apiKey: '<your-api-key>' })); ``` **Note**: For a real production use case, you should consider moving sensitive values like your API key into environment variables. ### Create your first Pulse stream The code below subscribes to a `User` model in your Prisma schema. You can use a similar approach to subscribe to any model that exists in your project.
```tsx import { PrismaClient } from '@prisma/client'; import { withPulse } from '@prisma/extension-pulse'; const prisma = new PrismaClient().$extends(withPulse({ apiKey: '<your-api-key>' })); async function main() { // Create a stream from the 'User' model const stream = await prisma.user.stream({ name: 'user-stream' }); for await (const event of stream) { console.log('Just received an event:', event); } } main(); ``` ### Trigger a database change You can use Prisma Studio to easily make changes in your database to trigger events. Open Prisma Studio by running: `npx prisma studio` After making a change in Studio, you should see messages appearing in your terminal like this: ```bash Just received an event: { action: 'create', created: { id: 'clzvgzq4b0d016s28yluse9r1', name: 'Polly Pulse', age: 35 }, id: '01J5BCFR8F8DBJDXAQ5YJPZ6VY', modelName: 'User' } ``` ## What's next? - [Set up real-time triggers for your Inngest workflows](https://pris.ly/pulse-inngest-router?utm_source=neon&utm_medium=pulse-guide) - [Re-index your TypeSense search instantly when data changes](https://pris.ly/pulse-typesense?utm_source=neon&utm_medium=pulse-guide) - [Automatically send onboarding emails with Resend when a new user is created](https://pris.ly/pulse-resend?utm_source=neon&utm_medium=pulse-guide) --- # Source: https://neon.com/llms/guides-logical-replication-rds-to-neon.txt # Replicate data from Amazon RDS Postgres > The document outlines the process for setting up logical replication from Amazon RDS Postgres to Neon, detailing the necessary configurations and steps to ensure data synchronization between the two platforms. ## Source - [Replicate data from Amazon RDS Postgres HTML](https://neon.com/docs/guides/logical-replication-rds-to-neon): The original HTML version of this documentation **Note** New feature: If you are looking to migrate your database to Neon, you may want to try our new **Migration Assistant**, which can help. Read the [guide](https://neon.com/docs/import/migration-assistant) to learn more. Neon's logical replication feature allows you to replicate data from Amazon RDS PostgreSQL to Neon. ## Prerequisites - A source database in Amazon RDS for PostgreSQL containing the data you want to replicate. If you're just testing this out and need some data to play with, you can use the following statements to create a table with sample data: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` - A destination Neon project. For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin. - Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences. ## Prepare your source database This section describes how to prepare your source Amazon RDS Postgres instance (the publisher) for replicating data to Neon. ### Enable logical replication in the source Amazon RDS PostgreSQL instance Enabling logical replication in Postgres requires changing the `wal_level` configuration parameter from `replica` to `logical`.
Before you begin, you can check your current setting with the following query: ```sql SHOW wal_level; wal_level ----------- replica (1 row) ``` **Note**: For information about connecting to RDS from `psql`, see [Connect to a PostgreSQL DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html#CHAP_GettingStarted.Connecting.PostgreSQL). If your current setting is `replica`, follow these steps to enable logical replication. If you are using the default parameter group, you will need to create a new parameter group to set the value. You can do so by selecting **Parameter groups** > **Create parameter group** from the sidebar and filling in the required fields. To enable logical replication: 1. Navigate to the **Configuration** tab of your RDS instance. 2. Under the **Configuration** heading, click on the **DB instance parameter group** link. 3. Click **Edit**. In the **Filter parameters** search field, search for `rds.logical_replication`. 4. Set the value to `1`, and click **Save Changes**. 5. If you created a new parameter group, navigate back to your RDS instance page, click **Modify**, and scroll down to select your new parameter group. Click **Continue**, and select **Apply immediately** to make the change now, then click **Modify DB instance**. 6. Reboot your instance to apply the new setting. From the **Actions** menu for your database, select **Reboot**. 7. Make sure that the `wal_level` parameter is now set to `logical`: ```sql SHOW wal_level; wal_level ----------- logical (1 row) ``` ### Allow connections from Neon You need to allow inbound connections to your AWS RDS Postgres instance from Neon. You can do this by editing your instance's **CIDR/IP - Inbound** security group, which you can find a link to from your AWS RDS Postgres instance page. 1. Click on the security group name. 2. Click on the security group ID. 3. From the **Actions** menu, select **Edit inbound rules**. 4. Add rules that allow traffic from each of the IP addresses for your Neon project's region. Neon uses 3 to 6 IP addresses per region for outbound communication, corresponding to each availability zone in the region. See [NAT Gateway IP addresses](https://neon.com/docs/introduction/regions#nat-gateway-ip-addresses) for Neon's NAT gateway IP addresses. 5. When you're finished, click **Save rules**. **Note**: You can specify a rule for `0.0.0.0/0` to allow traffic from any IP address. However, this configuration is not considered secure. ### Create a publication on the source database Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. To create a publication for a specific table: ```sql CREATE PUBLICATION my_publication FOR TABLE playing_with_neon; ``` To create a publication for multiple tables, provide a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` **Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`. For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation. ## Prepare your destination database This section describes how to prepare your destination Neon Postgres database (the subscriber) to receive replicated data from your AWS RDS Postgres instance.
### Prepare your database schema When configuring logical replication in Postgres, the tables in the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database. See [Import a database schema](https://neon.com/docs/import/import-schema-only) for instructions. If you're using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); ``` ### Create a subscription After creating a publication on the source database, you need to create a subscription on your Neon destination database. 1. Use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), `psql`, or another SQL client to connect to your destination database. 2. Create the subscription using a `CREATE SUBSCRIPTION` statement. ```sql CREATE SUBSCRIPTION my_subscription CONNECTION 'postgresql://postgres:password@database-1.czmwaio8k05k.us-east-2.rds.amazonaws.com/postgres' PUBLICATION my_publication; ``` - `subscription_name`: A name you chose for the subscription. - `connection_string`: The connection string for the source AWS RDS Postgres database where you defined the publication. - `publication_name`: The name of the publication you created on the source AWS RDS Postgres database. 3. Verify the subscription was created by running the following command: ```sql SELECT * FROM pg_stat_subscription; subid | subname | pid | leader_pid | relid | received_lsn | last_msg_send_time | last_msg_receipt_time | latest_end_lsn | latest_end_time ------+-----------------+------+------------+-------+--------------+-------------------------------+-------------------------------+----------------+------------------------------- 16471 | my_subscription | 1080 | | | 0/300003A0 | 2024-08-13 20:25:08.011501+00 | 2024-08-13 20:25:08.013521+00 | 0/300003A0 | 2024-08-13 20:25:08.011501+00 ``` The subscription (`my_subscription`) should be listed, confirming that your subscription was created. ## Test the replication Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database. 1. Run some data-modifying queries on the source database (inserts, updates, or deletes). If you're using the `playing_with_neon` table, you can use this statement to insert some rows: ```sql INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` 2. Perform a row count on the source and destination databases to make sure the result matches. ```sql SELECT COUNT(*) FROM playing_with_neon; count ------- 30 (1 row) ``` Alternatively, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is as expected. For example, if you just ran an insert operation on the publisher, the `last_msg_receipt_time` should reflect the time of that operation.
```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` ## Switch over your application After the replication operation is complete, you can switch your application over to the destination database by swapping out your AWS RDS source database connection details for your Neon destination database connection details. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. See [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). --- # Source: https://neon.com/llms/guides-logical-replication-schema-changes.txt # Managing schema changes in a logical replication setup > The document outlines procedures for managing schema changes in a logical replication setup within Neon, detailing steps to ensure consistency and minimize disruptions during schema modifications. ## Source - [Managing schema changes in a logical replication setup HTML](https://neon.com/docs/guides/logical-replication-schema-changes): The original HTML version of this documentation When working with Postgres logical replication, managing schema changes is a task that requires careful planning. As stated in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-restrictions.html): "_The database schema and DDL commands are not replicated. The initial schema can be copied by hand using `pg_dump --schema-only`. Subsequent schema changes would need to be kept in sync manually. (Note, however, that there is no need for the schemas to be absolutely the same on both sides.) Logical replication is robust when schema definitions change in a live database: When the schema is changed on the publisher and replicated data starts arriving at the subscriber but does not fit into the table schema, replication will error until the schema is updated. In many cases, intermittent errors can be avoided by applying additive schema changes to the subscriber first._" The guidelines below outline some recommended practices for handling schema changes in a logical replication setup. ## Schema management in a logical replication context Logical replication in Postgres is designed to replicate data changes (inserts, updates, and deletes) but not schema changes (DDL commands). This means that any modifications to the database schema, such as adding or dropping columns, need to be manually applied to both the publisher and the subscriber databases. Since the schemas do not need to be exactly the same on both sides, you have some flexibility. However, inconsistencies in the schema can lead to replication errors if the subscriber cannot accommodate incoming data due to schema mismatches. To ensure that schema changes are successful, we recommend the following practices: ### 1. Apply additive schema changes to the subscriber first Additive changes, such as adding a new column or creating an index, should be applied to the subscriber before they are applied to the publisher. This approach ensures that when the new data is replicated from the publisher, the subscriber is already prepared to handle it.
For example: - **Add a new column on the subscriber:** ```sql ALTER TABLE your_table_name ADD COLUMN new_column_name data_type; ``` - **Add the same column on the publisher:** ```sql ALTER TABLE your_table_name ADD COLUMN new_column_name data_type; ``` Applying additive schema changes in this order will help prevent replication errors caused by the subscriber not recognizing the additive change in the incoming data. ### 2. Handle non-additive schema changes with caution Non-additive changes, such as dropping a column or altering a column's data type, require careful handling. When performing a non-additive schema change like dropping a column, apply the change on the publisher first, then on the subscriber. Non-additive changes are often feasible if applied in the correct order (publisher first). However, always carefully assess how the schema change will impact replication to the subscriber. Will writes still succeed on the subscriber after the change on the publisher? It's best practice to test schema changes before implementing them in production. For example, test whether writes to the modified publisher schema still execute successfully on the unmodified subscriber schema. Mistakes in the schema update process could disrupt replication on the subscriber, requiring troubleshooting and reestablishing replication. For an added degree of safety or for complex schema changes, consider temporarily pausing write activity on the publisher before applying schema changes. The steps for this approach include: - **Pausing writes on the publisher:** Pause writes on the publisher by stopping or pausing the application that handles inserts, updates, and deletes, or by revoking write permissions on the roles that write to the database. Other methods may also be available depending on your environment. - **Applying schema changes on the publisher:** Apply the necessary schema changes to the publisher. - **Applying schema changes on the subscriber:** Once the publisher changes are complete, apply the schema changes to the subscriber. - **Resuming writes:** After verifying that the changes are successful, resume normal write operations. ### 3. Monitor and verify replication After applying schema changes and resuming writes, verify that data is being replicated between the publisher and subscriber. To do this, you can run the following query on the subscriber to make sure the `last_msg_receipt_time` is recent: ```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` You can also perform a row count on the publisher and subscriber databases to make sure results are as expected. If you're actively adding rows, the results may be close but not exactly the same. ```sql SELECT COUNT(*) FROM your_table_name; ``` ## Schema migration tools Tools like [Flyway](https://flywaydb.org/) and [Liquibase](https://www.liquibase.org/) can assist in managing schema changes by ensuring they are applied consistently across multiple databases. These tools track the history of each change and ensure updates are applied in the correct sequence. Integrating these tools into your workflow can improve the reliability and organization of your schema migrations, but may require adjustments to your existing process. 
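As a concrete sketch, in a Flyway-based workflow the additive change from the earlier example could be captured as a versioned migration file that you apply to the subscriber environment before the publisher. The file name, table, and column here are hypothetical, following Flyway's `V<version>__<description>.sql` naming convention:

```sql
-- V2__add_status_column.sql (hypothetical Flyway migration file)
-- Additive change: apply to the subscriber first, then the publisher,
-- per the ordering guidance in this guide.
ALTER TABLE your_table_name ADD COLUMN status TEXT;
```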
If you're unfamiliar with these tools, check out the following guides to get started with Neon: - [Get started with Flyway and Neon](https://neon.com/docs/guides/flyway) - [Get started with Liquibase and Neon](https://neon.com/docs/guides/liquibase) For guidance on managing schemas across multiple databases using Flyway or Liquibase, see: - [Flyway: A simple way to manage multiple environment deployments](https://www.red-gate.com/blog/a-simple-way-to-manage-multi-environment-deployments) - [How to set up Liquibase with an Existing Project and Multiple Environments](https://docs.liquibase.com/workflows/liquibase-community/existing-project.html) Some Object Relational Mappers (ORMs) also support managing schemas across multiple database environments. For example, with Prisma ORM, you can configure multiple `.env` files. Learn more at [Using multiple .env files](https://www.prisma.io/docs/orm/more/development-environment/environment-variables#using-multiple-env-files). Regardless of the schema management tool you choose, ensure that changes adhere to the guidelines for [additive](https://neon.com/docs/guides/logical-replication-schema-changes#1-apply-additive-schema-changes-to-the-subscriber-first) and [non-additive](https://neon.com/docs/guides/logical-replication-schema-changes#2-handle-non-additive-schema-changes-with-caution) schema changes. If you have suggestions, tips, or requests regarding schema management in a replication setup, please let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or through our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord. ## References - [PostgreSQL logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) - [Import a database schema](https://neon.com/docs/import/import-schema-only) --- # Source: https://neon.com/llms/guides-logical-replication-supabase-to-neon.txt # Replicate data from Supabase > The document guides users on setting up logical replication to transfer data from Supabase to Neon, detailing the necessary steps and configurations for successful data synchronization between the two platforms. ## Source - [Replicate data from Supabase HTML](https://neon.com/docs/guides/logical-replication-supabase-to-neon): The original HTML version of this documentation This guide describes how to replicate data from Supabase to Neon using native Postgres logical replication. The steps in this guide follow those described in [Replicate to another Postgres database using Logical Replication](https://supabase.com/docs/guides/database/postgres/setup-replication-external), in the _Supabase documentation_. ## Prerequisites - A Supabase project with a Postgres database containing the data you want to replicate. If you're just testing this out and need some data to play with, you can use the following statements in your Supabase SQL Editor to create a table with sample data: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` - A Neon project with a Postgres database to receive the replicated data. For information about creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). 
- Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin. - Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences. ## Prepare your Supabase source database This section describes how to prepare your source Supabase Postgres instance (the publisher) for replicating data to Neon. ### Enable logical replication Logical replication is enabled by default in Supabase. You can verify that `wal_level` is set to `logical` by running the following query in your Supabase SQL Editor or using `psql` connected to your Supabase database: ```sql SHOW wal_level; ``` The output should be: ```text wal_level ----------- logical (1 row) ``` If `wal_level` is not `logical`, contact Supabase support to enable it. ### Allow connections from Neon You need to allow inbound connections to your Supabase Postgres database from the Neon NAT Gateway IP addresses. This allows Neon to connect to your Supabase database for logical replication. Follow these steps to configure network restrictions in Supabase: 1. **Obtain Neon NAT Gateway IP Addresses**: See [NAT Gateway IP addresses](https://neon.com/docs/introduction/regions#nat-gateway-ip-addresses) for the IP addresses for your Neon project's region. You will need to allow connections from these IP addresses in your Supabase project. 2. **Configure Network Restrictions in Supabase**: - Go to your Supabase project dashboard. - Navigate to **Project Settings** > **Database** > **Network restrictions**. - Ensure you have **Owner** or **Admin** permissions for the Supabase project to configure network restrictions. - Add inbound rules to allow connections from the Neon NAT Gateway IP addresses you obtained in the previous step. Add each IP address individually. ### Obtain a direct connection string Logical replication requires a direct connection string, not a pooled connection string. 1. **Enable IPv4 Add-on**: In your Supabase project dashboard, navigate to **Project Settings** > **Add-ons**. Enable the **IPv4** add-on. This add-on is required to obtain a direct IPv4 connection string. Note that this add-on might incur extra costs. 2. **Get the Direct Connection String**: After enabling the IPv4 add-on, copy the direct connection string from the **Connect** button in the Navigation bar of your Supabase dashboard. This connection string is required to create a subscription in Neon. **Warning**: Avoid using pooled connection strings (Transaction and session poolers) for logical replication. Use the direct connection string obtained after enabling the IPv4 add-on. ### Create a publication on the source database Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. You can run the following SQL statements in your Supabase SQL Editor or using [psql](https://neon.com/docs/connect/query-with-psql-editor) to create a publication for the tables you want to replicate. - To create a publication for a specific table, use the `CREATE PUBLICATION` statement. 
For example, to create a publication for the `playing_with_neon` table: ```sql CREATE PUBLICATION my_publication FOR TABLE playing_with_neon; ``` - To create a publication for multiple tables, provide a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` **Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`. For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation. ## Prepare your Neon destination database This section describes how to prepare your Neon Postgres database (the subscriber) to receive replicated data from your Supabase Postgres instance. ### Prepare your database schema When configuring logical replication in Postgres, the tables defined in your publication on the source database you are replicating from must also exist in the destination database, and they must have the same table names and columns. You can create the tables manually in your destination database or use utilities like `pg_dump` and `pg_restore` to dump the schema from your source database and load it to your destination database. See [Import a database schema](https://neon.com/docs/import/import-schema-only) for instructions. If you're using the sample `playing_with_neon` table, you can create the same table on the destination database with the following statement in your Neon SQL Editor or using `psql`: ```sql CREATE TABLE IF NOT EXISTS playing_with_neon(id SERIAL PRIMARY KEY, name TEXT NOT NULL, value REAL); ``` ### Create a subscription After creating a publication on the source database, you need to create a subscription on your Neon destination database. 1. Use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), `psql`, or another SQL client to connect to your Neon database. 2. Create the subscription using the `CREATE SUBSCRIPTION` statement. Use the **direct connection string** you obtained from Supabase in the previous steps. ```sql CREATE SUBSCRIPTION my_subscription CONNECTION '<direct_connection_string>' PUBLICATION my_publication; ``` Replace the following placeholders in the statement: - `my_subscription`: A name you chose for the subscription. - `<direct_connection_string>`: The **direct connection string** for your Supabase database, obtained with the IPv4 add-on enabled. - `my_publication`: The name of the publication you created on the Supabase database. 3. Verify that the subscription was created in Neon by running the following query: ```sql SELECT * FROM pg_stat_subscription; subid | subname | worker_type | pid | leader_pid | relid | received_lsn | last_msg_send_time | last_msg_receipt_time | latest_end_lsn | latest_end_time --------|-----------------|-------------|------|------------|-------|--------------|-------------------------------|------------------------------|----------------|------------------------------- 216502 | my_subscription | apply | 1069 | | | 0/75B1000 | 2025-02-11 10:00:04.142994+00 | 2025-02-11 10:00:04.14277+00 | 0/75B1000 | 2025-02-11 10:00:04.142994+00 ``` The subscription (`my_subscription`) should be listed, confirming that your subscription was created successfully. **Note**: **Replication Slots Limits**: Supabase has limits on `max_replication_slots` and `max_wal_senders` which vary based on your Supabase instance size/plan.
If you encounter issues, you might need to upgrade your Supabase instance to perform logical replication, especially for larger datasets or multiple replication slots. Check the [Supabase documentation](https://supabase.com/docs/guides/platform/compute-and-disk#limits-and-constraints) for the limits on your instance size. ## Test the replication Testing your logical replication setup ensures that data is being replicated correctly from the publisher to the subscriber database. 1. Run some data-modifying queries on the source database (inserts, updates, or deletes) in your Supabase SQL Editor or using `psql`. If you're using the `playing_with_neon` table, you can use this statement to insert 10 rows: ```sql INSERT INTO playing_with_neon(name, value) SELECT LEFT(md5(i::TEXT), 10), random() FROM generate_series(1, 10) s(i); ``` 2. Perform a row count on both the Supabase source and Neon destination databases to make sure the result matches. In both databases, run: ```sql SELECT COUNT(*) FROM playing_with_neon; count ------- 20 (1 row) ``` The count should be the same in both databases, reflecting the newly inserted rows. Alternatively, you can run the following query on the subscriber (Neon) to make sure the `last_msg_receipt_time` is updated and as expected. ```sql SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time FROM pg_catalog.pg_stat_subscription; ``` ## Switch over your application After the replication operation is complete and you have verified that data is being replicated correctly, you can switch your application over to the Neon database. 1. Stop writes to your Supabase database. 2. Wait for any final transactions to be replicated to Neon. Monitor `pg_stat_subscription` in Neon until `received_lsn` and `latest_end_lsn` are close or equal, indicating minimal replication lag. 3. Update your application's connection string to point to your Neon database. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For details, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ## Reference For more information about logical replication and Postgres client utilities, refer to the following topics in the Postgres and Neon documentation: - [Postgres - Logical replication](https://www.postgresql.org/docs/current/logical-replication.html) - [Neon logical replication guide](https://neon.com/docs/guides/logical-replication-guide) - [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) - [psql](https://www.postgresql.org/docs/current/app-psql.html) - [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) --- # Source: https://neon.com/llms/guides-logical-replication-tips.txt # Logical replication tips > The document "Logical replication tips" offers guidance on configuring and optimizing logical replication in Neon, focusing on setup, performance tuning, and troubleshooting common issues. ## Source - [Logical replication tips HTML](https://neon.com/docs/guides/logical-replication-tips): The original HTML version of this documentation The following tips are based on actual customer data migrations to Neon using logical replication: - Initial data copying during logical replication can significantly increase the load on both the publisher and subscriber. For large data migrations, consider increasing compute resources (CPU and RAM) for the initial copy.
On Neon, you can do this by [enabling autoscaling](https://neon.com/docs/guides/autoscaling-guide) and selecting a larger maximum compute size. The publisher (source database instance) typically experiences higher load, as it serves other requests while the subscriber only receives replicated data. - For large datasets, avoid creating indexes when setting up the schema on the destination database (subscriber) to reduce the initial data load time. Indexes can be added back after the data copy is complete. - If you encounter replication timeout errors, consider increasing `wal_sender_timeout` on the publisher and `wal_receiver_timeout` on the subscriber to a higher value, such as 5 minutes (default is 1 minute). On Neon, adjusting these settings requires assistance from [Neon Support](https://neon.com/docs/introduction/support). - To minimize storage consumption during data replication to Neon, reduce your [restore window](https://neon.com/docs/introduction/branching#restore-window) setting. For example, set it to 1 hour or 0 during the initial copy, and restore it to the desired value afterward. - Ensure that any Postgres extensions that you depend on are also supported by Neon. For extensions and extension versions supported by Neon, see [Supported Postgres extensions](https://neon.com/docs/extensions/pg-extensions). If you find that support is missing for a particular extension or extension version that would prevent you from migrating your data to Neon, please reach out to [Neon Support](https://neon.com/docs/introduction/support). - Avoid defining publications with `FOR ALL TABLES` if you want to add or drop tables from the publication later. It is not possible to add or drop tables from a publication defined with `FOR ALL TABLES`. ```sql ALTER PUBLICATION test_publication ADD TABLE users; ERROR: publication "test_publication" is defined as FOR ALL TABLES DETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications. ALTER PUBLICATION test_publication DROP TABLE products; ERROR: publication "test_publication" is defined as FOR ALL TABLES DETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications. ``` Instead, you can create a publication for a specific table using the following syntax: ```sql CREATE PUBLICATION my_publication FOR TABLE users; ``` To create a publication for multiple tables, specify a comma-separated list of tables: ```sql CREATE PUBLICATION my_publication FOR TABLE users, departments; ``` For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html) in the PostgreSQL documentation. If you have logical replication or data migration tips you would like to share, please let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord. --- # Source: https://neon.com/llms/guides-manage-database-access.txt # Manage database access > The document outlines procedures for managing database access in Neon, detailing how to configure user roles, permissions, and authentication methods to control and secure database interactions. ## Source - [Manage database access HTML](https://neon.com/docs/guides/manage-database-access): The original HTML version of this documentation Each Neon project is created with a Postgres role that is named for your database.
For example, if your database is named `neondb`, the project is created with a default role named `neondb_owner`. --- # Source: https://neon.com/llms/guides-micronaut-kotlin.txt # Connect a Micronaut Kotlin application to Neon Postgres > The document guides users on connecting a Micronaut Kotlin application to a Postgres database hosted on Neon, detailing the configuration steps and necessary code snippets for seamless integration. ## Source - [Connect a Micronaut Kotlin application to Neon Postgres HTML](https://neon.com/docs/guides/micronaut-kotlin): The original HTML version of this documentation [Micronaut](https://micronaut.io/) is a modern, JVM-based, full-stack framework for building modular, easily testable microservice and serverless applications. This guide describes how to create a Neon Postgres database and connect to it from a Micronaut Kotlin application. The final application will expose REST endpoints to perform CRUD (Create, Read, Update, Delete) operations on a `book` table in your Neon database. To create a Neon project and access it from a Micronaut Kotlin application, you will: 1. [Create a Neon project](https://neon.com/docs/guides/micronaut-kotlin#create-a-neon-project) 2. [Create a Micronaut Kotlin project](https://neon.com/docs/guides/micronaut-kotlin#create-a-micronaut-kotlin-project) 3. [Configure your database connection](https://neon.com/docs/guides/micronaut-kotlin#configure-your-database-connection) 4. [Build the application components](https://neon.com/docs/guides/micronaut-kotlin#build-the-application-components) 5. [Run and test the application](https://neon.com/docs/guides/micronaut-kotlin#run-and-test-the-application) ## Create a Neon project If you do not have one already, create a Neon project. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. Save your connection details. You will need them in a later step. ## Create a Micronaut Kotlin project You can create a new Micronaut project using either the Micronaut CLI or the [Micronaut Launch](https://launch.micronaut.io/) website. For this guide, we will use the Micronaut CLI. > Install the Micronaut CLI by following the instructions in the [Micronaut documentation](https://micronaut.io/download/). You also need to have JDK 21 installed on your machine. Run the following command in your terminal. This command creates a new application and includes features for PostgreSQL connectivity, JDBC connection pooling (Hikari), database migrations (Flyway), data access, and YAML configuration. ```bash mn create-app with-micronaut-kotlin \ --lang=kotlin \ --jdk=21 \ --features=postgres,jdbc-hikari,flyway,data-jdbc,yaml ``` After creating the project, open the `build.gradle.kts` file and add the following configuration inside the `kotlin` block to ensure compatibility with JDK 21 and prevent potential build errors: ```kotlin // build.gradle.kts kotlin { jvmToolchain(21) } ``` ## Configure your database connection The project creation process generated a configuration file at `src/main/resources/application.yml`. You need to edit this file to add your Neon database credentials. Add the `url`, `username`, and `password` fields under the `datasources.default` section. 
Your updated `application.yml` file should look like this: ```yaml {8-10} # src/main/resources/application.yml micronaut: application: name: with-micronaut-kotlin datasources: default: url: 'jdbc:postgresql://<endpoint>/<database>?sslmode=require&channelBinding=require' username: '<username>' password: '<password>' driver-class-name: org.postgresql.Driver db-type: postgres dialect: POSTGRES flyway: datasources: default: enabled: true ``` > Replace `<endpoint>`, `<database>`, `<username>`, and `<password>` with the Neon database connection details you saved earlier. ## Build the application components Now you can create the components for a simple book inventory API: an entity, a repository, a controller, and a database migration script. ### 1. Create the database schema with Flyway Flyway handles database migrations automatically when the application starts. Create a SQL file at `src/main/resources/db/migration/V1__create_book_table.sql` to define your table schema and add some initial data. ```sql -- src/main/resources/db/migration/V1__create_book_table.sql CREATE TABLE IF NOT EXISTS book ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255) NOT NULL ); INSERT INTO book (title, author) VALUES ('The Hobbit', 'J.R.R. Tolkien'); INSERT INTO book (title, author) VALUES ('1984', 'George Orwell'); ``` ### 2. Create the entity Create a data class that maps to the `book` table. The `@Serdeable` annotation is required for Micronaut to handle JSON serialization and deserialization for your API. ```kotlin // src/main/kotlin/com/example/entity/Book.kt package com.example.entity import io.micronaut.data.annotation.GeneratedValue import io.micronaut.data.annotation.Id import io.micronaut.data.annotation.MappedEntity import io.micronaut.serde.annotation.Serdeable @MappedEntity @Serdeable data class Book( @field:Id @field:GeneratedValue var id: Long? = null, var title: String, var author: String ) ``` ### 3. Create the repository Create a repository interface that extends `CrudRepository`. This interface provides CRUD operations for the `Book` entity. ```kotlin // src/main/kotlin/com/example/repository/BookRepository.kt package com.example.repository import com.example.entity.Book import io.micronaut.data.jdbc.annotation.JdbcRepository import io.micronaut.data.model.query.builder.sql.Dialect import io.micronaut.data.repository.CrudRepository @JdbcRepository(dialect = Dialect.POSTGRES) interface BookRepository : CrudRepository<Book, Long> ``` ### 4. Create the controller Finally, create a controller to expose the REST endpoints for interacting with the books. ```kotlin // src/main/kotlin/com/example/controller/BookController.kt package com.example.controller import com.example.entity.Book import com.example.repository.BookRepository import io.micronaut.http.annotation.* import io.micronaut.scheduling.TaskExecutors import io.micronaut.scheduling.annotation.ExecuteOn @Controller("/books") class BookController(private val bookRepository: BookRepository) { @Get @ExecuteOn(TaskExecutors.IO) fun getAll(): List<Book> = bookRepository.findAll().toList() @Get("/{id}") @ExecuteOn(TaskExecutors.IO) fun getById(id: Long): Book? = bookRepository.findById(id).orElse(null) @Post @ExecuteOn(TaskExecutors.IO) fun save(@Body book: Book): Book = bookRepository.save(book) } ``` ## Run and test the application You are now ready to run your application.
1. Start the application using the Gradle wrapper: ```bash ./gradlew run ``` You should see output similar to the following: ```bash $ ./gradlew run [test-resources-service] 15:48:33.940 [main] INFO i.m.c.DefaultApplicationContext$RuntimeConfiguredEnvironment - Established active environments: [test] > Task :run __ __ _ _ | \/ (_) ___ _ __ ___ _ __ __ _ _ _| |_ | |\/| | |/ __| '__/ _ \| '_ \ / _` | | | | __| | | | | | (__| | | (_) | | | | (_| | |_| | |_ |_| |_|_|\___|_| \___/|_| |_|\__,_|\__,_|\__| 15:48:43.830 [main] INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting... 15:48:45.974 [main] INFO com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@30506c0d 15:48:45.975 [main] INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Start completed. 15:48:46.126 [main] INFO i.m.flyway.AbstractFlywayMigration - Running migrations for database with qualifier [default] 15:48:46.298 [main] INFO org.flywaydb.core.FlywayExecutor - Database: jdbc:postgresql://endpoint.neon.tech/examples?sslmode=require&channelBinding=require (PostgreSQL 17.5) 15:48:48.110 [main] INFO o.f.c.i.s.JdbcTableSchemaHistory - Schema history table "public"."flyway_schema_history" does not exist yet 15:48:48.250 [main] INFO o.f.core.internal.command.DbValidate - Successfully validated 1 migration (execution time 00:00.432s) 15:48:49.524 [main] INFO o.f.c.i.s.JdbcTableSchemaHistory - Creating Schema History table "public"."flyway_schema_history" ... 15:48:51.817 [main] INFO o.f.core.internal.command.DbMigrate - Current version of schema "public": << Empty Schema >> 15:48:52.243 [main] INFO o.f.core.internal.command.DbMigrate - Migrating schema "public" to version "1 - create book table" 15:48:54.757 [main] INFO o.f.core.internal.command.DbMigrate - Successfully applied 1 migration to schema "public", now at version v1 (execution time 00:00.969s) 15:48:55.841 [main] INFO io.micronaut.runtime.Micronaut - Startup completed in 12788ms. Server Running: http://localhost:8080 :run <============-> 92% EXECUTING [38s] > :run > IDLE ``` The logs indicate the following sequence of events: - HikariCP initializes the connection pool to the Neon Postgres database. - Flyway checks the database schema and finds that the `flyway_schema_history` table does not exist. - Flyway creates the `flyway_schema_history` table and applies the migrations present in the migration folder. - The `book` table is created as per the migration script (i.e., `V1__create_book_table.sql`). - The application starts successfully and is ready to handle requests. Now, with the application running, you can test the API endpoints. 2. Test the API endpoints using `curl` or any API client: ```bash # Get all books curl http://localhost:8080/books # Expected Output: [{"id":1,"title":"The Hobbit","author":"J.R.R. Tolkien"},{"id":2,"title":"1984","author":"George Orwell"}] # Get a specific book by ID curl http://localhost:8080/books/1 # Expected Output: {"id":1,"title":"The Hobbit","author":"J.R.R. Tolkien"} # Create a new book curl -X POST \ -H "Content-Type: application/json" \ -d '{"title":"The Great Gatsby","author":"F. Scott Fitzgerald"}' \ http://localhost:8080/books # Expected Output: {"id":3,"title":"The Great Gatsby","author":"F. Scott Fitzgerald"} ``` You have successfully connected a Micronaut Kotlin application to your Neon Postgres database! ## Source code You can find the source code for the application described in this guide on GitHub.
- [Get started with Micronaut Kotlin and Neon](https://github.com/neondatabase/examples/tree/main/with-micronaut-kotlin) ## Resources - [Micronaut Documentation](https://docs.micronaut.io/) - [Micronaut API Reference](https://docs.micronaut.io/4.10.7/api/) - [Micronaut Schema Migration with Flyway](https://guides.micronaut.io/latest/micronaut-flyway-maven-java.html) - [Micronaut Data JDBC documentation](https://micronaut-projects.github.io/micronaut-data/latest/guide/index.html#jdbc) - [Micronaut Hikari JDBC Connection Pool documentation](https://micronaut-projects.github.io/micronaut-sql/latest/guide/index.html#jdbc) - [Flyway](https://www.red-gate.com/products/flyway/community/) --- # Source: https://neon.com/llms/guides-multitenancy.txt # Multitenancy with Neon > The "Multitenancy with Neon" documentation explains how to implement and manage multitenancy in Neon, detailing configuration steps and best practices for efficiently handling multiple tenants within a single database instance. ## Source - [Multitenancy with Neon HTML](https://neon.com/docs/guides/multitenancy): The original HTML version of this documentation With its serverless and API-first nature, Neon is an excellent choice for building database-per-user applications (or apps where each user/customer has their own Postgres database). Neon is particularly well-suited for architectures that prioritize maximum database isolation, achieving the equivalent of instance-level isolation. This guide will help you get started with implementing this architecture. ## Multitenant architectures in Postgres In a multitenant architecture, a single system supports multiple users (tenants), each with access to manage their own data. In a database like Postgres, this setup requires careful structuring to keep each tenant's data private, secure, and isolated—all while remaining efficient to manage and scale. Following these principles, there are three primary routes you could follow to implement multitenant architectures in Postgres: - Creating one separate database per user (the focus of this guide); - Creating one schema-per-user, within the same database; - And keeping your tenants separate within a shared schema. To better situate our use case, let's briefly outline the differences between these architectures: ### Database-per-user In a database-per-user design, each user's data is fully isolated in its own database, eliminating any risk of data overlap. This setup is straightforward to design and highly secure. However, implementing this in managed Postgres databases has traditionally been challenging. For users of AWS RDS or similar services, two primary options have existed for achieving a database-per-user design: 1. **Using one large instance to host multiple user databases.** This option can be tempting due to the reduced number of instances to manage and (probably) lower infrastructure costs. But the trade-off is a higher demand for DBA expertise—this is a design that requires careful planning, especially at scale. Hosting all users on shared resources can impact performance, particularly if users have varying workload patterns, and if the instance fails, all customers are affected. Migrations and upgrades also become complex. 2. **Handling multiple instances, each hosting a single production database.** In this scenario, each instance scales independently, preventing resource competition between users and minimizing the risk of widespread failures. 
This is a much simpler design from the perspective of the database layer, but managing hundreds of instances in AWS can get very costly and complex. As the number of instances grows into the thousands, management becomes nearly impossible. As we'll see later in this guide, Neon offers a third alternative by providing a logical equivalent to the instance-per-customer model with near-infinite scalability, without the heavy DevOps overhead. This solution involves creating one Neon project per customer. ### Schema-per-user But before focusing on database-per-user, let's briefly cover another multitenancy approach in Postgres: the schema-per-user model. Instead of isolating data by database, this design places all users in a single database, with a unique schema for each. In Neon, we generally don't recommend this approach for SaaS applications, unless this is a design you're already experienced with. This approach doesn't reduce operational complexity or costs compared to the many-databases approach, but it does introduce additional risks; it also limits the potential of Neon features like instant Point-in-Time Recovery (PITR), which in a project-per-customer model allows you to restore customer databases independently without impacting the entire fleet's operations. More about this later. ### Shared schema Lastly, Postgres's robustness actually makes it possible to ensure tenant isolation within a shared schema. In this model, all users' data resides within the same tables, with isolation enforced through foreign keys and row-level security. While this is a common choice—and can be a good starting point if you're just beginning to build your app—we still recommend the project-per-user route if possible. Over time, as your app scales, meeting requirements within a shared schema setup becomes increasingly challenging. Enforcing compliance and managing access restrictions at the schema level grows more complex as you add more users. You'll also need to manage very large Postgres tables, as all customer data is stored in the same tables. As these tables grow, additional Postgres fine-tuning will be required to maintain performance. ## Setting up Neon for Database-per-user Now that we've reviewed your options, let's focus on the design choice we recommend for multitenancy in Neon: creating isolated databases for each user, with each database hosted on its own project. ### Database-per-user = Project-per-user We recommend setting up one project per user, rather than, for example, using a branch per customer. A Neon [project](https://neon.com/docs/manage/overview) serves as the logical equivalent of an "instance" but without the management overhead. Here's why we suggest this design: - **Straightforward scalability** Instead of learning how to handle large Postgres databases, this model allows you to simply create a new project when a user joins—something that can be handled automatically via the Neon API. This approach is very cost-effective, as we'll see below. Databases remain small, keeping management at the database level simple. - **Better performance with lower costs** This design is also highly efficient in terms of compute usage. Each project has its own dedicated compute, which scales up and down independently per customer; a spike in usage for one tenant doesn't affect others, and inactive projects remain practically free.
- **Complete data isolation** By creating a dedicated project for each customer, their data remains completely separate from others, ensuring the highest level of security and privacy. - **Easier regional compliance** Each Neon project can be deployed in a specific region, making it easy to host customer data closer to their location. - **Per-customer PITR** Setting up a project per customer allows you to run [PITR on individual customers](https://neon.com/docs/guides/branch-restore) instantly, without risking disruption to your entire fleet. ## Managing many projects As you scale, following a project-per-user design means eventually managing thousands of Neon projects. This might sound overwhelming, but it's much simpler in practice than it seems—some Neon users [manage hundreds of thousands of projects](https://neon.com/blog/how-retool-uses-retool-and-the-neon-api-to-manage-300k-postgres-databases) with just one engineer. Here's why that's possible: - **You can manage everything with the Neon API** The API allows you to automate every step of project management, including setting resource limits per customer and configuring resources. - **No infrastructure provisioning** New Neon projects are ready in milliseconds. You can set things up to create new projects instantly when new customers join, without the need to manually pre-provision instances. - **You only pay for active projects** Empty projects are virtually free thanks to Neon's [scale-to-zero](https://neon.com/docs/guides/auto-suspend-guide) feature. If, on a given day, you have a few hundred projects that were only active for a few minutes, that's fine—your bill won't suffer. - **Subscription plans** To support this usage pattern, our paid plans include a generous number of projects. ### Dev/test environments In Neon, [database branching](https://neon.com/docs/introduction/branching) is a powerful feature that enables you to create fast, isolated copies of your data for development and testing. You can use child branches as ephemeral environments that mirror your main testing database but operate independently, without adding to storage costs. This feature is a game-changer for dev/test workflows, as it reduces the complexity of managing multiple test databases while lowering non-prod costs significantly. To handle [dev/test](https://neon.com/use-cases/dev-test) in a project-per-user design, consider creating a dedicated Neon project as your non-prod environment. This Neon project can serve as a substitute for the numerous non-prod instances you might maintain in RDS. The methodology: - **Within the non-prod project, load your testing data into the production branch.** This production branch will serve as the primary source for all dev/test environments. - **Create ephemeral environments via child branches.** For each ephemeral environment, create a child branch from the production branch. These branches are fully isolated in terms of resources and come with an up-to-date copy of your testing dataset. - **Automate the process.** Use CI/CD and automations to streamline your workflow. You can reset child branches with one click to keep them in sync with the production branch as needed, maintaining data consistency across your dev/test environments. ## Designing a Control Plane Once you have everything set up, as your number of projects grows, you might want to create a control plane to stay on top of everything in a centralized manner. 
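To make this concrete, the core signup path of such a control plane is small: create a Neon project for the new customer via the API, then record it in your catalog. Here is a minimal TypeScript sketch using the `@neondatabase/api-client` package (the same client used in the example scripts later in this guide); `onCustomerSignup` and `saveToCatalog` are hypothetical names standing in for your own signup hook and catalog persistence:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const neonApi = createApiClient({ apiKey: process.env.NEON_API_KEY! });

// Hypothetical placeholder for your own catalog persistence (see the next section).
async function saveToCatalog(row: { customer: string; projectId: string; region: string }) {
  // INSERT the tenant-to-project mapping into your catalog database here.
}

// Hypothetical signup hook: one isolated Neon project per customer.
export async function onCustomerSignup(customerName: string, region = 'aws-us-east-1') {
  // Create an isolated project for this tenant; ready in milliseconds.
  const { data } = await neonApi.createProject({
    project: {
      name: customerName,
      pg_version: 16,
      region_id: region,
    },
  });

  // Record the tenant-to-project mapping in the catalog.
  await saveToCatalog({
    customer: customerName,
    projectId: data.project.id,
    region,
  });

  return data.project;
}
```

The catalog database this sketch writes to is the first piece of the control plane, covered next.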
### The catalog database The catalog database is a centralized repository that tracks and manages all Neon projects and databases. It holds records for every Neon project your system creates. You can also use it to keep track of tenant-specific configurations, such as database names, regions, schema versions, and so on. You can set up your catalog database as a separate Neon project. When it's time to design its schema, consider these tips: - Use foreign keys to link tables like `project` and `payment` to `customer`. - Choose data types carefully: `citext` for case-insensitive text, `uuid` for unique identifiers to obscure sequence data, and `timestamptz` for tracking real-world time. - Track key operational data, like `schema_version`, in the `project` table. - Index wisely! While the catalog will likely remain smaller than user databases, it will grow—especially with recurring events like payments—so indexing is crucial for control plane performance at scale. - Start with essential data fields and plan for future extensions as needs evolve. - Standard Neon metadata (e.g., compute size, branch info) is accessible via the console. Avoid duplicating it in the catalog database unless accessing it separately adds significant complexity. ### Automations To effectively scale a multitenant architecture, leveraging automation tools is essential. The Neon API allows you to automate various tasks, such as creating and managing projects, setting usage limits, and configuring resources. Beyond the API, Neon offers several integrations to streamline your workflows: - **GitHub Actions** Neon's [GitHub integration](https://neon.com/docs/guides/neon-github-integration) allows you to automate database branching workflows directly from your repositories. By connecting a Neon project to a GitHub repository, you can set up actions that create or delete database branches in response to pull request events, facilitating isolated testing environments for each feature or bug fix. - **Vercel Integration** You can [connect your Vercel projects to Neon](https://neon.com/docs/guides/neon-github-integration), creating database branches for each preview deployment. - **CI/CD pipelines** By integrating Neon branching into your CI/CD pipelines, you can simplify your dev/test workflows, creating and deleting ephemeral environments automatically as child branches. - **Automated backups to your own S3** If you must keep your own data copy, you can [schedule regular backups](https://neon.com/docs/manage/backups-aws-s3-backup-part-2) using tools like `pg_dump` in conjunction with GitHub Actions. ## The Application Layer Although the application layer isn't our main focus, a common question developers ask us when approaching a multitenant architecture is: _Do I deploy one application environment per database, or connect all databases to a single application environment?_ Both approaches are viable, each with its own pros and cons. ### Shared application environments #### Pros of shared environments - Managing a single application instance minimizes operational complexity. - Updates and new features are easy to implement since changes apply universally. - Operating one environment reduces infrastructure and maintenance costs. #### Cons of shared environments - A single application environment makes it difficult to offer tailored experiences for individual customers. - Compliance becomes challenging when users' databases span multiple regions.
- Updates apply to all users simultaneously, which can be problematic for those needing specific software versions. - A single environment heightens the risk of data breaches, as vulnerabilities can impact all users. #### Advice - **Implement robust authorization** Ensure secure access as all users share the same application environment. - **Define user authentication and data routing** - Users provide their organization details during login. - Users access the application via an organization-specific subdomain. - The system identifies the user's organization based on their credentials. - **Monitor usage and performance** Regularly track application usage to prevent performance bottlenecks. - **Plan maintenance windows carefully** Minimize disruptions for all users by scheduling maintenance during low-usage periods. ### Isolated application environments In this architecture, each customer instead has a dedicated application environment alongside their own database. Similar to the shared environment option, this design has pros and cons: #### Pros of isolated environments - Since each customer can now have a unique application environment, it's easier to implement personalized features and configurations, to keep separate versions for particular customers, and so on. - Compliance is also simpler if you're handling multiple regions. Deploying the application in multiple regions can also help with latency. - This design also opens the door for customers to control their own upgrade schedules, e.g., via defining their own maintenance windows. #### Cons of isolated environments - This design has an obvious tradeoff: it comes with higher complexity of deployment, monitoring, and maintenance. - You'll need to think about how to achieve optimal resource utilization across multiple environments, and how to keep observability on point to diagnose issues. - Operating separate environments for each customer might also lead to higher costs. #### Advice If you decide to implement isolated environments, here's some advice to consider: - Design your architecture to accommodate growth, even if your setup is small today. - Just as you do with your Neon projects, take advantage of automation tools to streamline the creation and management of your application environments. - Set up proper monitoring to track key metrics across all environments. ## Migrating Schemas In a database-per-user design, it is common to have the same schema for all users/databases. Any changes to the user schema will most likely be rolled out to all individual databases simultaneously. In this section, we teach you how to use Drizzle ORM, GitHub Actions, the Neon API, and a couple of custom template scripts to manage many databases using the same database schema. ### Example app To walk you through it, we've created example code [in this repository](https://github.com/PaulieScanlon/neon-database-per-tenant-drizzle). The example includes 4 Neon databases, all using Postgres 16 and all deployed to AWS us-east-1. The schema consists of three tables: `users`, `projects`, and `tasks`. You can see the schema here: [schema.ts](https://github.com/PaulieScanlon/neon-database-per-tenant-drizzle/blob/main/src/db/schema.ts), and for good measure, here's the raw SQL equivalent: [schema.sql](https://github.com/PaulieScanlon/neon-database-per-tenant-drizzle/blob/main/schema.sql). This default schema is referenced by each of the `drizzle.config.ts` files that have been created for each customer.
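The actual schema lives in the repository linked above. As a rough sketch of its shape (the column names here are illustrative assumptions, not copied from the repo), a Drizzle schema covering `users`, `projects`, and `tasks` looks something like this:

```typescript
// src/db/schema.ts (illustrative sketch; see the repository for the real schema)
import { pgTable, serial, integer, text, timestamp } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull(),
});

export const projects = pgTable('projects', {
  id: serial('id').primaryKey(),
  userId: integer('user_id')
    .references(() => users.id)
    .notNull(),
  name: text('name').notNull(),
});

export const tasks = pgTable('tasks', {
  id: serial('id').primaryKey(),
  projectId: integer('project_id')
    .references(() => projects.id)
    .notNull(),
  title: text('title').notNull(),
  createdAt: timestamp('created_at', { withTimezone: true }).defaultNow(),
});
```

Because every customer database shares this one schema, keeping the fleet in sync reduces to running the same migration against each database, which is exactly what the workflow below automates.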
### Workflow using Drizzle ORM and GitHub Actions #### Creating Neon projects via a CLI script Our example creates new Neon projects via the command line, using the following script: ```javascript // src/scripts/create.js import { Command } from 'commander'; import { createApiClient } from '@neondatabase/api-client'; import 'dotenv/config'; const program = new Command(); const neonApi = createApiClient({ apiKey: process.env.NEON_API_KEY, }); program.option('-n, --name <name>', 'Name of the company').parse(process.argv); const options = program.opts(); if (options.name) { console.log(`Company Name: ${options.name}`); (async () => { try { const response = await neonApi.createProject({ project: { name: options.name, pg_version: 16, region_id: 'aws-us-east-1', }, }); const { data } = response; console.log(data); } catch (error) { console.error('Error creating project:', error); } })(); } else { console.log('No company name provided'); } ``` This script utilizes the `commander` library to create a simple command-line interface (CLI) and the Neon API's `createProject` method to set up a new project. Ensure that your Neon API key is stored in an environment variable named `NEON_API_KEY`. To execute the script and create a new Neon project named "ACME Corp" with PostgreSQL version 16 in the aws-us-east-1 region, run: ```bash npm run create -- --name="ACME Corp" ``` In this example, the same approach was used to create the following projects: - ACME Corp - Payroll Inc - Finance Co - Talent Biz To interact with the Neon API, you'll need to generate an API key. For more information, refer to the Neon documentation on [creating an API key](https://api-docs.neon.tech/reference/createapikey). #### Generating a workflow to prepare for migrations The next script inspects your existing Neon projects and generates the per-project configuration and migration workflow files: ```javascript // src/scripts/generate.js import { existsSync, mkdirSync, writeFileSync } from 'fs'; import { execSync } from 'child_process'; import { createApiClient } from '@neondatabase/api-client'; import { Octokit } from 'octokit'; import 'dotenv/config'; import { encryptSecret } from '../utils/encrypt-secret.js'; import { drizzleConfig } from '../templates/drizzle-config.js'; import { githubWorkflow } from '../templates/github-workflow.js'; const octokit = new Octokit({ auth: process.env.PERSONAL_ACCESS_TOKEN }); const neonApi = createApiClient({ apiKey: process.env.NEON_API_KEY }); const repoOwner = 'neondatabase-labs'; const repoName = 'neon-database-per-tenant-drizzle'; let secrets = []; (async () => { // Ensure configs directory exists if (!existsSync('./configs')) { mkdirSync('./configs'); } // Ensure .github/workflows directory exists if (!existsSync('./.github/workflows')) { mkdirSync('./.github/workflows', { recursive: true }); } try { // Get all projects const response = await neonApi.listProjects(); const { projects } = response.data; // Loop through each project for (const project of projects) { // Get connection details for the project const connectionDetails = await neonApi.getConnectionDetails({ projectId: project.id, branchId: project.default_branch_id, }); const { connection_string } = connectionDetails.data; // Create a drizzle config file for each project const configFileName = `${project.name.toLowerCase().replace(/\s+/g, '-')}.config.ts`; writeFileSync(`./configs/${configFileName}`, drizzleConfig(connection_string, project.name)); // Create a GitHub workflow file for each project const workflowFileName = `${project.name.toLowerCase().replace(/\s+/g, '-')}.yml`; writeFileSync( `./.github/workflows/${workflowFileName}`,
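// githubWorkflow renders this project's migration workflow from the template shown below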
githubWorkflow(project.name, configFileName) ); // Encrypt the connection string for GitHub Actions const publicKey = await octokit.request( 'GET /repos/{owner}/{repo}/actions/secrets/public-key', { owner: repoOwner, repo: repoName, } ); const secretName = `${project.name.toUpperCase().replace(/\s+/g, '_')}_CONNECTION_STRING`; const encryptedValue = await encryptSecret(connection_string, publicKey.data.key); secrets.push({ secret_name: secretName, encrypted_value: encryptedValue, key_id: publicKey.data.key_id, }); } // Output instructions for setting up GitHub secrets console.log('Generated config files and workflows for all projects.'); console.log('\nTo set up GitHub secrets, run the following commands:'); for (const secret of secrets) { console.log(`\nnpx octokit request PUT /repos/${repoOwner}/${repoName}/actions/secrets/${secret.secret_name} \\ -H "Accept: application/vnd.github.v3+json" \\ -f encrypted_value="${secret.encrypted_value}" \\ -f key_id="${secret.key_id}"`); } } catch (error) { console.error('Error generating files:', error); } })(); ``` This script automates the setup process for managing multiple Neon databases. It: 1. Retrieves all Neon projects using the Neon API. 2. For each project, it generates a Drizzle configuration file with the appropriate connection string. 3. Creates a GitHub workflow file for each project to handle schema migrations. 4. Encrypts the connection strings for secure storage as GitHub secrets. 5. Outputs instructions for setting up the required GitHub secrets. The script uses template files for the Drizzle configuration and GitHub workflow, which are defined in separate modules: ```javascript // src/templates/drizzle-config.js export const drizzleConfig = (connectionString, projectName) => { return `import type { Config } from 'drizzle-kit'; export default { schema: './src/db/schema.ts', out: './drizzle', driver: 'pg', dbCredentials: { connectionString: process.env.${projectName.toUpperCase().replace(/\s+/g, '_')}_CONNECTION_STRING || '${connectionString}', }, verbose: true, strict: true, } satisfies Config; `; }; ``` ```javascript // src/templates/github-workflow.js export const githubWorkflow = (projectName, configFileName) => { const secretName = projectName.toUpperCase().replace(/\s+/g, '_'); const jobName = projectName.toLowerCase().replace(/\s+/g, '-'); return `name: ${projectName} DB Migration on: push: branches: - main paths: - 'src/db/schema.ts' workflow_dispatch: jobs: migrate-${jobName}: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v3 - name: Setup Node.js uses: actions/setup-node@v3 with: node-version: '18' cache: 'npm' - name: Install dependencies run: npm ci - name: Run migration env: ${secretName}_CONNECTION_STRING: \${{ secrets.${secretName}_CONNECTION_STRING }} run: npx drizzle-kit push:pg --config=./configs/${configFileName} `; }; ``` The encryption utility for GitHub secrets is implemented as follows: ```javascript // src/utils/encrypt-secret.js import { createPublicKey, publicEncrypt } from 'crypto'; import { Buffer } from 'buffer'; export const encryptSecret = async (secret, publicKeyString) => { const publicKey = createPublicKey({ key: Buffer.from(publicKeyString, 'base64'), format: 'der', type: 'spki', }); const encryptedSecret = publicEncrypt( { key: publicKey, padding: 1, // RSA_PKCS1_PADDING }, Buffer.from(secret) ); return encryptedSecret.toString('base64'); }; ``` To generate the configuration files and GitHub workflows for all your Neon projects, run: ```bash npm run generate ``` 
This will create: 1. A Drizzle configuration file for each project in the `configs` directory. 2. A GitHub workflow file for each project in the `.github/workflows` directory. 3. Instructions for setting up the required GitHub secrets. #### Running migrations Once everything is set up, you can run migrations manually for a specific project using: ```bash npx drizzle-kit push:pg --config=./configs/acme-corp.config.ts ``` Or, if you've set up the GitHub workflows as described, migrations will automatically run whenever you push changes to the `src/db/schema.ts` file on the main branch. ### S3 Backups In addition to managing schemas, you might want to set up regular backups of your databases. This section explains how to configure AWS IAM roles and policies for GitHub Actions to securely access S3 for backing up your Neon databases. #### AWS IAM configuration First, GitHub must be added as an identity provider to allow the Action to use your AWS credentials. To create a new Identity Provider, navigate to **IAM > Access Management > Identity Providers**, and click **Add provider**. On the next screen, select **OpenID Connect** and add the following to the **Provider URL** and **Audience** fields. 1. Provider URL: https://token.actions.githubusercontent.com 2. Audience: sts.amazonaws.com Now, you must create a role, which is an identity that you can assume to obtain temporary security credentials for specific tasks or actions within AWS. Navigate to **IAM > Access Management > Roles**, and click **Create role**. On the next screen, you select a trusted identity for the role: select **Web Identity**, then choose `token.actions.githubusercontent.com` from the **Identity Provider** dropdown menu. Once you select the Identity Provider, you'll be shown a number of fields to fill out. Select `sts.amazonaws.com` from the **Audience** dropdown menu, then fill out the GitHub repository details as per your requirements. When you're ready, click **Next**. You can skip selecting anything on the **Add permissions** screen and click **Next** to continue. On this screen, give the **Role** a name and description. You'll use the Role name in the code for the GitHub Action. When you're ready, click **Create role**. Now you need to create a policy for the role. Navigate to **IAM > Access Management > Policies**, and click **Create policy**. On the next screen, select the **JSON** tab and paste the following policy. This policy allows the role to list, get, put, and delete objects in the specified S3 bucket. Replace `your-bucket-name` with the name of your S3 bucket. ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::your-bucket-name"] }, { "Effect": "Allow", "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"], "Resource": ["arn:aws:s3:::your-bucket-name/*"] } ] } ``` Click **Next** and give the policy a name and description. When you're ready, click **Create policy**. Now you need to attach the policy to the role. Navigate to **IAM > Access Management > Roles**, and click on the role you created earlier. Click **Add permissions**, then **Attach policies**. Search for the policy you just created, select it, and click **Add permissions**.
#### GitHub secrets You'll need to add the following secrets to your GitHub repository: - `AWS_ACCOUNT_ID`: Your AWS account ID - `IAM_ROLE`: The name of the role you created earlier; in my case, this is `neon-multiple-db-s3-backups-github-action` ### Scheduled pg_dump/restore GitHub Action Before diving into the code, here's the example setup in the Neon console dashboard: three databases set up for three fictional customers, all running Postgres 16 and all deployed to us-east-1. We will be backing up each database into its own folder within an S3 bucket, with different schedules and retention periods. All the code in this example lives [in this repository](https://github.com/neondatabase-labs/neon-multiple-db-s3-backups). Using the same naming conventions, there is a workflow file for each database in the `.github/workflows` folder in the repository, including `acme-analytics-prod.yml` and `paycorp-payments-prod.yml`. All the Actions are structurally the same (besides the name of the file), but they differ in several areas. These are: 1. The workflow name 2. The `DATABASE_URL` 3. The `RETENTION` period For example, in `acme-analytics-prod.yml`, the workflow name is `acme-analytics-prod`, the `DATABASE_URL` points to `secrets.ACME_ANALYTICS_PROD`, and the `RETENTION` period is 7 days. Here's the full Action, and below the code snippet, we'll explain how it all works. ```yaml # .github/workflows/acme-analytics-prod.yml name: acme-analytics-prod on: schedule: - cron: '0 0 * * *' # Runs at midnight UTC workflow_dispatch: jobs: db-backup: runs-on: ubuntu-latest permissions: id-token: write env: RETENTION: 7 DATABASE_URL: ${{ secrets.ACME_ANALYTICS_PROD }} IAM_ROLE: ${{ secrets.IAM_ROLE }} AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }} S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }} AWS_REGION: 'us-east-1' PG_VERSION: '16' steps: - name: Install PostgreSQL run: | sudo apt install -y postgresql-common yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh sudo apt install -y postgresql-${{ env.PG_VERSION }} - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }} aws-region: ${{ env.AWS_REGION }} - name: Set file, folder and path variables run: | GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').gz" FOLDER_NAME="${{ github.workflow }}" UPLOAD_PATH="s3://${{ env.S3_BUCKET_NAME }}/${FOLDER_NAME}/${GZIP_NAME}" echo "GZIP_NAME=${GZIP_NAME}" >> $GITHUB_ENV echo "FOLDER_NAME=${FOLDER_NAME}" >> $GITHUB_ENV echo "UPLOAD_PATH=${UPLOAD_PATH}" >> $GITHUB_ENV - name: Create folder if it doesn't exist run: |
if ! aws s3api head-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" 2>/dev/null; then aws s3api put-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" fi - name: Run pg_dump run: | /usr/lib/postgresql/${{ env.PG_VERSION }}/bin/pg_dump ${{ env.DATABASE_URL }} | gzip > "${{ env.GZIP_NAME }}" - name: Empty bucket of old files run: | THRESHOLD_DATE=$(date -d "-${{ env.RETENTION }} days" +%Y-%m-%dT%H:%M:%SZ) aws s3api list-objects --bucket ${{ env.S3_BUCKET_NAME }} --prefix "${{ env.FOLDER_NAME }}/" --query "Contents[?LastModified<'${THRESHOLD_DATE}'] | [?ends_with(Key, '.gz')].{Key: Key}" --output text | while read -r file; do aws s3 rm "s3://${{ env.S3_BUCKET_NAME }}/${file}" done - name: Upload to bucket run: | aws s3 cp "${{ env.GZIP_NAME }}" "${{ env.UPLOAD_PATH }}" --region ${{ env.AWS_REGION }} ``` Starting from the top, there are a few configuration options: #### Action configuration ```yaml name: acme-analytics-prod on: schedule: - cron: '0 0 * * *' # Runs at midnight UTC workflow_dispatch: ``` - `name`: This is the workflow name and will also be used when creating the folder in the S3 bucket. - `cron`: This determines how often the Action will run; take a look at the GitHub docs where the [POSIX cron syntax](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule) is explained. #### Environment variables ```yaml env: RETENTION: 7 DATABASE_URL: ${{ secrets.ACME_ANALYTICS_PROD }} IAM_ROLE: ${{ secrets.IAM_ROLE }} AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }} S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }} AWS_REGION: 'us-east-1' PG_VERSION: '16' ``` - `RETENTION`: This determines how long a backup file should remain in the S3 bucket before it's deleted. - `DATABASE_URL`: This is the Neon Postgres connection string for the database you're backing up. - `IAM_ROLE`: This is the name of the AWS IAM Role. - `AWS_ACCOUNT_ID`: This is your AWS Account ID. - `S3_BUCKET_NAME`: This is the name of the S3 bucket where all backups are being stored. - `AWS_REGION`: This is the region where the S3 bucket is deployed. - `PG_VERSION`: This is the version of Postgres to install. #### GitHub Secrets As mentioned above, several of these environment variables are defined using secrets. These variables can be added under **Settings > Secrets and variables > Actions** in your GitHub repository, including the connection string for the fictional ACME Analytics Prod database. #### Action steps **Install Postgres** This step installs Postgres into the GitHub Action's virtual environment. The version to install is defined by the `PG_VERSION` environment variable. ```yaml - name: Install PostgreSQL run: | sudo apt install -y postgresql-common yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh sudo apt install -y postgresql-${{ env.PG_VERSION }} ``` **Configure AWS credentials** This step configures AWS credentials within the GitHub Action virtual environment, allowing the workflow to interact with AWS services securely. ```yaml - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }} aws-region: ${{ env.AWS_REGION }} ``` **Set file, folder and path variables** In this step I've created three variables that are all output to `GITHUB_ENV`. This allows me to access the values from other steps in the Action.
```yaml - name: Set file, folder and path variables run: | GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').gz" FOLDER_NAME="${{ github.workflow }}" UPLOAD_PATH="s3://${{ env.S3_BUCKET_NAME }}/${FOLDER_NAME}/${GZIP_NAME}" echo "GZIP_NAME=${GZIP_NAME}" >> $GITHUB_ENV echo "FOLDER_NAME=${FOLDER_NAME}" >> $GITHUB_ENV echo "UPLOAD_PATH=${UPLOAD_PATH}" >> $GITHUB_ENV ``` The three variables are as follows: 1. `GZIP_NAME`: The name of the `.gz` file, derived from the date, which produces a file name similar to `October-21-2024@07:53:02.gz` 2. `FOLDER_NAME`: The folder where the `.gz` files are to be uploaded 3. `UPLOAD_PATH`: The full path that includes the S3 bucket name, folder name, and `.gz` file name **Create folder if it doesn't exist** This step creates a new folder (if one doesn't already exist) inside the S3 bucket using the `FOLDER_NAME` as defined in the previous step. ## Final remarks You can create as many of these Actions as you need. Just be careful to double-check the `DATABASE_URL` to avoid backing up a database to the wrong folder. **Important**: GitHub Actions will time out after ~6 hours. The size of your database and how you've configured it will determine how long the `pg_dump` step takes. If you do experience timeout issues, you can self-host [GitHub Action runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners). --- # Source: https://neon.com/llms/guides-neon-features.txt # Neon feature guides > The Neon feature guides document details the functionalities and usage of Neon's platform features, assisting users in effectively managing and optimizing their database operations within the Neon environment. ## Source - [Neon feature guides HTML](https://neon.com/docs/guides/neon-features): The original HTML version of this documentation ## Autoscaling Automatically scale compute resources up and down based on demand. - [Learn about autoscaling](https://neon.com/docs/introduction/autoscaling): Find out how autoscaling can reduce your costs. - [Enable autoscaling](https://neon.com/docs/guides/autoscaling-guide): Enable autoscaling to automatically scale compute resources on demand ## Scale to zero Enable or disable scale to zero for your Neon computes. - [Learn about scale to zero](https://neon.com/docs/introduction/scale-to-zero): Discover how Neon can reduce your compute to zero when not in use - [Configure scale to zero](https://neon.com/docs/guides/scale-to-zero-guide): Enable or disable scale to zero to control if your compute suspends due to inactivity ## Branching Branch data the same way you branch your code.
- [Learn about branching](https://neon.com/docs/introduction/branching): With Neon, you can instantly branch your data in the same way that you branch your code - [Instant restore](https://neon.com/docs/guides/branching-pitr): Restore your data to a past state with database branching - [Test queries on a branch](https://neon.com/docs/guides/branching-test-queries): Use branching to test queries before running them in production - [Branching with the CLI](https://neon.com/docs/guides/branching-neon-cli): Create and manage branches with the Neon CLI - [Branching with the API](https://neon.com/docs/guides/branching-neon-api): Create and manage branches with the Neon API - [Branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions): Automate branching with GitHub Actions - [Refresh a branch](https://neon.com/docs/guides/branch-refresh): Refresh a development branch with the Neon API ## Logical replication Replicate data from Neon to external data platforms and services. - [Logical replication guide](https://neon.com/docs/guides/logical-replication-guide): Get started with logical replication in Neon - [Logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts): Learn about Postgres logical replication concepts - [Logical replication commands](https://neon.com/docs/guides/logical-replication-manage): Commands for managing your logical replication configuration - [Logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon): Information about logical replication specific to Neon ## Read replicas Learn how Neon read replicas can help you scale and manage read-only workloads. - [Learn about read replicas](https://neon.com/docs/introduction/read-replicas): Learn how Neon maximizes scalability and more with read replicas - [Create and manage Read Replicas](https://neon.com/docs/guides/read-replica-guide): Learn how to create, connect to, configure, delete, and monitor read replicas - [Scale your app with Read Replicas](https://neon.com/docs/guides/read-replica-integrations): Scale your app with read replicas using built-in framework support - [Run analytics queries with Read Replicas](https://neon.com/docs/guides/read-replica-data-analysis): Leverage read replicas for running data-intensive analytics queries - [Run ad-hoc queries with Read Replicas](https://neon.com/docs/guides/read-replica-adhoc-queries): Leverage read replicas for running ad-hoc queries - [Provide read-only access with Read Replicas](https://neon.com/docs/guides/read-only-access-read-replicas): Leverage read replicas to provide read-only access to your data ## Time Travel Travel back in time to view your database's history. - [Learn about Time Travel](https://neon.com/docs/guides/time-travel-assist): Learn how to query point-in-time connections against your data's history - [Time Travel tutorial](https://neon.com/docs/guides/time-travel-tutorial): Use Time Travel to analyze changes made to your database over time ## Schema Diff Compare your database branches. - [Learn about Schema Diff](https://neon.com/docs/guides/schema-diff): Learn how to use Neon's Schema Diff tool to compare branches of your database - [Schema Diff tutorial](https://neon.com/docs/guides/schema-diff-tutorial): Step-by-step guide showing you how to compare two development branches using Schema Diff ## Project collaboration Invite other users to collaborate on your Neon project. 
- [Collaborate on your Neon project](https://neon.com/docs/guides/project-collaboration-guide): Give other users access to your project from the Neon Console, API, and CLI ## IP Allow Limit access to trusted IP addresses. - [Define your IP allowlist](https://neon.com/docs/introduction/ip-allow): Learn how to limit database access to trusted IP addresses ## Protected branches Protect your production or sensitive data. - [Configure protected branches](https://neon.com/docs/guides/protected-branches): Learn how to use Neon's protected branches feature to secure access to critical data ## Private Networking Secure your database connections with private access. - [Private Networking](https://neon.com/docs/guides/neon-private-networking): Learn how to connect your application to a Neon database via AWS PrivateLink, bypassing the open internet --- # Source: https://neon.com/llms/guides-neon-github-integration.txt # The Neon GitHub integration > The document outlines the process for integrating Neon with GitHub, enabling users to automate workflows and manage database operations directly from their GitHub repositories. ## Source - [The Neon GitHub integration HTML](https://neon.com/docs/guides/neon-github-integration): The original HTML version of this documentation The Neon GitHub integration connects your Neon project to a GitHub repository, streamlining database development within your overall application development workflow. For instance, you can configure GitHub Actions to create a database branch for each pull request and automatically apply schema changes to that database branch. To help you get started, we provide a [sample GitHub Actions workflow](https://neon.com/docs/guides/neon-github-integration#add-the-github-actions-workflow-to-your-repository). ## How it works The integration installs the GitHub App, letting you select which repositories you want to make accessible to Neon. When you connect a Neon project to a GitHub repository, the integration sets a Neon API key secret and Neon project ID variable in your repository, which are used by your GitHub Actions workflow to interact with your Neon project. **Note**: The [sample GitHub Actions workflow](https://neon.com/docs/guides/neon-github-integration#add-the-github-actions-workflow-to-your-repository) we provide is intended as a basic template you can expand on or customize to build your own workflows. This guide walks you through the following steps: - Installing the GitHub App - Connecting a Neon project to a GitHub repository - Adding the sample GitHub Actions workflow to your repository ## Prerequisites - You have a Neon account and project. If not, see [Sign up for a Neon account](https://neon.com/docs/get-started/signing-up). - You have a GitHub account with an application repository that you want to connect to your Neon project. ## Install the GitHub App and connect your Neon project To get started: 1. In the Neon Console, navigate to the **Integrations** page in your Neon project. 2. Locate the **GitHub** card and click **Add**. 3. On the **GitHub** drawer, click **Install GitHub App**. 4. If you have more than one GitHub account, select the account where you want to install the GitHub app. 5. Select whether to install and authorize the GitHub app for **All repositories** in your GitHub account or **Only select repositories**. - Selecting **All repositories** authorizes the app on all repositories in your GitHub account, meaning that you can connect your Neon project to any of them.
- Selecting **Only select repositories** authorizes the app on one or more repositories, meaning that you can only connect your Neon project to the selected repositories (you can authorize additional repositories later if you need to). 6. If you authorized the app on **All repositories** or multiple repositories, select a GitHub repository to connect to the current Neon project, and click **Connect**. If you authorized the GitHub app on a single GitHub repository, you have already completed this step. You are directed to the **Actions** tab on the final page of the setup, where a sample GitHub Actions workflow is provided. You can copy this workflow to your GitHub repository to establish a basic database branching process. For instructions, see [Add the GitHub Actions workflow to your repository](https://neon.com/docs/guides/neon-github-integration#add-the-github-actions-workflow-to-your-repository). ## Add the GitHub Actions workflow to your repository The sample GitHub Actions workflow includes: - A [Create branch action](https://neon.com/docs/guides/branching-github-actions#create-branch-action) that creates a new Neon branch in your Neon project when you open or reopen a pull request in the connected GitHub repository. - Code that you can uncomment to add a database migration command to your workflow. - Code that you can uncomment to add a [Schema diff action](https://neon.com/docs/guides/branching-github-actions#schema-diff-action) that diffs database schemas and posts the diff as a comment in your pull request. - A [Delete branch action](https://neon.com/docs/guides/branching-github-actions#delete-branch-action) that deletes the Neon branch from your Neon project when you close the pull request.

```yaml
name: Create/Delete Branch for Pull Request

on:
  pull_request:
    types:
      - opened
      - reopened
      - synchronize
      - closed

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

jobs:
  setup:
    name: Setup
    outputs:
      branch: ${{ steps.branch_name.outputs.current_branch }}
    runs-on: ubuntu-latest
    steps:
      - name: Get branch name
        id: branch_name
        uses: tj-actions/branch-names@v8

  create_neon_branch:
    name: Create Neon Branch
    outputs:
      db_url: ${{ steps.create_neon_branch.outputs.db_url }}
      db_url_with_pooler: ${{ steps.create_neon_branch.outputs.db_url_with_pooler }}
    needs: setup
    if: |
      github.event_name == 'pull_request' &&
      (github.event.action == 'synchronize' ||
      github.event.action == 'opened' ||
      github.event.action == 'reopened')
    runs-on: ubuntu-latest
    steps:
      - name: Create Neon Branch
        id: create_neon_branch
        uses: neondatabase/create-branch-action@v5
        with:
          project_id: ${{ vars.NEON_PROJECT_ID }}
          branch_name: preview/pr-${{ github.event.number }}-${{ needs.setup.outputs.branch }}
          api_key: ${{ secrets.NEON_API_KEY }}

      # The step above creates a new Neon branch.
      # You may want to do something with the new branch, such as run migrations, run tests
      # on it, or send the connection details to a hosting platform environment.
      # The branch DATABASE_URL is available to you via:
      # "${{ steps.create_neon_branch.outputs.db_url_with_pooler }}".
      # It's important that you don't log the DATABASE_URL as output, as it contains a username and
      # password for your database.
      #
      # For example, you can uncomment the lines below to run a database migration command:
      # - name: Run Migrations
      #   run: npm run db:migrate
      #   env:
      #     DATABASE_URL: "${{ steps.create_neon_branch.outputs.db_url_with_pooler }}"
      #
      # You can also add a Schema Diff action to compare the database schema on the new
      # branch with the base branch. This action automatically writes the schema differences
      # as a comment on your GitHub pull request, making it easy to review changes.
      # Following the step above, which runs database migrations, you may want to check
      # for schema changes in your database. We recommend using the following action to
      # post a comment to your pull request with the schema diff. For this action to work,
      # you also need to give permissions to the workflow job to be able to post comments
      # and read your repository contents. Add the following permissions to the workflow job:
      #
      # permissions:
      #   contents: read
      #   pull-requests: write
      #
      # You can also check out https://github.com/neondatabase/schema-diff-action for more
      # information on how to use the schema diff action.
      # You can uncomment the lines below to enable the schema diff action.
      # - name: Post Schema Diff Comment to PR
      #   uses: neondatabase/schema-diff-action@v1
      #   with:
      #     project_id: ${{ vars.NEON_PROJECT_ID }}
      #     compare_branch: preview/pr-${{ github.event.number }}-${{ needs.setup.outputs.branch }}
      #     api_key: ${{ secrets.NEON_API_KEY }}

  delete_neon_branch:
    name: Delete Neon Branch
    needs: setup
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - name: Delete Neon Branch
        uses: neondatabase/delete-branch-action@v3
        with:
          project_id: ${{ vars.NEON_PROJECT_ID }}
          branch: preview/pr-${{ github.event.number }}-${{ needs.setup.outputs.branch }}
          api_key: ${{ secrets.NEON_API_KEY }}
```

**Tip**: The step outputs from the `create_neon_branch` action will only be available within the same job (`create_neon_branch`). Therefore, write all test code, migrations, and related steps in that job itself. The outputs are marked as secrets, so they are not passed between jobs. If you need separate jobs, refer to [GitHub's documentation on workflow commands](https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-commands#workflow) for patterns on how to handle this. To add the workflow to your repository: 1. In your repository, create a workflow file in the `.github/workflows` directory; for example, create a file named `neon-workflow.yml`. - If the `.github/workflows` directory already exists, add the file. - If your repository doesn't have a `.github/workflows` directory, add the file `.github/workflows/neon-workflow.yml`. This creates the `.github` and `workflows` directories and the `neon-workflow.yml` file. If you need more help with this step, see [Creating your first workflow](https://docs.github.com/en/actions/quickstart#creating-your-first-workflow), in the _GitHub documentation_. **Note**: For GitHub to discover GitHub Actions workflows, you must save the workflow files in a directory called `.github/workflows` in your repository. You can name the workflow file whatever you like, but you must use `.yml` or `.yaml` as the file name extension. 2. Copy the workflow code into your `neon-workflow.yml` file. 3. Commit your changes. ### Using the GitHub Actions workflow To use the sample workflow, create a pull request in your GitHub application repository. This will trigger the `Create Neon Branch` action. You can verify that a branch was created on the **Branches** page in the Neon Console. You should see a new branch with a `preview/pr-` name prefix. Closing the pull request removes the Neon branch from the Neon project, which you can also verify on the **Branches** page in the Neon Console.
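If you prefer the terminal, you can also check for the new branch with the Neon CLI. A minimal sketch, assuming the CLI is installed and authenticated, with your own project ID substituted:

```bash
# Preview branches created by the workflow appear with the preview/pr- prefix
neon branches list --project-id <project-id>
```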
To view workflow results in GitHub, follow the instructions in [Viewing your workflow results](https://docs.github.com/en/actions/quickstart#viewing-your-workflow-results), in the _GitHub documentation_. ## Building your own GitHub Actions workflow The sample workflow provided by the GitHub integration serves as a template, which you can expand on or customize. The workflow uses Neon's create branch, delete branch, and schema diff GitHub Actions, which you can find here: - [Create a Neon Branch](https://github.com/neondatabase/create-branch-action) - [Delete a Neon Branch](https://github.com/neondatabase/delete-branch-action) - [Schema Diff](https://github.com/neondatabase/schema-diff-action) Neon also offers a [Reset a Neon Branch](https://github.com/neondatabase/reset-branch-action) action that allows you to reset a database branch to match the current state of its parent branch. This action is useful in a feature-development workflow, where you may need to reset a development branch to the current state of your production branch before beginning work on a new feature. To incorporate the reset action into your workflow, you can use code like this, tailored to your specific requirements:

```yaml
reset_neon_branch:
  name: Reset Neon Branch
  needs: setup
  if: |
    contains(github.event.pull_request.labels.*.name, 'Reset Neon Branch') &&
    github.event_name == 'pull_request' &&
    (github.event.action == 'synchronize' ||
    github.event.action == 'opened' ||
    github.event.action == 'reopened' ||
    github.event.action == 'labeled')
  runs-on: ubuntu-latest
  steps:
    - name: Reset Neon Branch
      uses: neondatabase/reset-branch-action@v1
      with:
        project_id: ${{ vars.NEON_PROJECT_ID }}
        parent: true
        branch: preview/pr-${{ github.event.number }}-${{ needs.setup.outputs.branch }}
        api_key: ${{ secrets.NEON_API_KEY }}
```

You can integrate Neon's GitHub Actions into your workflow, develop custom actions, or combine Neon's actions with those from other platforms or services. If you're new to GitHub Actions and workflows, GitHub's [Quickstart for GitHub Actions](https://docs.github.com/en/actions/quickstart) is a good place to start. ## Example applications with GitHub Actions workflows The following example applications utilize GitHub Actions workflows to create and delete branches in Neon. These examples can serve as references when building your own workflows. **Note**: The Neon GitHub integration configures a `NEON_API_KEY` secret and a `NEON_PROJECT_ID` variable in your GitHub repository. Depending on the specific example application, additional or different variables and secrets may have been used. As you develop your workflows, you might also need to incorporate various other variables and secrets.
- [Automated Database Branching with GitHub Actions](https://neon.com/guides/neon-github-actions-authomated-branching): Learn how to automate database branching for your application using Neon and GitHub Actions - [Preview branches with Cloudflare Pages](https://github.com/neondatabase/preview-branches-with-cloudflare): Demonstrates using GitHub Actions workflows to create a Neon branch for every Cloudflare Pages preview deployment - [Preview branches with Vercel](https://github.com/neondatabase/preview-branches-with-vercel): Demonstrates using GitHub Actions workflows to create a Neon branch for every Vercel preview deployment - [Preview branches with Fly.io](https://github.com/neondatabase/preview-branches-with-fly): Demonstrates using GitHub Actions workflows to create a Neon branch for every Fly.io preview deployment - [Neon Twitter app](https://github.com/neondatabase/neon_twitter): Demonstrates using GitHub Actions workflows to create a Neon branch for schema validation and perform migrations ## Connect more Neon projects with the GitHub App If you've installed the GitHub app previously, it's available to use with any project in your Neon account. To connect another Neon project to a GitHub repository: 1. In the Neon Console, navigate to the **Integrations** page in your Neon project. 2. Locate the **GitHub** integration and click **Add**. 3. Select a GitHub repository to connect to your Neon project, and click **Connect**. **Note**: Connecting to the same GitHub repository from different Neon projects is not supported. ## Secret and variable set by the GitHub integration When connecting a Neon project to a GitHub repository, the GitHub integration performs the following actions: - Generates a Neon API key for your Neon account - Creates a `NEON_API_KEY` secret in your GitHub repository - Adds a `NEON_PROJECT_ID` variable to your GitHub repository The `NEON_API_KEY` allows you to run any [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) method or [Neon CLI](https://neon.com/docs/reference/neon-cli) command, which means you can develop actions and workflows that create, update, and delete various objects in Neon such as projects, branches, databases, roles, and computes. The `NEON_PROJECT_ID` variable defines the Neon project that is connected to the repository. Operations run on Neon via the Neon API or CLI typically require specifying the Neon project ID, as a Neon account may have more than one Neon project. The sample GitHub Actions workflow provided by the Neon GitHub integration depends on these variables and secrets to perform actions in Neon. **Note**: The variables and secrets are removed if you disconnect a Neon project from the associated GitHub repository. The items are removed for all Neon projects and associated repositories if you remove the Neon GitHub integration from your Neon account. See [Remove the GitHub integration](https://neon.com/docs/guides/neon-github-integration#remove-the-github-integration). ### Neon API key To view the Neon API key created by the integration: 1. In the [Neon Console](https://console.neon.tech), click your profile at the top right corner of the page. 2. Select **Account settings**. 3. Select **API keys**. The API key created by the integration should be listed with a name similar to the following: **API key for GitHub (cool-darkness-12345678)**. You cannot view the key itself, only the name it was given, the time it was created, and when the key was last used. 
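Because this is a standard Neon API key, you can exercise it anywhere a key is accepted. As a quick sketch, assuming the Neon CLI is installed and `NEON_API_KEY` is exported in your shell:

```bash
# Pass the integration's key explicitly rather than using stored CLI credentials
neon projects list --api-key $NEON_API_KEY
```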
### Neon project ID variable and Neon API key secret To view the variable containing your Neon project ID: 1. Navigate to your GitHub account page. 2. From your GitHub profile menu, select **Your repositories**. 3. Select the repository that you chose when installing the Neon GitHub integration. 4. On the repository page, select the **Settings** tab. 5. Select **Secrets and variables** > **Actions** from the sidebar. Your `NEON_API_KEY` secret is listed on the **Secrets** tab, and the `NEON_PROJECT_ID` variable is listed on the **Variables** tab. ## Disconnect a Neon project from a GitHub repository Disconnecting a Neon project from a GitHub repository performs the following actions for the Neon project: - Removes the Neon API key created for this integration from your Neon account. - Removes the GitHub secret containing the Neon API key from the associated GitHub repository. - Removes the GitHub variable containing your Neon project ID from the associated GitHub repository. Any GitHub Actions workflows you've added to the GitHub repository that are dependent on these secrets and variables will no longer work. To disconnect your Neon project: 1. In the Neon Console, navigate to the **Integrations** page for your project. 2. Locate the GitHub integration and click **Manage** to open the **GitHub integration** drawer. 3. Click **Disconnect**. ## Remove the GitHub integration Removing the GitHub integration performs the following actions for all Neon projects that you connected to a GitHub repository using the GitHub integration: - Removes the Neon API keys created for Neon-GitHub integrations from your Neon account. - Removes GitHub secrets containing the Neon API keys from the associated GitHub repositories. - Removes the GitHub variables containing your Neon project IDs from the associated GitHub repositories. Any GitHub Actions workflows you've added to GitHub repositories that are dependent on these secrets and variables will no longer work. To remove the GitHub integration: 1. In the Neon Console, navigate to your account Profile. 2. Select **Account settings**. 3. Select **Integrations**. 4. Click **Remove**. ## Resources - [Creating GitHub Actions](https://docs.github.com/en/actions/creating-actions) - [Quickstart for GitHub Actions](https://docs.github.com/en/actions/quickstart) - [Database Branching Workflows](https://neon.com/branching) - [Database branching workflow guide for developers](https://neon.com/blog/database-branching-workflows-a-guide-for-developers) ## Feedback and future improvements If you've got feature requests or feedback about what you'd like to see from the Neon GitHub integration, let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord. --- # Source: https://neon.com/llms/guides-neon-managed-vercel-integration.txt # Connecting with the Neon-Managed Integration > The document outlines the process for connecting to a Neon database using the Neon-managed integration with Vercel, detailing configuration steps and necessary settings for seamless integration.
## Source - [Connecting with the Neon-Managed Integration HTML](https://neon.com/docs/guides/neon-managed-vercel-integration): The original HTML version of this documentation What you will learn: - [The purpose of the Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration#about-this-integration) - [How to install it from Connectable Accounts](https://neon.com/docs/guides/neon-managed-vercel-integration#installation-steps) - [How automated Preview Branching works](https://neon.com/docs/guides/neon-managed-vercel-integration#how-preview-branching-works) - [How to manage environment variables and branch cleanup](https://neon.com/docs/guides/neon-managed-vercel-integration#managing-the-integration) Related topics: - [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration) - [Manual Connections](https://neon.com/docs/guides/vercel-manual) --- ## About this integration The **Neon-Managed Integration** links your existing Neon project to a Vercel project while keeping billing in Neon. Instead of sharing a single database across all preview deployments, this integration creates an isolated database branch for each preview deployment. **Key features:** - One-click connection from Vercel Marketplace - Automatic database branches for each preview deployment - Environment variable injection (`DATABASE_URL`, `DATABASE_URL_UNPOOLED`, `PG*` variables) - Automatic cleanup when branches are deleted **Note** Who should use this integration?: Choose the Neon-Managed Integration if you already have a Neon account/project or prefer to manage billing directly with Neon. --- ## Prerequisites Before you begin, ensure you have: - A Neon account with at least one project and database role - A Vercel account with a project linked to GitHub, GitLab, or Bitbucket --- ## Installation steps ## Connect from Neon Console In the [Neon Console](https://console.neon.tech), navigate to **Integrations** and click **Add** under Vercel. Click **Install from Vercel Marketplace** to open the integration in Vercel. ## Add the integration in Vercel On the Vercel page, click **Install**. This opens the **Install Neon** modal where you can choose between two options. Select **Link Existing Neon Account**, then click **Continue**. **Tip**: Alternatively, if you're accessing this directly from the Vercel Marketplace, locate the **Connectable Accounts** section, find **Neon**, and click **Add**. This differs from the **Native Integrations** section in the Vercel Marketplace. ## Configure the connection Choose which Vercel account and projects can use this integration. Each Neon project connects to exactly one Vercel project. Selecting **All projects** makes the integration available to other Vercel projects. ## Set up project integration In the **Integrate Neon** dialog: 1. **Select your Vercel project** 2. **Choose your Neon project, database, and role** 3. **Configure optional settings:** - Enable **Create a branch for your development environment** to create a persistent `vercel-dev` branch and set Vercel development environment variables for it. The `vercel-dev` branch is a clone of your project's default branch (`main`) that you can modify without affecting data on your default branch. - Enable **Automatically delete obsolete Neon branches** (recommended) to clean up branches when git branches are deleted. 4. 
Click **Connect**, then **Done** ### What happens after installation Once connected successfully, you'll see: **In Neon Console:** - A `vercel-dev` branch (if enabled) under **Branches** - Future preview branches will appear here automatically **In Vercel:** - `DATABASE_URL` and other environment variables under **Settings → Environment Variables** --- ## How Preview Branching works The integration automatically creates isolated database environments for each preview deployment: ## Developer pushes to feature branch When you push commits to a feature branch, Vercel triggers a preview deployment. ## Integration creates Neon branch The integration receives a webhook from Vercel and creates a new Neon branch named `preview/` using the Neon API. ## Environment variables injected Vercel receives the new connection string and injects it as environment variables for that specific deployment only. This isolation allows you to test data and schema changes safely in each pull request. To apply schema changes automatically, add migration commands to your Vercel build configuration: 1. Go to **Vercel Dashboard → Settings → Build and Deployment Settings** 2. Enable **Override** and add your build commands, including migrations, for example: ```bash npx prisma migrate deploy && npm run build ``` This ensures schema changes in your commits are applied to each preview deployment's database branch. --- ## Managing the integration ### Environment variables The integration sets both modern (`DATABASE_URL`, `DATABASE_URL_UNPOOLED`) and legacy PostgreSQL variables (`POSTGRES_URL`, `PGHOST`, etc.) for Production and Development environments. Preview variables are injected dynamically per deployment. - `DATABASE_URL`: Pooled connection (recommended for most applications) - `DATABASE_URL_UNPOOLED`: Direct connection (for tools requiring direct database access) **To customize which variables are used:** 1. Go to **Neon Console → Integrations → Manage → Settings** 2. Select the variables you want (e.g., `PGHOST`, `PGUSER`, etc.) 3. Click **Save changes** ### Branch cleanup **Automatic cleanup (recommended):** Enable **Automatically delete obsolete Neon branches** during setup to remove preview branches automatically when the corresponding Git branch is deleted. **Note**: This Git-branch-based cleanup differs from the [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration), which deletes branches when deployments are deleted (either manually or automatically via Vercel's retention policies). **Manual cleanup:** If needed, you can delete branches manually: - **Individual branches:** Neon Console → Integrations → Manage → Branches → trash icon - **Bulk delete:** Use **Delete all** in the same interface - **API/CLI:** Use Neon CLI or API for programmatic cleanup **Warning** Important cleanup considerations: - **Don't rename branches:** Renaming either the Git branch or Neon branch breaks name-matching logic and may cause unintended deletions - **Avoid child branches:** Creating child branches on preview branches prevents automatic deletion - **Role dependency:** The integration depends on the selected role - removing it will break the integration ### Disconnect integration To disconnect the integration: **Neon Console → Integrations → Manage → Disconnect**. This stops creating new preview branches but doesn't remove existing branches or the integration from Vercel. 
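For the **API/CLI** manual cleanup route mentioned above, a minimal sketch using the Neon CLI (assuming the CLI is installed and authenticated; the branch name and project ID are illustrative):

```bash
# Delete one obsolete preview branch by name
neon branches delete preview/my-feature --project-id <project-id>
```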
--- ## Limitations - **One-to-one relationship:** Each Neon project connects to exactly one Vercel project - **Integration exclusivity:** Cannot coexist with the Vercel-Managed Integration in the same Vercel project - **Role dependency:** Integration requires the selected PostgreSQL role to remain active --- ## Next steps ## After Installation - [ ] [Test preview branching](https://neon.com/docs/guides/neon-managed-vercel-integration#how-preview-branching-works) Create a feature branch and push changes to verify preview deployments work correctly - [ ] [Configure build commands](https://neon.com/docs/guides/neon-managed-vercel-integration#how-preview-branching-works) Add migration commands to Vercel's build settings if using an ORM like Prisma - [ ] [Set up branch cleanup](https://neon.com/docs/guides/neon-managed-vercel-integration#branch-cleanup) Enable automatic cleanup or establish a manual cleanup process - [ ] [Customize environment variables](https://neon.com/docs/guides/neon-managed-vercel-integration#environment-variables) Review and adjust which database variables are injected into your deployments --- ## Troubleshooting ### Environment variable conflicts If you see "Failed to set environment variables" during setup, remove conflicting variables in Vercel first: 1. Go to **Vercel → Settings → Environment Variables** 2. Remove or rename existing `DATABASE_URL`, `PGHOST`, `PGUSER`, `PGDATABASE`, or `PGPASSWORD` variables 3. Retry the integration setup ### Integration stops working **Issue:** Preview branches no longer created **Cause:** The PostgreSQL role selected during setup was deleted **Solution:** Reinstall the integration with a valid role, or change the role in **Neon Console → Integrations → Manage → Settings** --- # Source: https://neon.com/llms/guides-neon-private-networking.txt # Neon Private Networking > The Neon Private Networking documentation outlines the setup and configuration of private networking within Neon, enabling secure and isolated network environments for database instances. ## Source - [Neon Private Networking HTML](https://neon.com/docs/guides/neon-private-networking): The original HTML version of this documentation **Coming soon** Private Networking availability: Private Networking is available on Neon's [Scale](https://neon.com/docs/introduction/plans#scale) plan. If you're on a different plan, you can request a trial from the **Network Security** page in your project's settings. The **Neon Private Networking** feature enables secure connections to your Neon databases via [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html), bypassing the open internet for enhanced security. ## Overview In a standard setup, the client application connects to a Neon database over the open internet via the Neon proxy. With **Neon Private Networking**, you can connect to your database via AWS PrivateLink instead of the open internet. In this setup, the client application connects through an [AWS endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html) (provided by Neon) to a Neon proxy instance that is not accessible from the public internet. This endpoint service is available only within the same AWS region as your client application. With **Neon Private Networking**, all traffic between the client application and the Neon database stays within AWS's private network, rather than crossing the public internet.
## Prerequisites - You must be a Neon [Business](https://neon.com/docs/introduction/plans#business) or [Scale](https://neon.com/docs/introduction/plans#scale) plan user, and your user account must be a [Neon organization](https://neon.com/docs/manage/organizations) Admin account. You'll encounter an access error if you attempt the setup from a personal Neon account or on a Neon plan that does not offer Private Networking. - **Ensure that your client application is deployed on AWS in the same region as the Neon database you plan to connect to.** The Private Networking feature is available in all [Neon-supported AWS regions](https://neon.com/docs/introduction/regions#aws-regions). Both your client application and your Neon database must be in one of these regions. - Neon Private Networking supports both [IPv4](https://en.wikipedia.org/wiki/Internet_Protocol_version_4) and [IPv6](https://en.wikipedia.org/wiki/IPv6). - Install the Neon CLI. You will use it to add your VPC endpoint ID to your Neon organization. For installation instructions, see [Neon CLI — Install and connect](https://neon.com/docs/reference/cli-install). ## Configuration steps To configure Neon Private Networking, perform the following steps: ## Create an AWS VPC endpoint **Important**: Do not enable **private DNS names** for the VPC endpoint until [Step 3](https://neon.com/docs/guides/neon-private-networking#enable-private-dns). You must add the VPC endpoint to your Neon organization first, as described in [Step 2](https://neon.com/docs/guides/neon-private-networking#add-your-vpc-endpoint-id-to-your-neon-organization). 1. Go to the AWS **VPC > Endpoints** dashboard and select **Create endpoint**. Make sure you create the endpoint in the same VPC as your client application. 1. Optionally, enter a **Name tag** for the endpoint (e.g., `My Neon Private Networking`). 1. For **Type**, select the **Endpoint services that use NLBs and GWLBs** category. 1. Under **Service settings**, specify the **Service name**. Some regions require specifying two or three service names, and service names vary by region: - **us-east-1**: Create three entries, one for each of the following: - `com.amazonaws.vpce.us-east-1.vpce-svc-0de57c578b0e614a9` - `com.amazonaws.vpce.us-east-1.vpce-svc-02a0abd91f32f1ed7` - `com.amazonaws.vpce.us-east-1.vpce-svc-0f37140e9710ee3af` - **us-east-2**: Create two entries, one for each of the following: - `com.amazonaws.vpce.us-east-2.vpce-svc-010736480bcef5824` - `com.amazonaws.vpce.us-east-2.vpce-svc-0465c21ce8ba95fb2` - **eu-central-1**: Create two entries, one for each of the following: - `com.amazonaws.vpce.eu-central-1.vpce-svc-05554c35009a5eccb` - `com.amazonaws.vpce.eu-central-1.vpce-svc-05a252e6836f01cfd` - **eu-west-2**: - `com.amazonaws.vpce.eu-west-2.vpce-svc-0c6fedbe99fced2cd` - **us-west-2**: Create two entries, one for each of the following: - `com.amazonaws.vpce.us-west-2.vpce-svc-060e0d5f582365b8e` - `com.amazonaws.vpce.us-west-2.vpce-svc-07b750990c172f22f` - **ap-southeast-1**: - `com.amazonaws.vpce.ap-southeast-1.vpce-svc-07c68d307f9f05687` - **ap-southeast-2**: - `com.amazonaws.vpce.ap-southeast-2.vpce-svc-031161490f5647f32` - **sa-east-1**: - `com.amazonaws.vpce.sa-east-1.vpce-svc-061204a851dbd1a47` 1. Click **Verify service**. If successful, you should see a `Service name verified` message. If not successful, ensure that your service name matches the region where you're creating the VPC endpoint. 1. Select the VPC where your application is deployed. 1.
Add the availability zones and associated subnets you want to support. 1. Click **Create endpoint** to complete the setup of the endpoint service. 1. Note your **VPC Endpoint ID**. You will need it in the next step. ## Add your VPC Endpoint ID to your Neon organization Assign your **VPC Endpoint ID** to your Neon organization. If the region has multiple **Service Names**, assign all of the corresponding **VPC Endpoint IDs**. You can do this using the Neon CLI or API. **Note**: You must assign the **VPC Endpoint ID**, not the VPC ID. Tab: CLI In the following example, the VPC endpoint ID is assigned to a Neon organization in the specified AWS region using the [neon vpc endpoint](https://neon.com/docs/reference/cli-vpc#the-vpc-endpoint-subcommand) command.

```bash
neon vpc endpoint assign vpce-1234567890abcdef0 --org-id org-bold-bonus-12345678 --region-id aws-us-east-2
```

You can find your Neon organization ID in your Neon organization settings, or you can run this Neon CLI command: `neon orgs list` Tab: API You can use the [Assign or update a VPC endpoint](https://api-docs.neon.tech/reference/assignorganizationvpcendpoint) API to assign a VPC endpoint ID to a Neon organization. You will need to provide your Neon organization ID, region ID, VPC endpoint ID, and a [Neon API key](https://neon.com/docs/manage/api-keys).

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/organizations/org-bold-bonus-12345678/vpc/region/aws-us-east-2/vpc_endpoints/vpce-1234567890abcdef0 \
  --header 'accept: application/json' \
  --header "authorization: Bearer $NEON_API_KEY" \
  --header 'content-type: application/json'
```

Optionally, you can limit access to a Neon project by allowing connections only from a specific VPC endpoint. For instructions, see [Assigning a VPC endpoint restriction](https://neon.com/docs/guides/neon-private-networking#assigning-a-vpc-endpoint-restriction). ## Enable Private DNS After adding your VPC endpoint ID to your Neon organization, enable private DNS lookup for the VPC endpoint in AWS. 1. In AWS, select the VPC endpoint you created. 1. Choose **Modify private DNS name**. 1. Select **Enable for this endpoint**. 1. Save your changes. ## Check your database connection string Your Neon database connection string does not change when using Private Networking. To verify that your connection is working correctly, you can perform a DNS lookup on your Neon endpoint hostname from within your AWS VPC. It should resolve to the private IP address of the VPC endpoint. For example, if your Neon database connection string is:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

You can run the following command from an EC2 instance inside your AWS VPC:

```bash
nslookup ep-cool-darkness-123456.us-east-2.aws.neon.tech
```

## Restrict public internet access At this point, it's still possible to connect to a database in your Neon project over the public internet using a database connection string. You can restrict public internet access to your Neon project via the Neon CLI or API. Tab: CLI To block access via the Neon CLI, use the [neon projects update](https://neon.com/docs/reference/cli-projects#update) command with the `--block-public-connections` option.

```bash
neon projects update orange-credit-12345678 --block-public-connections true
```

In the example above, `orange-credit-12345678` is the Neon project ID.
You can find _your_ Neon project ID under your project's settings in the Neon Console, or by running this Neon CLI command: `neon projects list` Tab: API To block access via the Neon API, use the [Update project](https://api-docs.neon.tech/reference/updateproject) endpoint with the `block_public_connections` settings object attribute.

```bash
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/orange-credit-12345678 \
  --header 'accept: application/json' \
  --header "authorization: Bearer $NEON_API_KEY" \
  --header 'content-type: application/json' \
  --data '
{
  "project": {
    "settings": {
      "block_public_connections": true
    }
  }
}
'
```

## Assigning a VPC endpoint restriction You can limit access to a Neon project by allowing connections only from specified VPC endpoints. Use the Neon CLI or API to set a restriction. Tab: CLI You can specify a CLI command similar to the following to restrict project access:

```bash
neon vpc project restrict vpce-1234567890abcdef0 --project-id orange-credit-12345678
```

You will need to provide the VPC endpoint ID and your Neon project ID. If the region has multiple **Service Names**, all **VPC Endpoint IDs** must be restricted in the same way as above. You can find your Neon project ID under your project's settings in the Neon Console, or by running this Neon CLI command: `neon projects list` After adding a restriction, you can check the status of the VPC endpoint to view the restricted project using the [vpc endpoint status command](https://neon.com/docs/reference/cli-vpc#the-vpc-endpoint-subcommand). You will need to provide your VPC endpoint ID, region ID, and Neon organization ID.

```bash
neon vpc endpoint status vpce-1234567890abcdef0 --region-id=aws-eu-central-1 --org-id=org-nameless-block-72040075
┌────────────────────────┬───────┬─────────────────────────┬─────────────────────────────┐
│ Vpc Endpoint Id        │ State │ Num Restricted Projects │ Example Restricted Projects │
├────────────────────────┼───────┼─────────────────────────┼─────────────────────────────┤
│ vpce-1234567890abcdef0 │ new   │ 1                       │ orange-credit-12345678      │
└────────────────────────┴───────┴─────────────────────────┴─────────────────────────────┘
```

Tab: API The Neon API supports managing project restrictions using the [Assign or update a VPC endpoint restriction](https://api-docs.neon.tech/reference/assignprojectvpcendpoint) endpoint. You will need to provide your VPC endpoint ID, Neon project ID, and a [Neon API key](https://neon.com/docs/manage/api-keys).

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/orange-credit-12345678/vpc_endpoints/vpce-1234567890abcdef0 \
  --header 'accept: application/json' \
  --header "authorization: Bearer $NEON_API_KEY" \
  --header 'content-type: application/json' \
  --data '{"label":"my_vpc"}'
```

After adding a restriction, you can check the status of the VPC endpoint to view the restricted project using the [Retrieve VPC endpoint details](https://api-docs.neon.tech/reference/getorganizationvpcendpointdetails) API. You will need to provide your VPC endpoint ID, region ID, Neon organization ID, and a Neon API key.

```bash
curl --request GET \
  --url https://console.neon.tech/api/v2/organizations/org-nameless-block-72040075/vpc/region/aws-eu-central-1/vpc_endpoints/vpce-1234567890abcdef0 \
  --header 'accept: application/json' \
  --header "authorization: Bearer $NEON_API_KEY"
```

## Managing Private Networking using the Neon CLI You can use the Neon CLI `vpc` command to manage Private Networking configurations in Neon.
The `vpc` command includes `endpoint` and `project` subcommands for managing VPC endpoints and project-level VPC endpoint restrictions: - **`vpc endpoint`** – List, assign, remove, and retrieve the status of VPC endpoints for a Neon organization. - **`vpc project`** – List, configure, or remove VPC endpoint restrictions for specific Neon projects. For more details and examples, see [Neon CLI commands — vpc](https://neon.com/docs/reference/cli-vpc). ## Managing Private Networking using the Neon API The Neon API provides endpoints for managing VPC endpoints and project-level VPC endpoint restrictions: ### APIs for managing VPC endpoints - [List VPC endpoints](https://api-docs.neon.tech/reference/listorganizationvpcendpoints) - [Assign or update a VPC endpoint](https://api-docs.neon.tech/reference/assignorganizationvpcendpoint) - [Retrieve VPC endpoint configuration details](https://api-docs.neon.tech/reference/getorganizationvpcendpointdetails) - [Delete a VPC endpoint](https://api-docs.neon.tech/reference/deleteorganizationvpcendpoint) ### APIs for managing VPC endpoint restrictions - [Get VPC endpoint restrictions](https://api-docs.neon.tech/reference/listprojectvpcendpoints) - [Assign or update a VPC endpoint restriction](https://api-docs.neon.tech/reference/assignprojectvpcendpoint) - [Delete a VPC endpoint restriction](https://api-docs.neon.tech/reference/deleteprojectvpcendpoint) ## Private Networking limits The Private Networking feature supports a maximum of **10 private networking configurations per AWS region**. Supported AWS regions are listed [above](https://neon.com/docs/guides/neon-private-networking#create-an-aws-vpc-endpoint). ## Limitations If you remove a VPC endpoint from a Neon organization, that VPC endpoint cannot be added back to the same Neon organization. Attempting to do so will result in an error. In this case, you must set up a new VPC endpoint. --- # Source: https://neon.com/llms/guides-neon-twin-full-pg-dump-restore.txt # Full Twin > The "Full Twin" documentation guides Neon users on how to create a complete replica of a PostgreSQL database using the pg_dump and restore process, ensuring data consistency and redundancy. ## Source - [Full Twin HTML](https://neon.com/docs/guides/neon-twin-full-pg-dump-restore): The original HTML version of this documentation This workflow will create a full Neon Twin using `pg_dump` and `pg_restore`. **Note**: To use this workflow, you'll need the Postgres connection string for your Neon database. Follow our [Getting Started Guide](https://neon.com/docs/get-started/signing-up#sign-up) to learn how. ## Create the workflow To create the Twin workflow in any GitHub-hosted repository: 1. Create a new directory named `.github` at the root of your project. 2. Inside this directory, create another directory named `workflows`. 3. Within the `workflows` directory, create a new file named `create-neon-twin.yml`.

```
.github
 |-- workflows
     |-- create-neon-twin.yml
```

Add the following code to `create-neon-twin.yml`.
```yml
name: Create Neon Twin

on:
  schedule:
    - cron: '0 0 * * *' # Runs at midnight UTC
  workflow_dispatch:

env:
  PROD_DATABASE_URL: ${{ secrets.PROD_DATABASE_URL }} # Production or primary database
  DEV_DATABASE_URL: ${{ secrets.DEV_DATABASE_URL }} # Development database
  PG_VERSION: '17'

jobs:
  dump-and-restore:
    runs-on: ubuntu-latest
    steps:
      - name: Install PostgreSQL
        run: |
          sudo apt update
          yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
          sudo apt install -y postgresql-${{ env.PG_VERSION }}

      - name: Set PostgreSQL binary path
        run: echo "POSTGRES=/usr/lib/postgresql/${{ env.PG_VERSION }}/bin" >> $GITHUB_ENV

      - name: Dump and restore data
        run: |
          $POSTGRES/pg_dump -Fc -f "${{ github.workspace }}/dump-file.bak" "${{ env.PROD_DATABASE_URL }}"
          $POSTGRES/pg_restore --clean --no-owner --no-acl --if-exists -d "${{ env.DEV_DATABASE_URL }}" "${{ github.workspace }}/dump-file.bak"
```

## GitHub Action explained Below is an explanation of each part of the GitHub Action. ### on - `name`: The name of the Action as it appears in the GitHub UI. - `cron`: The [POSIX cron syntax](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/crontab.html#tag_20_25_07) that defines when the Action will run. - `workflow_dispatch`: Enables manual triggering through the GitHub UI. ### env - `PROD_DATABASE_URL`: The PostgreSQL connection string for your production database. - `DEV_DATABASE_URL`: The PostgreSQL connection string for your Neon database. - `PG_VERSION`: The version of PostgreSQL to install in the Action environment. ### steps - `Install PostgreSQL`: Installs the specified version of PostgreSQL into the Action environment from the [Apt](https://wiki.debian.org/Apt) repository. - `Set PostgreSQL binary path`: Creates `$POSTGRES` variable for use in subsequent steps. - `Dump and restore data`: Uses `pg_dump` to create a `dump-file.bak` and `pg_restore` to read the `dump-file.bak` and apply it to the `DEV_DATABASE_URL`. ### pg_dump flags The table below provides an explanation of each flag used by `pg_dump`.

| Flag | Meaning                                                  |
| ---- | -------------------------------------------------------- |
| -Fc  | Dumps the database in a custom format.                    |
| -f   | Specifies the output file where the dump will be stored.  |

### pg_restore flags The table below provides an explanation of each flag used by `pg_restore`.

| Flag        | Meaning                                                                                                          |
| ----------- | ---------------------------------------------------------------------------------------------------------------- |
| --clean     | Drops existing database objects before recreating them, ensuring a clean restore.                                 |
| --no-owner  | Ignores ownership information in the dump file, so restored objects are owned by the user running the restore.    |
| --no-acl    | Excludes access control (GRANT/REVOKE) statements from the restore, preventing permission changes.                |
| --if-exists | Ensures that `DROP` commands (used with `--clean`) only execute if the object exists, preventing errors.          |
| -d          | Specifies the target database to restore into.                                                                    |

## Setting repository secrets Before running the Action, ensure that both `PROD_DATABASE_URL` and `DEV_DATABASE_URL` are added to your GitHub repository secrets. In your repository, go to **Settings** > **Secrets and variables** > **Actions** to add them. ## Testing the workflow To manually trigger your workflow, go to **Actions** > **Create Neon Twin**, then click **Run workflow**. From the dropdown, click the **Run workflow** button.
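To spot-check the twin after a run, you can compare table listings or row counts between the two databases. A minimal sketch, assuming `psql` is installed locally, the connection strings are exported in your shell, and a `users` table exists (the table name is illustrative):

```bash
# List tables in the twin, then compare a row count against production
psql "$DEV_DATABASE_URL" -c "\dt"
psql "$PROD_DATABASE_URL" -c "SELECT count(*) FROM users;"
psql "$DEV_DATABASE_URL" -c "SELECT count(*) FROM users;"
```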
## Syncing with migration changes The GitHub Action runs on a recurring schedule, but you may also want it to trigger when migration changes are applied and a Pull Request is merged. To enable this, update the Action with the following code: ### Handling Pull Request Events Add a `pull_request` event and configure it to listen for merges into the `main` branch.

```diff
on:
  schedule:
    - cron: '0 0 * * *' # Runs at midnight UTC
  pull_request:
    types: [closed]
    branches:
      - main
  workflow_dispatch:
```

### Add Concurrency and Conditions To prevent conflicts between scheduled runs and runs triggered by a Pull Request, set `cancel-in-progress` to `true` under `concurrency`. Additionally, add an `if` statement to ensure the job only executes when specific conditions are met.

```diff
jobs:
  dump-and-restore:
    runs-on: ubuntu-latest
    concurrency:
      group: 'dump-and-restore'
      cancel-in-progress: true
    if: |
      github.event_name == 'schedule' ||
      github.event_name == 'workflow_dispatch' ||
      (github.event_name == 'pull_request' && github.event.pull_request.merged == true)
```

## Limitations Be aware of [usage limits](https://docs.github.com/en/actions/administering-github-actions/usage-limits-billing-and-administration#usage-limits): Each GitHub Action job can run for up to 6 hours. If a job exceeds this limit, it will be terminated and fail to complete. If your dump/restore process takes longer, consider using [self-hosted runners](https://neon.com/guides/gihub-actions-self-hosted-runners). ## Further reading - [Neon Twin: Move Dev/Test/Staging to Neon, Keep Production on RDS](https://neon.com/blog/optimizing-dev-environments-in-aws-rds-with-neon-postgres-part-ii-using-github-actions-to-mirror-rds-in-neon) - [Neon Twin: How to deploy a change tested in Neon to prod in RDS](https://neon.com/blog/neon-twin-deploy-workflow) --- # Source: https://neon.com/llms/guides-neon-twin-intro.txt # Create a Neon Twin > The document outlines the process for creating a Neon Twin, which allows users to clone a database environment in Neon for testing and development purposes without affecting the original data. ## Source - [Create a Neon Twin HTML](https://neon.com/docs/guides/neon-twin-intro): The original HTML version of this documentation ## What is a Neon Twin? A Neon Twin is a full or partial clone of your production or staging database, providing developers and teams with isolated, sandboxed environments that closely mirror production. ## Designed for efficiency Creating a Neon Twin will streamline development workflows, enhance productivity, and help teams ship faster—all while being more [cost-effective](https://neon.com/docs/introduction/pricing-estimation-guide) and easier to manage than traditional development/testing environments. ## Automatically synced The workflows in this section enable automatic synchronization between your production database and your Neon Twin. ## Instant Branches With a Neon Twin created, [branches](https://neon.com/docs/introduction/branching) can be quickly spun up or torn down, enabling developers to build new features or debug issues—all within their own isolated environments with a dedicated compute resource. Branches can be created and managed through the [Neon console](https://console.neon.tech/) or programmatically via the [API](https://neon.com/docs/reference/api-reference).
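For example, creating a branch programmatically is a single call to the [Create branch](https://api-docs.neon.tech/reference/createprojectbranch) endpoint. A minimal sketch, where the project ID and branch name are illustrative:

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/<project-id>/branches \
  --header "authorization: Bearer $NEON_API_KEY" \
  --header 'content-type: application/json' \
  --data '{"branch": {"name": "dev/new-feature"}, "endpoints": [{"type": "read_write"}]}'
```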
--- # Source: https://neon.com/llms/guides-neon-twin-partial-pg-dump-restore.txt # Partial Twin > The "Partial Twin" documentation guides Neon users on how to perform partial PostgreSQL database dumps and restores using the Neon platform, enabling efficient data management and recovery processes. ## Source - [Partial Twin HTML](https://neon.com/docs/guides/neon-twin-partial-pg-dump-restore): The original HTML version of this documentation This workflow will create a partial Neon Twin using `pg_dump`, `pg_restore` and `psql`. **Note**: To use this workflow, you'll need the Postgres connection string for your Neon database. Follow our [Getting Started Guide](https://neon.com/docs/get-started/signing-up#sign-up) to learn how. ## Create the workflow To create the Twin workflow in any GitHub-hosted repository: 1. Create a new directory named `.github` at the root of your project. 2. Inside this directory, create another directory named `workflows`. 3. Within the `workflows` directory, create a new file named `create-neon-twin.yml`.

```
.github
 |-- workflows
     |-- create-neon-twin.yml
```

Add the following code to `create-neon-twin.yml`.

```yml
name: Create Neon Twin

on:
  schedule:
    - cron: '0 0 * * *' # Runs at midnight UTC
  workflow_dispatch:

env:
  PROD_DATABASE_URL: ${{ secrets.PROD_DATABASE_URL }} # Production or primary database
  DEV_DATABASE_URL: ${{ secrets.DEV_DATABASE_URL }} # Development database
  PG_VERSION: '17'

jobs:
  dump-and-restore:
    runs-on: ubuntu-latest
    steps:
      - name: Install PostgreSQL
        run: |
          sudo apt update
          yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
          sudo apt install -y postgresql-${{ env.PG_VERSION }}

      - name: Set PostgreSQL binary path
        run: echo "POSTGRES=/usr/lib/postgresql/${{ env.PG_VERSION }}/bin" >> $GITHUB_ENV

      - name: Dump schema
        run: |
          $POSTGRES/pg_dump -Fc --schema-only -f "${{ github.workspace }}/all-schema.bak" "${{ env.PROD_DATABASE_URL }}"

      - name: Dump data
        run: |
          $POSTGRES/psql "${{ env.PROD_DATABASE_URL }}" -c "\copy (SELECT * FROM users ORDER BY user_id DESC LIMIT 50) TO '${{ github.workspace }}/users-subset.csv' WITH CSV HEADER"

      - name: Drop tables and schema
        run: |
          $POSTGRES/psql "${{ env.DEV_DATABASE_URL }}" -c "DROP SCHEMA IF EXISTS public CASCADE;"
          $POSTGRES/psql "${{ env.DEV_DATABASE_URL }}" -c "CREATE SCHEMA public;"

      - name: Restore schema
        run: |
          $POSTGRES/pg_restore --clean --no-owner --no-acl --if-exists --schema-only -d "${{ env.DEV_DATABASE_URL }}" "${{ github.workspace }}/all-schema.bak"

      - name: Restore data
        run: |
          $POSTGRES/psql "${{ env.DEV_DATABASE_URL }}" -c "\copy public.users FROM '${{ github.workspace }}/users-subset.csv' WITH CSV HEADER"
```

## GitHub Action explained Below is an explanation of each part of the GitHub Action. ### on - `name`: The name of the Action as it appears in the GitHub UI. - `cron`: The [POSIX cron syntax](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/crontab.html#tag_20_25_07) that defines when the Action will run. - `workflow_dispatch`: Enables manual triggering through the GitHub UI. ### env - `PROD_DATABASE_URL`: The PostgreSQL connection string for your production database. - `DEV_DATABASE_URL`: The PostgreSQL connection string for your Neon database. - `PG_VERSION`: The version of PostgreSQL to install in the Action environment. ### steps - `Install PostgreSQL`: Installs the specified version of PostgreSQL into the Action environment from the [Apt](https://wiki.debian.org/Apt) repository. - `Set PostgreSQL binary path`: Creates `$POSTGRES` variable for use in subsequent steps.
- `Dump schema`: Exports the database schema (table structures, indexes, constraints) from the production database into a backup file. - `Dump data`: Extracts a subset of data (50 most recent users) from the production database into a CSV file. - `Drop tables and schema`: Completely removes the existing schema in the development database and recreates it to ensure a clean state. - `Restore schema`: Imports the previously dumped schema into the development database, re-creating table structures and constraints. - `Restore data`: Loads the extracted data subset from the CSV file into the corresponding table in the development database. ### pg_dump flags The table below provides an explanation of each flag used by `pg_dump`.

| Flag          | Meaning                                                                           |
| ------------- | --------------------------------------------------------------------------------- |
| -Fc           | Dumps the database in a custom format.                                             |
| --schema-only | Dumps only the schema (table structures, indexes, constraints) without any data.   |
| -f            | Specifies the output file where the schema dump will be stored.                    |

### psql flags The table below provides an explanation of each flag used by `psql`.

| Flag | Meaning                                    |
| ---- | ------------------------------------------ |
| -c   | Executes a single command and then exits.  |
| -d   | Specifies the database name to connect to. |

### pg_restore flags The table below provides an explanation of each flag used by `pg_restore`.

| Flag          | Meaning                                                                                                          |
| ------------- | ---------------------------------------------------------------------------------------------------------------- |
| --clean       | Drops existing database objects before recreating them, ensuring a clean restore.                                 |
| --no-owner    | Ignores ownership information in the dump file, so restored objects are owned by the user running the restore.    |
| --no-acl      | Excludes access control (GRANT/REVOKE) statements from the restore, preventing permission changes.                |
| --if-exists   | Ensures that `DROP` commands (used with `--clean`) only execute if the object exists, preventing errors.          |
| --schema-only | Restores only the schema (table structures, indexes, constraints) without inserting any data.                     |
| -d            | Specifies the target database to restore into.                                                                    |

## Working with multiple tables The action above works well for dumping data from a single table. However, when working with multiple tables that have foreign key relationships, it's important to ensure that those relationships remain intact. For example, if you're dumping a subset of data from a transactions table that references a `product_id` from the products table and a `user_id` from the users table, you must also query the corresponding products and users data. This ensures that all referenced `product_id` and `user_id` values exist in the restored dataset, maintaining valid foreign key constraints. To account for this, you may need to adjust the **Dump data** and **Restore data** steps accordingly. For example, here is an amended example for the **Dump data** step.
```yml
- name: Dump data
  run: |
    $POSTGRES/psql "${{ env.PROD_DATABASE_URL }}" -c "\copy (SELECT * FROM products ORDER BY product_id DESC LIMIT 50) TO '${{ github.workspace }}/products-subset.csv' WITH CSV HEADER"
    $POSTGRES/psql "${{ env.PROD_DATABASE_URL }}" -c "\copy (SELECT * FROM transactions WHERE product_id IN (SELECT product_id FROM products ORDER BY product_id DESC LIMIT 50)) TO '${{ github.workspace }}/transactions-subset.csv' WITH CSV HEADER"
    $POSTGRES/psql "${{ env.PROD_DATABASE_URL }}" -c "\copy (SELECT * FROM users WHERE user_id IN (SELECT user_id FROM transactions WHERE product_id IN (SELECT product_id FROM products ORDER BY product_id DESC LIMIT 50))) TO '${{ github.workspace }}/users-subset.csv' WITH CSV HEADER"
```

And here is an example for the amended **Restore data** step.

```yml
- name: Restore data
  run: |
    $POSTGRES/psql "${{ env.DEV_DATABASE_URL }}" -c "\copy public.users FROM '${{ github.workspace }}/users-subset.csv' WITH CSV HEADER"
    $POSTGRES/psql "${{ env.DEV_DATABASE_URL }}" -c "\copy public.products FROM '${{ github.workspace }}/products-subset.csv' WITH CSV HEADER"
    $POSTGRES/psql "${{ env.DEV_DATABASE_URL }}" -c "\copy public.transactions FROM '${{ github.workspace }}/transactions-subset.csv' WITH CSV HEADER"
```

## Setting repository secrets Before running the Action, ensure that both `PROD_DATABASE_URL` and `DEV_DATABASE_URL` are added to your GitHub repository secrets. In your repository, go to **Settings** > **Secrets and variables** > **Actions** to add them. ## Testing the workflow To manually trigger your workflow, go to **Actions** > **Create Neon Twin**, then click **Run workflow**. From the dropdown, click the **Run workflow** button. ## Syncing with migration changes The GitHub Action runs on a recurring schedule, but you may also want it to trigger when migration changes are applied and a Pull Request is merged. To enable this, update the Action with the following code: ### Handling Pull Request Events Add a `pull_request` event and configure it to listen for merges into the `main` branch.

```diff
on:
  schedule:
    - cron: '0 0 * * *' # Runs at midnight UTC
  pull_request:
    types: [closed]
    branches:
      - main
  workflow_dispatch:
```

### Add Concurrency and Conditions To prevent conflicts between scheduled runs and runs triggered by a Pull Request, set `cancel-in-progress` to `true` under `concurrency`. Additionally, add an `if` statement to ensure the job only executes when specific conditions are met.

```diff
jobs:
  dump-and-restore:
    runs-on: ubuntu-latest
    concurrency:
      group: 'dump-and-restore'
      cancel-in-progress: true
    if: |
      github.event_name == 'schedule' ||
      github.event_name == 'workflow_dispatch' ||
      (github.event_name == 'pull_request' && github.event.pull_request.merged == true)
```

## Limitations Be aware of [usage limits](https://docs.github.com/en/actions/administering-github-actions/usage-limits-billing-and-administration#usage-limits): Each GitHub Action job can run for up to 6 hours. If a job exceeds this limit, it will be terminated and fail to complete. If your dump/restore process takes longer, consider using [self-hosted runners](https://neon.com/guides/gihub-actions-self-hosted-runners).
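If you'd rather have a run fail fast than approach that ceiling, you can cap the job explicitly with `timeout-minutes`. A sketch; the 120-minute value is an assumption you should tune to your dataset:

```yml
jobs:
  dump-and-restore:
    runs-on: ubuntu-latest
    timeout-minutes: 120 # Fail well before GitHub's 6-hour job limit
```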
## Further reading - [Automate Partial Data Dumps with PostgreSQL and GitHub Actions](https://neon.com/blog/automate-partial-data-dumps-with-postgresql-and-github-actions) - [Neon Twin: How to deploy a change tested in Neon to prod in RDS](https://neon.com/blog/neon-twin-deploy-workflow) --- # Source: https://neon.com/llms/guides-nestjs.txt # Connect a NestJS application to Neon > This document guides users on connecting a NestJS application to a Neon database by detailing the necessary configuration steps and code examples specific to Neon's environment. ## Source - [Connect a NestJS application to Neon HTML](https://neon.com/docs/guides/nestjs): The original HTML version of this documentation NestJS is a framework for building efficient, scalable Node.js server-side applications. This guide explains how to connect NestJS with Neon using a secure server-side request. To create a Neon project and access it from a NestJS application: ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Create a NestJS project and add dependencies 1. Create a NestJS project if you do not have one. For instructions, see [Quick Start](https://docs.nestjs.com/first-steps), in the NestJS documentation. 2. Add project dependencies using one of the following commands: Tab: node-postgres

```shell
npm install pg
```

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials Add a `.env` file to your project directory and add your Neon connection string to it. You can find your connection details by clicking **Connect** on the Neon **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://<user>:<password>@<endpoint_hostname>.neon.tech:<port>/<dbname>?sslmode=require&channel_binding=require"
```

## Configure the Postgres client ### 1. Create a Database Module To manage the connection to your Neon database, start by creating a **DatabaseModule** in your NestJS application. This module will handle the configuration and provisioning of the Postgres client.
Tab: node-postgres

```typescript
import { config } from 'dotenv';
import { Module } from '@nestjs/common';
import pg from 'pg';

// Load Environment Variables
config({
  path: ['.env', '.env.production', '.env.local'],
});

const sql = new pg.Pool({ connectionString: process.env.DATABASE_URL });

const dbProvider = {
  provide: 'POSTGRES_POOL',
  useValue: sql,
};

@Module({
  providers: [dbProvider],
  exports: [dbProvider],
})
export class DatabaseModule {}
```

Tab: postgres.js

```typescript
import { config } from 'dotenv';
import { Module } from '@nestjs/common';
import postgres from 'postgres';

// Load Environment Variables
config({
  path: ['.env', '.env.production', '.env.local'],
});

const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });

const dbProvider = {
  provide: 'POSTGRES_POOL',
  useValue: sql,
};

@Module({
  providers: [dbProvider],
  exports: [dbProvider],
})
export class DatabaseModule {}
```

Tab: Neon serverless driver

```typescript
import { config } from 'dotenv';
import { Module } from '@nestjs/common';
import { neon } from '@neondatabase/serverless';

// Load Environment Variables
config({
  path: ['.env', '.env.production', '.env.local'],
});

const sql = neon(process.env.DATABASE_URL);

const dbProvider = {
  provide: 'POSTGRES_POOL',
  useValue: sql,
};

@Module({
  providers: [dbProvider],
  exports: [dbProvider],
})
export class DatabaseModule {}
```

### 2. Create a Service for Database Interaction

Next, implement a service to facilitate interaction with your Postgres database. This service will use the database connection defined in the DatabaseModule.

Tab: node-postgres

```typescript
import { Injectable, Inject } from '@nestjs/common';

@Injectable()
export class AppService {
  constructor(@Inject('POSTGRES_POOL') private readonly sql: any) {}

  async getTable(name: string): Promise<any[]> {
    const client = await this.sql.connect();
    try {
      // Table names cannot be bound as query parameters,
      // so only pass trusted values for `name`.
      const { rows } = await client.query(`SELECT * FROM ${name}`);
      return rows;
    } finally {
      client.release();
    }
  }
}
```

Tab: postgres.js

```typescript
import { Injectable, Inject } from '@nestjs/common';

@Injectable()
export class AppService {
  constructor(@Inject('POSTGRES_POOL') private readonly sql: any) {}

  async getTable(name: string): Promise<any[]> {
    // postgres.js requires `unsafe` for dynamically built query strings;
    // only pass trusted values for `name`.
    return await this.sql.unsafe(`SELECT * FROM ${name}`);
  }
}
```

Tab: Neon serverless driver

```typescript
import { Injectable, Inject } from '@nestjs/common';

@Injectable()
export class AppService {
  constructor(@Inject('POSTGRES_POOL') private readonly sql: any) {}

  async getTable(name: string): Promise<any[]> {
    // Table names cannot be bound as query parameters,
    // so only pass trusted values for `name`.
    return await this.sql(`SELECT * FROM ${name}`);
  }
}
```

### 3. Integrate the Database Module and Service

Import and inject the DatabaseModule and AppService into your AppModule. This ensures that the database connection and services are available throughout your application.

```typescript
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { DatabaseModule } from './database/database.module';

@Module({
  imports: [DatabaseModule],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}
```

### 4. Define a Controller Endpoint

Finally, define a `GET` endpoint in your AppController to fetch data from your Postgres database. This endpoint will use the AppService to query the database.
```typescript
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller('/')
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  async getTable() {
    return this.appService.getTable('playing_with_neon');
  }
}
```

## Run the app

When you run `npm run start` you can expect to see output similar to the following at [localhost:3000](http://localhost:3000):

```shell
[{"id":1,"name":"c4ca4238a0","value":0.39330545},{"id":2,"name":"c81e728d9d","value":0.14468245}]
```

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with NestJS and Neon](https://github.com/neondatabase/examples/tree/main/with-nestjs)

---

# Source: https://neon.com/llms/guides-netlify-functions.txt

# Use Neon with Netlify Functions

> The document outlines the steps to integrate Neon with Netlify Functions, detailing the process of setting up a serverless function to connect to a Neon database using environment variables and connection pooling.

## Source

- [Use Neon with Netlify Functions HTML](https://neon.com/docs/guides/netlify-functions): The original HTML version of this documentation

[Netlify Functions](https://www.netlify.com/products/functions/) provide a serverless execution environment for building and deploying backend functionality without managing server infrastructure. It's integrated with Netlify's ecosystem, making it ideal for augmenting web applications with server-side logic, API integrations, and data processing tasks in a scalable way. This guide will show you how to connect to a Neon Postgres database from your Netlify Functions project. We'll use the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver) to connect to the database and make queries.

## Prerequisites

Before starting, ensure you have:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A Netlify account for deploying your site with `Functions`. Sign up at [Netlify](https://netlify.com) if necessary. While Netlify can deploy directly from a GitHub repository, we'll use the Netlify CLI tool to deploy our project manually.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed locally for developing and deploying your Functions.

## Setting up your Neon database

### Initialize a new project

After logging into the Neon Console, proceed to the [Projects](https://console.neon.tech/app/projects) section.

1. Click `New Project` to start a new one.
2. In the Neon **Dashboard**, use the `SQL Editor` from the sidebar to execute the SQL command below, creating a new table for coffee blends:

```sql
CREATE TABLE favorite_coffee_blends (
    id SERIAL PRIMARY KEY,
    name TEXT,
    origin TEXT,
    notes TEXT
);
```

Populate the table with some initial data:

```sql
INSERT INTO favorite_coffee_blends (name, origin, notes)
VALUES
    ('Morning Joy', 'Ethiopia', 'Citrus, Honey, Floral'),
    ('Dark Roast Delight', 'Colombia', 'Rich, Chocolate, Nutty'),
    ('Arabica Aroma', 'Brazil', 'Smooth, Caramel, Fruity'),
    ('Robusta Revolution', 'Vietnam', 'Strong, Bold, Bitter');
```

### Retrieve your Neon database connection string

You can find your Neon database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal.
It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

## Setting up your Netlify Functions project

We'll use the Netlify CLI to create a new project and add functions to it. To install the CLI, run:

```bash
npm install netlify-cli -g
```

To authenticate the CLI with your Netlify account, run:

```bash
netlify login
```

This command opens a browser window to authenticate your terminal session with Netlify. After logging in, you can close the browser window and interact with your Netlify account from the terminal.

### Create a new Netlify project

We will create a simple HTML webpage that fetches the coffee blends from the Neon database using a Netlify Function and displays them. To create a new `Netlify Site` project, run:

```bash
mkdir neon-netlify-example && cd neon-netlify-example
netlify sites:create
```

You will be prompted to select a team and site name. Choose a unique name for your site. This command then links the current directory to a `Site` project in your Netlify account.

```
❯ netlify sites:create
? Team: Ishan Anand’s team
? Site name (leave blank for a random name; you can change it later): neon-netlify-example

Site Created

Admin URL: https://app.netlify.com/sites/neon-netlify-example
URL: https://neon-netlify-example.netlify.app
Site ID: ed43ba05-ff6e-40a9-9a68-8f58b9ad9937

Linked to neon-netlify-example
```

### Implement the function

We'll create a new function to fetch the coffee blends from the Neon database. To set up the function entrypoint script, you can run the command below and use the settings provided:

```bash
❯ netlify functions:create get_coffee_blends
? Select the type of function you'd like to create Serverless function (Node/Go/Rust)
? Select the language of your function JavaScript
? Pick a template javascript-hello-world
◈ Creating function get_coffee_blends
◈ Created ./netlify/functions/get_coffee_blends/get_coffee_blends.js
Function created!
```

This command creates a new directory `netlify/functions/get_coffee_blends` with a `get_coffee_blends.js` file inside it. We are using the ES6 `import` syntax to implement the request handler, so we will change the script extension to `.mjs` for the runtime to recognize it. We also install the Neon serverless driver as a dependency to connect to the Neon database and fetch the data.

```bash
mv netlify/functions/get_coffee_blends/get_coffee_blends.js netlify/functions/get_coffee_blends/get_coffee_blends.mjs
npm install @neondatabase/serverless
```

Now, replace the contents of the function script with the following code:

```javascript
// netlify/functions/get_coffee_blends/get_coffee_blends.mjs
import { neon } from '@neondatabase/serverless';

export async function handler(event) {
  const sql = neon(process.env.DATABASE_URL);
  try {
    const rows = await sql('SELECT * FROM favorite_coffee_blends;');
    return {
      statusCode: 200,
      body: JSON.stringify(rows),
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: error.message }),
    };
  }
}
```

This function connects to your Neon database and fetches the list of your favorite coffee blends.

### Implement the frontend

To make use of the `Function` implemented above, we will create a simple HTML page that fetches and displays the coffee information by calling the function. Create a new file `index.html` at the root of your project with the following content:

```html

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Coffee Blends</title>
  </head>
  <body>
    <h1>My favourite coffee blends</h1>
    <ul id="blends"></ul>
    <script>
      // Fetch the coffee blends from the Netlify Function and render them
      fetch('/.netlify/functions/get_coffee_blends')
        .then((response) => response.json())
        .then((blends) => {
          const list = document.getElementById('blends');
          blends.forEach(({ name, origin, notes }) => {
            const item = document.createElement('li');
            item.textContent = `${name} (${origin}): ${notes}`;
            list.appendChild(item);
          });
        });
    </script>
  </body>
</html>
```
### Test the site locally

Set the `DATABASE_URL` environment variable in a `.env` file at the root of your project:

```text
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

We are now ready to test our Netlify site project locally. Run the following command to start a local development server:

```bash
netlify dev
```

The Netlify CLI will print the local server URL where your site is running. Open the URL in your browser to see the coffee blends fetched from your Neon database.

### Deploying your Netlify Site and Function

Deploying is straightforward with the Netlify CLI. However, we need to set the `DATABASE_URL` environment variable for the Netlify deployed site too. You can use the CLI to set it.

```bash
netlify env:set DATABASE_URL "YOUR_NEON_CONNECTION_STRING"
```

Now, to deploy your site and function, run the following command. When asked to provide a publish directory, enter `.` to deploy the entire project.

```bash
netlify deploy --prod
```

The CLI will build and deploy your site and functions to Netlify. After deployment, Netlify provides a URL for your live function. Navigate to the URL in your browser to check that the deployment was successful.

## Removing the example application and Neon project

For cleanup, delete your Netlify site and functions via the Netlify dashboard or CLI. Consult the [Netlify documentation](https://docs.netlify.com/) for detailed instructions. To remove your Neon project, follow the deletion steps in Neon's documentation under [Manage Projects](https://neon.com/docs/manage/projects#delete-a-project).

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Use Neon with Netlify Functions](https://github.com/neondatabase/examples/tree/main/deploy-with-netlify-functions): Connect a Neon Postgres database to your Netlify Functions application

## Resources

- [Netlify Functions](https://www.netlify.com/products/functions/)
- [Netlify CLI](https://docs.netlify.com/cli/get-started/)
- [Neon](https://neon.tech)

---

# Source: https://neon.com/llms/guides-nextjs.txt

# Connect a Next.js application to Neon

> The document outlines the steps to connect a Next.js application to a Neon database, detailing configuration requirements and code examples to establish a seamless integration.

## Source

- [Connect a Next.js application to Neon HTML](https://neon.com/docs/guides/nextjs): The original HTML version of this documentation

Next.js by Vercel is an open-source web development framework that enables React-based web applications. This topic describes how to create a Neon project and access it from a Next.js application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a Next.js project and add dependencies

1. Create a Next.js project if you do not have one. For instructions, see [Create a Next.js App](https://nextjs.org/docs/app/getting-started/installation) in the Vercel documentation.
2. Add project dependencies using one of the following commands:
Tab: node-postgres

```shell
npm install pg
```

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection string to it. You can find your Neon database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Configure the Postgres client

There are multiple ways to make server-side requests with Next.js. See below for the different implementations.

### App Router

There are two methods for fetching and mutating data using server-side requests in the Next.js App Router:

1. `Server Components` fetch data at runtime on the server.
2. `Server Actions` are functions executed on the server to perform data mutations.

#### Server Components

In your server components using the App Router, add the following code snippet to connect to your Neon database:

Tab: node-postgres

```javascript
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: true,
});

async function getData() {
  const client = await pool.connect();
  try {
    const { rows } = await client.query('SELECT version()');
    return rows[0].version;
  } finally {
    client.release();
  }
}

export default async function Page() {
  const data = await getData();
  return <>{data}</>;
}
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

async function getData() {
  const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });
  const response = await sql`SELECT version()`;
  return response[0].version;
}

export default async function Page() {
  const data = await getData();
  return <>{data}</>;
}
```

Tab: Neon serverless driver

```javascript
import { neon } from '@neondatabase/serverless';

async function getData() {
  const sql = neon(process.env.DATABASE_URL);
  const response = await sql`SELECT version()`;
  return response[0].version;
}

export default async function Page() {
  const data = await getData();
  return <>{data}</>;
}
```

#### Server Actions

In your server actions using the App Router, add the following code snippet to connect to your Neon database:

Tab: node-postgres

```javascript
import { Pool } from 'pg';

export default async function Page() {
  async function create(formData: FormData) {
    "use server";
    const pool = new Pool({ connectionString: process.env.DATABASE_URL, ssl: true });
    const client = await pool.connect();
    await client.query("CREATE TABLE IF NOT EXISTS comments (comment TEXT)");
    const comment = formData.get("comment");
    await client.query("INSERT INTO comments (comment) VALUES ($1)", [comment]);
  }
  return (
    <form action={create}>
      <input type="text" placeholder="write a comment" name="comment" />
      <button type="submit">Submit</button>
    </form>
  );
}
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

export default async function Page() {
  async function create(formData: FormData) {
    "use server";
    const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });
    await sql`CREATE TABLE IF NOT EXISTS comments (comment TEXT)`;
    const comment = formData.get("comment");
    await sql`INSERT INTO comments (comment) VALUES (${comment})`;
  }
  return (
    <form action={create}>
      <input type="text" placeholder="write a comment" name="comment" />
      <button type="submit">Submit</button>
    </form>
  );
}
```

Tab: Neon serverless driver

```javascript
import { neon } from '@neondatabase/serverless';

export default async function Page() {
  async function create(formData: FormData) {
    "use server";
    const sql = neon(process.env.DATABASE_URL);
    await sql`CREATE TABLE IF NOT EXISTS comments (comment TEXT)`;
    const comment = formData.get("comment");
    await sql("INSERT INTO comments (comment) VALUES ($1)", [comment]);
  }
  return (
    <form action={create}>
      <input type="text" placeholder="write a comment" name="comment" />
      <button type="submit">Submit</button>
    </form>
  );
}
```

### Pages Router

There are two methods for fetching data using server-side requests in the Next.js Pages Router:

1. `getServerSideProps` fetches data at runtime so that content is always fresh.
2. `getStaticProps` pre-renders pages at build time for data that is static or changes infrequently.

#### getServerSideProps

From `getServerSideProps` using the Pages Router, add the following code snippet to connect to your Neon database:

Tab: node-postgres

```javascript
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: true,
});

export async function getServerSideProps() {
  const client = await pool.connect();
  try {
    const response = await client.query('SELECT version()');
    return { props: { data: response.rows[0].version } };
  } finally {
    client.release();
  }
}

export default function Page({ data }) {
  return <>{data}</>;
}
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

export async function getServerSideProps() {
  const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });
  const response = await sql`SELECT version()`;
  return { props: { data: response[0].version } };
}

export default function Page({ data }) {
  return <>{data}</>;
}
```

Tab: Neon serverless driver

```javascript
import { neon } from '@neondatabase/serverless';

export async function getServerSideProps() {
  const sql = neon(process.env.DATABASE_URL);
  const response = await sql`SELECT version()`;
  return { props: { data: response[0].version } };
}

export default function Page({ data }) {
  return <>{data}</>;
}
```

#### getStaticProps

From `getStaticProps` using the Pages Router, add the following code snippet to connect to your Neon database:

Tab: node-postgres

```javascript
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: true,
});

export async function getStaticProps() {
  const client = await pool.connect();
  try {
    const response = await client.query('SELECT version()');
    return { props: { data: response.rows[0].version } };
  } finally {
    client.release();
  }
}

export default function Page({ data }) {
  return <>{data}</>;
}
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

export async function getStaticProps() {
  const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });
  const response = await sql`SELECT version()`;
  return { props: { data: response[0].version } };
}

export default function Page({ data }) {
  return <>{data}</>;
}
```

Tab: Neon serverless driver

```javascript
import { neon } from '@neondatabase/serverless';

export async function getStaticProps() {
  const sql = neon(process.env.DATABASE_URL);
  const response = await sql`SELECT version()`;
  return { props: { data: response[0].version } };
}

export default function Page({ data }) {
  return <>{data}</>;
}
```

### Serverless Functions

From your Serverless Functions, add the following code snippet to connect to your Neon database:

Tab: node-postgres

```javascript
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: true,
});

export default async function handler(req, res) {
  const client = await pool.connect();
  try {
    const { rows } = await client.query('SELECT version()');
    const { version } = rows[0];
    res.status(200).json({ version });
  } finally {
    client.release();
  }
}
```

Tab: postgres.js

```javascript
import postgres from 'postgres';

const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' });

export default async function handler(req, res) {
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  res.status(200).json({ version });
}
```
Tab: Neon serverless driver

```javascript
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);

export default async function handler(req, res) {
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  res.status(200).json({ version });
}
```

### Edge Functions

From your Edge Functions, add the following code snippet and connect to your Neon database using the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver):

```javascript
export const config = {
  runtime: 'edge',
};

import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);

export default async function handler(req) {
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  return Response.json({ version });
}
```

## Run the app

When you run `npm run dev` you can expect to see the following on [localhost:3000](http://localhost:3000):

```shell
PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
```

### Where to upload and serve files?

Neon does not provide a built-in file storage service. For managing binary file data (blobs), we recommend a pattern that leverages dedicated, specialized storage services. Follow our guide on [File Storage](https://neon.com/docs/guides/file-storage) to learn more about how to store files in external object storage and file management services and track metadata in Neon.

## Source code

You can find the source code for the applications described in this guide on GitHub.

- [Get started with Next.js Edge Functions and Neon](https://github.com/neondatabase/examples/tree/main/with-nextjs-edge-functions)
- [Get started with Next.js Serverless Functions and Neon](https://github.com/neondatabase/examples/tree/main/with-nextjs-serverless-functions)
- [Get started with Next.js getServerSideProps and Neon](https://github.com/neondatabase/examples/tree/main/with-nextjs-get-server-side-props)
- [Get started with Next.js getStaticProps and Neon](https://github.com/neondatabase/examples/tree/main/with-nextjs-get-static-props)
- [Get started with Next.js Server Actions and Neon](https://github.com/neondatabase/examples/tree/main/with-nextjs-server-actions)
- [Get started with Next.js Server Components and Neon](https://github.com/neondatabase/examples/tree/main/with-nextjs-server-components)

---

# Source: https://neon.com/llms/guides-node.txt

# Connect a Node.js application to Neon

> This document guides users on connecting a Node.js application to a Neon database by detailing the necessary steps and configurations required for successful integration.

## Source

- [Connect a Node.js application to Neon HTML](https://neon.com/docs/guides/node): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from a Node.js application. Examples are provided for using the [node-postgres](https://www.npmjs.com/package/pg) and [Postgres.js](https://www.npmjs.com/package/postgres) clients. Use the client you prefer.

**Note**: The same configuration steps can be used for Express and Next.js applications.

To connect to Neon from a Node.js application:

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.
## Create a Node.js project and add dependencies

1. Create a Node.js project and change to the newly created directory.

   ```shell
   mkdir neon-nodejs-example
   cd neon-nodejs-example
   npm init -y
   ```

2. Add project dependencies using one of the following commands:

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless dotenv
```

Tab: node-postgres

```shell
npm install pg dotenv
```

Tab: postgres.js

```shell
npm install postgres dotenv
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection details to it. You can find your Neon database connection details by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. Please select Node.js from the **Connection string** dropdown. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
PGHOST='[neon_hostname]'
PGDATABASE='[dbname]'
PGUSER='[user]'
PGPASSWORD='[password]'
ENDPOINT_ID='[endpoint_id]'
```

**Note**: A special `ENDPOINT_ID` variable is included in the `.env` file above. This variable can be used with older Postgres clients that do not support Server Name Indication (SNI), which Neon relies on to route incoming connections. If you are using a newer [node-postgres](https://node-postgres.com/) or [postgres.js](https://github.com/porsager/postgres) client, you won't need it. For more information, see [Endpoint ID variable](https://neon.com/docs/guides/node#endpoint-id-variable).

**Important**: To ensure the security of your data, never expose your Neon credentials to the browser.

## Configure the Postgres client

Add an `app.js` file to your project directory and add the following code snippet to connect to your Neon database:

Tab: Neon serverless driver

```javascript
require('dotenv').config();
const { neon } = require('@neondatabase/serverless');

const { PGHOST, PGDATABASE, PGUSER, PGPASSWORD } = process.env;

const sql = neon(
  `postgresql://${PGUSER}:${PGPASSWORD}@${PGHOST}/${PGDATABASE}?sslmode=require&channel_binding=require`
);

async function getPgVersion() {
  const result = await sql`SELECT version()`;
  console.log(result[0]);
}

getPgVersion();
```

Tab: node-postgres

```javascript
require('dotenv').config();
const { Pool } = require('pg');

const { PGHOST, PGDATABASE, PGUSER, PGPASSWORD } = process.env;

const pool = new Pool({
  host: PGHOST,
  database: PGDATABASE,
  user: PGUSER,
  password: PGPASSWORD,
  port: 5432,
  ssl: {
    require: true,
  },
});

async function getPgVersion() {
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT version()');
    console.log(result.rows[0]);
  } finally {
    client.release();
  }
}

getPgVersion();
```

Tab: postgres.js

```javascript
require('dotenv').config();
const postgres = require('postgres');

const { PGHOST, PGDATABASE, PGUSER, PGPASSWORD } = process.env;

const sql = postgres({
  host: PGHOST,
  database: PGDATABASE,
  username: PGUSER,
  password: PGPASSWORD,
  port: 5432,
  ssl: 'require',
});

async function getPgVersion() {
  const result = await sql`select version()`;
  console.log(result[0]);
}

getPgVersion();
```
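The examples above only run `SELECT version()`. For real queries, pass user-supplied values as bound parameters rather than concatenating them into the SQL string. Here is a minimal sketch using the Neon serverless driver; the `books` table and the single `DATABASE_URL` variable are hypothetical, so adapt them to your own schema and `.env` layout:

```typescript
import 'dotenv/config';
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

async function getRecentBooks(year: number) {
  // Values interpolated into the tagged template are sent as
  // query parameters, not spliced into the SQL text.
  return await sql`SELECT title FROM books WHERE published_year > ${year}`;
}

getRecentBooks(2020).then((rows) => console.log(rows));
```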
## Run app.js

Run `node app.js` to view the result.

```shell
{ version: 'PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit' }
```

## Endpoint ID variable

For older clients that do not support Server Name Indication (SNI), the `postgres.js` example below shows how to include the `ENDPOINT_ID` variable in your application's connection configuration. This is a workaround that is not required if you are using a newer [node-postgres](https://node-postgres.com/) or [postgres.js](https://github.com/porsager/postgres) client. For more information about this workaround and when it is required, see [The endpoint ID is not specified](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified) in our [connection errors](https://neon.com/docs/connect/connection-errors) documentation.

```javascript
// app.js
require('dotenv').config();
const postgres = require('postgres');

const { PGHOST, PGDATABASE, PGUSER, PGPASSWORD, ENDPOINT_ID } = process.env;

const sql = postgres({
  host: PGHOST,
  database: PGDATABASE,
  username: PGUSER,
  password: PGPASSWORD,
  port: 5432,
  ssl: 'require',
  connection: {
    options: `project=${ENDPOINT_ID}`,
  },
});

async function getPgVersion() {
  const result = await sql`select version()`;
  console.log(result);
}

getPgVersion();
```

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with Node.js and Neon](https://github.com/neondatabase/examples/tree/main/with-nodejs)

## Community resources

- [Serverless Node.js Tutorial – Neon Serverless Postgres, AWS Lambda, Next.js, Vercel](https://youtu.be/cxgAN7T3rq8)

---

# Source: https://neon.com/llms/guides-nuxt.txt

# Connect Nuxt to Postgres on Neon

> This document guides users on connecting a Nuxt application to a Postgres database hosted on Neon, detailing the necessary steps and configurations for seamless integration.

## Source

- [Connect Nuxt to Postgres on Neon HTML](https://neon.com/docs/guides/nuxt): The original HTML version of this documentation

[Nuxt](https://nuxt.com/) is an open-source full-stack meta framework that enables Vue-based web applications. This topic describes how to connect a Nuxt application to a Postgres database on Neon. To create a Neon project and access it from a Nuxt application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a Nuxt project and add dependencies

1. Create a Nuxt project if you do not have one. For instructions, see [Create a Nuxt Project](https://nuxt.com/docs/getting-started/installation#new-project) in the Nuxt documentation.
2. Add project dependencies using one of the following commands:
Tab: node-postgres

```shell
npm install pg
```

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection string to it. You can find your connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
NUXT_DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Configure the Postgres client

First, make sure you load the `NUXT_DATABASE_URL` from your `.env` file in Nuxt's runtime configuration. In `nuxt.config.js`:

```javascript
export default defineNuxtConfig({
  runtimeConfig: {
    // Automatically populated from the NUXT_DATABASE_URL environment variable
    databaseUrl: '',
  },
});
```

Next, use the Neon serverless driver to create a database connection. Here's an example configuration:

```javascript
import { neon } from '@neondatabase/serverless';

export default defineCachedEventHandler(
  async (event) => {
    const { databaseUrl } = useRuntimeConfig();
    const db = neon(databaseUrl);
    const result = await db`SELECT version()`;
    return result;
  },
  {
    maxAge: 60 * 60 * 24, // cache it for a day
  }
);
```

**Note**:

- This example demonstrates using the Neon serverless driver to run a simple query. The `useRuntimeConfig` method accesses the `databaseUrl` set in your Nuxt runtime configuration.
- Async handling: make sure the handler is async if you are awaiting the database query result.
- Make sure `maxAge` caching fits your application's needs. In this example, it's set to cache results for a day. Adjust as necessary.

## Run the app

When you run `npm run dev` you can expect to see the following on [localhost:3000](http://localhost:3000):

```shell
PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
```

## Source code

You can find the source code for the applications described in this guide on GitHub.

- [Get started with Nuxt and Neon](https://github.com/neondatabase/examples/tree/main/with-nuxt)

---

# Source: https://neon.com/llms/guides-oauth-integration.txt

# Neon OAuth integration

> The Neon OAuth integration documentation outlines the steps for configuring OAuth authentication with Neon, enabling secure access and authorization for applications using Neon's database services.

## Source

- [Neon OAuth integration HTML](https://neon.com/docs/guides/oauth-integration): The original HTML version of this documentation

You can integrate your application or service with Neon using OAuth. The Neon OAuth integration enables your application to interact with Neon user accounts, carrying out permitted actions on their behalf. Our integration does not require direct access to user login credentials and is conducted with their approval, ensuring data privacy and security.

To set up the integration and create a Neon OAuth application, you can apply on our [Partners page](https://neon.com/partners). You will need to provide the following information:

- Your name and email address (this should be an individual email address, not a shared inbox address)
- Your company name
- Details about your application, including your application name, what it does, and a link to the website.
- Callback URL(s), which are used to redirect users after completing the authorization flow.
```text
https://app.company.com/api/integrations/neon/callback
https://app.stage.company.com/api/integrations/neon/callback
http://localhost:3000/api/integrations/neon/callback
```

- Required scopes, defining the type of access you need. We provide scopes for managing both projects and organizations. For a list of all available scopes, see [Supported OAuth Scopes](https://neon.com/docs/guides/oauth-integration#supported-oauth-scopes).
- Whether or not you will make API calls from a backend.
- A logo to be displayed on Neon's OAuth consent dialog when users authorize your application to access their Neon account.

After your application is reviewed, Neon will provide you with a **client ID** and, if applicable, a **client secret**. Client secrets are only provided for backend clients, so non-backend applications (e.g. browser-based apps or CLI tools) will not receive a secret. These credentials are sensitive and should be stored securely.

## How the OAuth integration works

Here is a high-level overview of how Neon's OAuth implementation works:

1. The user sends a request to your API endpoint to initiate the OAuth flow by clicking a button or link in your application.
2. An authorization URL is generated.
3. The user is redirected to Neon's OAuth consent screen to authorize the application.
4. The user logs in and authorizes the application, granting it the necessary permissions.
5. A redirect is performed to a callback endpoint, which includes an authorization code that your application exchanges for an access token, allowing it to manage Neon resources on the user's behalf.

## About the Neon OAuth server

The Neon OAuth server implements the OpenID Connect protocol and supports [OpenID Connect Discovery specification](https://openid.net/specs/openid-connect-discovery-1_0.html). The server metadata is published at the following well-known URL: [https://oauth2.neon.tech/.well-known/openid-configuration](https://oauth2.neon.tech/.well-known/openid-configuration).
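You can also fetch this metadata programmatically and read the endpoints from it rather than hard-coding them; a minimal TypeScript sketch:

```typescript
// Fetch the Neon OAuth server's OpenID Connect discovery document.
const res = await fetch('https://oauth2.neon.tech/.well-known/openid-configuration');
const metadata = await res.json();

// The endpoints used later in this guide are published here:
console.log(metadata.authorization_endpoint); // https://oauth2.neon.tech/oauth2/auth
console.log(metadata.token_endpoint); // https://oauth2.neon.tech/oauth2/token
```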
Here is an example response:

```json
{
  "issuer": "https://oauth2.neon.tech/",
  "authorization_endpoint": "https://oauth2.neon.tech/oauth2/auth",
  "token_endpoint": "https://oauth2.neon.tech/oauth2/token",
  "jwks_uri": "https://oauth2.neon.tech/.well-known/jwks.json",
  "subject_types_supported": ["public"],
  "response_types_supported": [
    "code",
    "code id_token",
    "id_token",
    "token id_token",
    "token",
    "token id_token code"
  ],
  "claims_supported": ["sub"],
  "grant_types_supported": ["authorization_code", "implicit", "client_credentials", "refresh_token"],
  "response_modes_supported": ["query", "fragment"],
  "userinfo_endpoint": "https://oauth2.neon.tech/userinfo",
  "scopes_supported": ["offline_access", "offline", "openid"],
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "client_secret_basic",
    "private_key_jwt",
    "none"
  ],
  "userinfo_signing_alg_values_supported": ["none", "RS256"],
  "id_token_signing_alg_values_supported": ["RS256"],
  "request_parameter_supported": true,
  "request_uri_parameter_supported": true,
  "require_request_uri_registration": true,
  "claims_parameter_supported": false,
  "revocation_endpoint": "https://oauth2.neon.tech/oauth2/revoke",
  "backchannel_logout_supported": true,
  "backchannel_logout_session_supported": true,
  "frontchannel_logout_supported": true,
  "frontchannel_logout_session_supported": true,
  "end_session_endpoint": "https://oauth2.neon.tech/oauth2/sessions/logout",
  "request_object_signing_alg_values_supported": ["RS256", "none"],
  "code_challenge_methods_supported": ["plain", "S256"]
}
```

**Note**: You must add `offline` and `offline_access` scopes to your request to receive the `refresh_token`.

Depending on the OpenID client you're using, you might not need to explicitly interact with the API endpoints listed below. OAuth 2.0 clients typically handle this interaction automatically. For example, the [Neon CLI](https://neon.com/docs/reference/neon-cli), written in TypeScript, interacts with the API endpoints automatically to retrieve the `refresh_token` and `access_token`. For an example, refer to this part of the Neon CLI [source code](https://github.com/neondatabase/neonctl/blob/3764c5d5675197ef9bc7ed78d5531bd318f7f13b/src/auth.ts#L63-L81). In this example, the `oauthHost` is `https://oauth2.neon.tech`.

## Supported OAuth Scopes

The following OAuth scopes allow varying degrees of access to Neon resources:

| **Project scopes** | **Scope Name**                      |
| :----------------- | :---------------------------------- |
| Create Projects    | `urn:neoncloud:projects:create`     |
| Read Projects      | `urn:neoncloud:projects:read`       |
| Modify Projects    | `urn:neoncloud:projects:update`     |
| Delete Projects    | `urn:neoncloud:projects:delete`     |
| Manage Projects    | `urn:neoncloud:projects:permission` |

| **Organization scopes**         | **Scope Name**                  |
| :------------------------------ | :------------------------------ |
| Create Organizations            | `urn:neoncloud:orgs:create`     |
| Read Organizations              | `urn:neoncloud:orgs:read`       |
| Update Organizations            | `urn:neoncloud:orgs:update`     |
| Delete Organizations            | `urn:neoncloud:orgs:delete`     |
| Manage Organization Permissions | `urn:neoncloud:orgs:permission` |

You must choose from these predefined scopes when requesting access; custom scopes are not supported.

Let's now go through the full flow, step by step:

## Initiating the OAuth flow

To initiate the OAuth flow, you need to generate an authorization URL.
You can do that by directing your users to `https://oauth2.neon.tech/oauth2/auth` while passing the following query parameters:

- `client_id`: your OAuth application's ID (provided to you by Neon after your application is reviewed)
- `redirect_uri`: the full URL that Neon should redirect users to after authorizing your application. The URL should match at least one of the callback URLs you provided when applying to become a partner.
- `scope`: This is a space-separated list of predefined scopes that define the level of access you want to request. For a full list of supported scopes and their meanings, see the [Supported OAuth Scopes](https://neon.com/docs/guides/oauth-integration#supported-oauth-scopes) section.

  **Example:**

  ```text
  urn:neoncloud:projects:create urn:neoncloud:projects:read urn:neoncloud:projects:update urn:neoncloud:projects:delete urn:neoncloud:orgs:read
  ```

- `response_type`: This should be set to `code` to indicate that you are using the [Authorization Code grant type](https://oauth.net/2/grant-types/authorization-code/).
- `code_challenge`: a value derived from a random code verifier ([PKCE](https://oauth.net/2/pkce/)), used to verify the integrity of the authorization code exchange.
- `state`: This is a random string that is returned to your callback URL. You can use this parameter to verify that the request came from your application and not from a third party.

## Authorization URL

Here is an example of what the authorization URL might look like:

```text
https://oauth2.neon.tech/oauth2/auth?client_id=neon-experimental&scope=openid%20offline%20offline_access%20urn%3Aneoncloud%3Aprojects%3Acreate%20urn%3Aneoncloud%3Aprojects%3Aread%20urn%3Aneoncloud%3Aprojects%3Aupdate%20urn%3Aneoncloud%3Aprojects%3Adelete&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Fapi%2Fauth%2Fcallback%2Fneon&grant_type=authorization_code&state=H58y-rSTebc3QmNbRjNTX9dL73-IyoU2T_WNievO9as&code_challenge=99XcbwOFU6iEsvXr77Xxwsk9I0GL4c4c4Q8yPIVrF_0&code_challenge_method=S256
```

## OAuth consent screen

After being redirected to the authorization URL, the user is presented with Neon's OAuth consent screen, which is pre-populated with the scopes you requested. From the consent screen, the user is able to review the scopes and authorize the application to connect their Neon account.

**Note**: The [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) provides a [Get current user details](https://api-docs.neon.tech/reference/getcurrentuserinfo) endpoint for retrieving information about the currently authorized Neon user.

## Authorization code is returned to your callback URL

After successfully completing the authorization flow, the user is redirected to the callback URL with the following query parameters appended to the URL:

- `code`: an authorization code that will be exchanged for an access token
- `scope`: the scopes that the user authorized your application to access
- `state`: you can compare the value of this parameter with the original `state` you provided in the previous step to ensure that the request came from your application and not from a third party

## Exchanging the authorization code for an access token

You can now exchange the authorization code returned from the previous step for an access token. To do that, you need to send a `POST` request to `https://oauth2.neon.tech/oauth2/token` with the following parameters:

- `client_id`: your OAuth application's ID.
- `redirect_uri`: the full URL that Neon should redirect users to after authorizing your application. The URL should match at least one of the callback URLs you provided when applying to become a partner.
- `client_secret`: your OAuth application's secret
- `grant_type`: set this to `authorization_code` to indicate that you are using the [Authorization Code grant type](https://oauth.net/2/grant-types/authorization-code/)
- `code`: the authorization code returned from the previous step

The response object includes an `access_token` value, required for making requests to the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) on your users' behalf. This value must be supplied in the Authorization header of the HTTP request when sending requests to the Neon API.
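For illustration, here is a minimal TypeScript sketch of that token exchange. The endpoint and form parameters come from the steps above; the environment variable names and the callback URL are placeholders, so substitute your own values:

```typescript
// Hedged sketch of the authorization-code exchange described above.
// NEON_CLIENT_ID / NEON_CLIENT_SECRET and the redirect_uri are placeholders.
async function exchangeAuthorizationCode(code: string) {
  const response = await fetch('https://oauth2.neon.tech/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      client_id: process.env.NEON_CLIENT_ID!,
      client_secret: process.env.NEON_CLIENT_SECRET!,
      redirect_uri: 'https://app.company.com/api/integrations/neon/callback',
      grant_type: 'authorization_code',
      code,
    }),
  });
  if (!response.ok) {
    throw new Error(`Token exchange failed with status ${response.status}`);
  }
  // Contains access_token (and refresh_token if the offline scopes were requested).
  return response.json();
}
```

The returned `access_token` is then sent as a bearer token in the `Authorization` header of your Neon API requests.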
## Example OAuth applications

For an example application that leverages the Neon OAuth integration, see the [Visualizing Neon Database Branches](https://neon-experimental.vercel.app) application. You can find the application code on GitHub.

- [Neon Branches Visualizer](https://github.com/neondatabase/neon-branches-visualizer): A Neon branching visualizer app showcasing how to build an OAuth integration with Neon

---

# Source: https://neon.com/llms/guides-opentelemetry.txt

# OpenTelemetry

> The OpenTelemetry integration documentation for Neon outlines the steps to implement and configure OpenTelemetry for monitoring and tracing within Neon environments, detailing setup procedures and configuration options specific to Neon's infrastructure.

## Source

- [OpenTelemetry HTML](https://neon.com/docs/guides/opentelemetry): The original HTML version of this documentation

**Note** Beta: **OpenTelemetry integration** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

What you will learn:

- How to configure OpenTelemetry exports from Neon
- Setup with Grafana OSS and Tempo
- Integration with Grafana Cloud
- Example config using New Relic

External docs:

- [OpenTelemetry Protocol (OTLP) Specification](https://opentelemetry.io/docs/specs/otlp/)
- [Grafana Labs OpenTelemetry Documentation](https://grafana.com/docs/opentelemetry/)
- [New Relic OpenTelemetry guide](https://docs.newrelic.com/docs/opentelemetry/best-practices/opentelemetry-otlp/)

Available for Scale plan users, the Neon OpenTelemetry integration lets you export metrics and Postgres logs to any OpenTelemetry Protocol (OTLP) compatible observability platform. This gives you the flexibility to send your Neon data to your preferred monitoring solution, whether that's New Relic, Grafana Cloud, Honeycomb, or any other OTEL-compatible service.

## How it works

The integration uses the OpenTelemetry Protocol (OTLP) to securely transmit Neon metrics and Postgres logs to your chosen destination. By configuring the integration with your OTEL endpoint and authentication credentials, Neon automatically sends data from all computes in your project.

**Note**: Data is sent for all computes in your Neon project. For example, if you have multiple branches, each with an attached compute, both metrics and logs will be collected and sent for each compute.

### Neon metrics

The integration exports a comprehensive set of metrics including:

- **Connection counts** — Tracks active and idle database connections.
- **Database size** — Monitors total size of all databases in bytes.
- **Replication delay** — Measures replication lag in bytes and seconds.
- **Compute metrics** — Includes CPU and memory usage statistics for your compute. ### Postgres logs **Note** Beta: **Postgres logs export** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback). The Neon OpenTelemetry integration can forward Postgres logs to your destination platform. These logs provide visibility into database activity, errors, and performance. ## Prerequisites Before getting started, ensure the following: - You have a Neon account and project. If not, see [Sign up for a Neon account](https://neon.com/docs/get-started/signing-up). - You have an OpenTelemetry-compatible observability platform account and know your OTLP endpoint URL and authentication credentials (API key, bearer token, or basic auth). ## Set up your observability platform Choose your preferred observability platform and follow the setup instructions: ### Grafana OSS with Docker OTEL LGTM For a cost-effective, open-source monitoring stack, you can set up the complete LGTM stack (Loki for logs, Grafana for visualization, Tempo for traces, and Mimir for metrics) with OpenTelemetry integration. **Quick setup with Docker:** 1. **Clone and start the stack**: ```bash git clone https://github.com/grafana/docker-otel-lgtm.git cd docker-otel-lgtm docker compose up -d ``` This provides: - **Grafana** at http://localhost:3000 (admin/admin) - **OpenTelemetry Collector** at http://localhost:4318 (HTTP) and localhost:4317 (gRPC) - **Prometheus/Mimir**, **Loki**, and **Tempo** for complete observability For detailed configuration, see the [docker-otel-lgtm documentation](https://github.com/grafana/docker-otel-lgtm). ### Grafana Cloud For a fully managed solution, see the dedicated [Grafana Cloud integration guide](https://neon.com/docs/guides/grafana-cloud). ### New Relic If you use New Relic, you'll need to sign up for an account and get your license key. 1. Sign up for a free account at [newrelic.com](https://newrelic.com) if you haven't already. 2. Once signed in, you'll need your New Relic license key for authentication. If you're onboarding for the first time, copy the license key when it's offered to you (this is your **Original account** key). **Tip**: If you get stuck in New Relic's onboarding screens and don't see a way to proceed, try opening the Logs or Data Explorer pages in a new browser tab. This can sometimes let you access the main New Relic UI and continue with your setup. If you missed copying your license key during onboarding, you can create a new one: choose **Ingest - License** as the type. Details: Create New Relic license key 1. Click on your user menu in the bottom left corner. 2. Select **API Keys** from the menu. 3. Click **Create a key** → choose **Ingest - License**. Copy the key immediately (you can't view it again later). Your license key will look something like `eu01xxaa1234567890abcdef1234567890NRAL` (the format varies by region). ## Open the OpenTelemetry integration 1. In the Neon Console, navigate to the **Integrations** page in your Neon project. 2. Locate the **OpenTelemetry** card and click **Add**. The sidebar form opens for you to enter your platform's details. ## Select data to export Choose what data you want to export (at the top of the form): - **Metrics**: System metrics and database statistics (CPU, memory, connections, etc.) 
- **Postgres logs**: Error messages, warnings, connection events, and system notifications

You can enable either or both options based on your monitoring needs.

## Configure the connection

1. **Select your connection protocol**: For most platforms, choose **HTTP** (recommended), which uses HTTP/2 for efficient data transmission. Some environments may require **gRPC** instead.

2. **Enter your Endpoint URL** based on your platform:

   **For Grafana OSS (docker-otel-lgtm)**:
   - `http://localhost:4318` (if running locally)
   - `http://your-server-ip:4318` (if running on a remote server)

   **For Grafana Cloud**:
   - Use your Grafana Cloud OTLP endpoint (typically `https://otlp-gateway-{region}.grafana.net/otlp`)
   - See the [Grafana Cloud guide](https://neon.com/docs/guides/grafana-cloud) for details

   **For New Relic**:
   - US: `https://otlp.nr-data.net`
   - Europe: `https://otlp.eu01.nr-data.net`
   - See [New Relic's endpoint documentation](https://docs.newrelic.com/docs/opentelemetry/best-practices/opentelemetry-otlp/#configure-endpoint-port-protocol) for other regions

   **Note**: When you configure an OTLP endpoint URL in Neon, you should provide only the **base URL** of your collector or observability platform. The OpenTelemetry Collector automatically appends the correct signal-specific paths:
   - `/v1/metrics` for metrics
   - `/v1/logs` for logs
   - `/v1/traces` for traces

   For example, if you enter:

   ```
   https://dev-thanos-receive.example.com/api/v1/otlp
   ```

   the Collector will send requests to:

   ```
   https://dev-thanos-receive.example.com/api/v1/otlp/v1/metrics
   ```

   and

   ```
   https://dev-thanos-receive.example.com/api/v1/otlp/v1/logs
   ```

   Make sure your observability platform is configured to receive data at these appended paths. If your platform expects data directly at the base URL without suffixes, you may need to adjust the configuration on that side.

3. Configure authentication:

   **For Grafana OSS**:
   - No authentication required for local docker setup

   **For Grafana Cloud**:
   - Use **Basic** authentication with a Base64-encoded `[instance_id]:[api_token]` pair
   - Get your instance ID and API token from the Grafana Cloud Portal by clicking on the **OpenTelemetry** card

   **For New Relic**:
   - Use **Bearer** or **API Key** with your New Relic license key

   **For other platforms**: Choose the appropriate method:
   - **Bearer**: Enter your bearer token or API key
   - **Basic**: Provide your username and password credentials
   - **API Key**: Enter your API key

## Configure resource attributes

Neon automatically organizes your data into separate service entities: your configured service name will receive Postgres logs, while metrics are split into `compute-host-metrics` (infrastructure metrics) and `sql-metrics` (database metrics).

1. In the **Resource** section, configure the `service.name` attribute to identify your Neon project in your observability platform. For example, you might use "neon-postgres-test" or your actual project name.
2. Optionally, you can add additional resource attributes by providing a value in the second field to further categorize or filter your data in your observability platform.

## Complete the setup

Click **Add** to save your configuration and start the data export.

## Verify your integration is working

Your Neon data should start appearing in your observability platform within a few minutes.

### For Grafana (OSS and Cloud) users

1. **Access Grafana Explore**: Visit `http://localhost:3000` (admin/admin) or your Grafana Cloud instance and navigate to **Explore**
2. **Check metrics**: Select your **Prometheus** data source and use this query to check if data is flowing:

   ```promql
   neon_connection_counts
   ```

3. **Create dashboards**: You can visualize using [Grafana Drilldown apps](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/) or use the dashboard JSON from the [Grafana Cloud documentation](https://neon.com/docs/guides/grafana-cloud#create-a-monitoring-dashboard)

### For New Relic users

Use these queries to check if data is flowing:

```sql
FROM Metric SELECT * SINCE 1 hour ago
FROM Log SELECT * SINCE 1 hour ago
```

**Success looks similar to this:**

_Metrics flowing into New Relic_

_Postgres logs flowing into New Relic_

**Find your data under APM & Services**

- **Logs**: Check your configured service name in APM & Services (e.g., `neon-postgres-test`)
- **Metrics**: Look for the auto-created `compute-host-metrics` and `sql-metrics` services

**Note**: You can modify these settings later by editing your integration configuration from the **Integrations** page.

**Note**: Neon computes only send logs and metrics when they are active. If the [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature is enabled and a compute is suspended due to inactivity, no logs or metrics will be sent during the suspension. This may result in gaps in your data. If you notice missing data, check if your compute is suspended. You can verify a compute's status as `Idle` or `Active` on the **Branches** page in the Neon console, and review **Suspend compute** events on the **System operations** tab of the **Monitoring** page. Additionally, if you are setting up the OpenTelemetry integration for a project with an inactive compute, you'll need to activate the compute before it can send data. To activate it, simply run a query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any connected client; a small script for this is sketched at the end of this guide.

## Troubleshooting

- **Data isn't appearing in your observability platform**

  1. **Verify your endpoint URL** - Ensure the OTLP endpoint URL is correct for your platform.
  2. **Check authentication** - Verify that your API key, bearer token, or credentials are valid and have the necessary permissions.
  3. **Confirm compute activity** - Make sure your Neon compute is active and running queries.
  4. **Review platform-specific requirements** - Some platforms may have specific configuration requirements for OTLP data ingestion.

- **404 errors on OTLP endpoint**

  If you see errors like the following in your logs:

  ```bash
  Exporting failed. Dropping data. {"error": "not retryable error: Permanent error: rpc error: code = Unimplemented desc = error exporting items, request to https://example.com/otlp/v1/metrics responded with HTTP Status Code 404"}
  ```

  This usually means your observability platform is not accepting data on the signal-specific paths automatically appended by the OpenTelemetry Collector. The Collector appends these suffixes to the base URL you configure:
  - `/v1/metrics` for metrics
  - `/v1/logs` for logs
  - `/v1/traces` for traces

  **How to fix:**
  - Double-check that your platform supports OTLP ingestion on these paths.
  - If your platform expects data directly at the base URL without suffixes, you may need to change its configuration or use a compatible OTLP gateway.

## Available metrics

For a complete list of all metrics and log fields exported by Neon, see the [Metrics and logs reference](https://neon.com/docs/reference/metrics-logs).
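As mentioned in the scale-to-zero note above, an idle compute must be activated before it sends telemetry. If you'd rather wake it from code than from the SQL Editor, here is a minimal sketch using the [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver); it assumes your connection string is stored in a `DATABASE_URL` environment variable:

```typescript
// Hedged sketch: run a trivial query to activate a suspended compute
// so that logs and metrics start flowing to your OTLP endpoint again.
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

// Any statement wakes the compute; SELECT 1 is the cheapest.
await sql`SELECT 1`;
console.log('Compute activated; telemetry export should resume shortly.');
```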
--- # Source: https://neon.com/llms/guides-outerbase.txt # Connect Outerbase to Neon > The document details the steps required to connect Outerbase to a Neon database, including configuration instructions and necessary prerequisites for seamless integration. ## Source - [Connect Outerbase to Neon HTML](https://neon.com/docs/guides/outerbase): The original HTML version of this documentation **Important** deprecated: Outerbase was [acquired by Cloudflare](https://blog.cloudflare.com/cloudflare-acquires-outerbase-database-dx/) and will shut down on October 15, 2025. This guide is deprecated and will be removed in a future update. Outerbase is an AI-powered interface for your database that allows you and your team to view, query, visualize, and edit your data using the power of AI. Outerbase supports both SQL and natural language. To learn more, see [What is Outerbase?](https://docs.outerbase.com/introduction/what-is-outerbase) ## Prerequisites The setup described below assumes that you have a Neon account and project. If not, see [Sign up for a Neon account](https://neon.com/docs/get-started/signing-up). An Outerbase account is also required, but if you do not have one, you can set one up when adding the integration. ## Add the Outerbase integration To add the Outerbase integration to your Neon project: 1. In the Neon Console, navigate to the **Integrations** page for your project. 2. Locate the **Outerbase** integration card and click **Add Outerbase**. 3. On the **Log in to Outerbase** dialog, log in with your Outerbase account or create an account if you do not have one. You can also sign in with a Google account. 4. Step through the Outerbase onboarding pages by selecting from the provided options. 5. When you reach the **How would you like to get started** page, select the **Connect a database** option. 6. On the **Create a base** page, select **Neon** from the **Connect to your cloud provider** section of the page. 7. You are directed to an **Authorize Outerbase** dialog. Click **Authorize** to give Outerbase permission to access your Neon account. 8. You are directed to a **Connect to your Neon database** page. If you have more than one Neon project, select the project you want to connect to from the **Select a database** drop-down menu. **Note**: If you use Neon's [IP Allow](https://neon.com/docs/introduction/ip-allow) feature, be sure to copy the provided Outerbase IP addresses from this page and add them to your Neon IP allowlist. See [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow) for instructions. IP Allow is a Neon [Business](https://neon.com/docs/introduction/plans#business) plan feature. 9. Select **Connect to Database**. **Important**: Wait a moment for the connection to be established. The **Connect to Database** button will change to a **Go to base** button when the connection is available. 10. Click **Go to base** to finish the setup. You are taken to Outerbase's **Get Started tour** where you are guided through the basics of working with Outerbase. For information about the tour, see [Get started with Outerbase](https://docs.outerbase.com/introduction/get-started). For a conceptual overview of Outerbase, see [Outerbase concepts](https://docs.outerbase.com/introduction/concepts). ## Outerbase support For Outerbase support and additional resources, refer to [Outerbase Community & Support](https://docs.outerbase.com/introduction/community-support). ## Remove the Outerbase integration To remove the Outerbase integration: 1.
In the Neon Console, navigate to the **Integrations** page for your project. 2. Locate the Outerbase integration and click **Manage** to open the **Outerbase integration** drawer. 3. Click **Disconnect**. 4. Click **Remove integration** to confirm your choice. ## Feedback and future improvements If you've got feature requests or feedback about what you'd like to see from Neon's Outerbase integration, let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord. --- # Source: https://neon.com/llms/guides-phoenix.txt # Connect from Phoenix to Neon > This document details the steps for establishing a connection between the Phoenix framework and a Neon database, including configuration settings and necessary code snippets specific to Neon's environment. ## Source - [Connect from Phoenix to Neon HTML](https://neon.com/docs/guides/phoenix): The original HTML version of this documentation This guide describes how to connect to Neon in a [Phoenix](https://www.phoenixframework.org) application. [Ecto](https://hexdocs.pm/ecto/3.11.2/Ecto.html) provides an API and abstractions for interacting with databases, enabling Elixir developers to query any database using similar constructs. It is assumed that you have a working installation of [Elixir](https://elixir-lang.org/install.html). To connect to Neon from Phoenix with Ecto: - [Create a Neon project](https://neon.com/docs/guides/phoenix#create-a-neon-project) - [Store your Neon credentials](https://neon.com/docs/guides/phoenix#store-your-neon-credentials) - [Create a Phoenix project](https://neon.com/docs/guides/phoenix#create-a-phoenix-project) - [Build and Run the Phoenix application](https://neon.com/docs/guides/phoenix#build-and-run-the-phoenix-application) ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Store your Neon credentials Add a `.env` file to your project directory and add your Neon connection string to it. You can find your connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ```shell DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require" ``` You will need the connection string details later in the setup. ## Create a Phoenix project 1. [Create a Phoenix project](https://hexdocs.pm/phoenix/installation.html#phoenix) if you do not have one, with the following command: ```bash # install phx.new if you haven't already # mix archive.install hex phx_new mix phx.new hello ``` When prompted, choose not to install the dependencies. 2. Update the `config/dev.exs` configuration with your Neon database connection details. Use the connection details from the Neon connection string you copied previously.
```elixir {2-5,9} config :hello, Hello.Repo, username: "neondb_owner", password: "JngqXejzvb93", hostname: "ep-rough-snowflake-a5j76tr5.us-east-2.aws.neon.tech", database: "neondb", stacktrace: true, show_sensitive_data_on_connection_error: true, pool_size: 10, ssl: [cacerts: :public_key.cacerts_get()] ``` **Note**: The `:ssl` option is required to connect to Neon. Postgrex, since v0.18, verifies the server SSL certificate, and you need to select a CA trust store using the `:cacerts` or `:cacertfile` options. You can use the OS-provided CA store by setting `cacerts: :public_key.cacerts_get()`. While not recommended, you can disable certificate verification by setting `ssl: [verify: :verify_none]`. 3. Update the `config/runtime.exs` configuration with your Neon database connection details. Use the connection details from the Neon connection string you copied previously. ```elixir {2} config :hello, Hello.Repo, ssl: [cacerts: :public_key.cacerts_get()], url: database_url, pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"), socket_options: maybe_ipv6 ``` (The `database_url` and `maybe_ipv6` variables are defined earlier in the generated `runtime.exs` file.) 4. Update the `config/test.exs` configuration with your Neon database connection details. Use the connection details from the Neon connection string you copied in the first part of the guide. ```elixir {2,3,4,8} config :hello, Hello.Repo, username: "neondb_owner", password: "JngqXejzvb93", hostname: "ep-rough-snowflake-a5j76tr5.us-east-2.aws.neon.tech", database: "with_phoenix_test#{System.get_env("MIX_TEST_PARTITION")}", pool: Ecto.Adapters.SQL.Sandbox, pool_size: System.schedulers_online() * 2, ssl: [cacerts: :public_key.cacerts_get()] ``` 5. Now, install the dependencies used in your Phoenix application using the following command: ```bash mix deps.get ``` 6. Create the application database (if it does not already exist) with the following command: ```bash mix ecto.create ``` Once that's done, move on to building and running the application in production mode. ## Build and Run the Phoenix application To compile the app in production mode, run the following command: ```bash MIX_ENV=prod mix compile ``` To compile assets for production mode, run the following command: ```bash MIX_ENV=prod mix assets.deploy ``` For each deployment, a secret key is required for encrypting and signing data. Run the following command to generate the key: ```bash mix phx.gen.secret ``` When you run the following command, you can expect to see the Phoenix application on [localhost:4001](http://localhost:4001): ```bash PORT=4001 \ MIX_ENV=prod \ DATABASE_URL="postgresql://...:...@...aws.neon.tech/neondb?sslmode=require&channel_binding=require" \ SECRET_KEY_BASE=".../..." \ mix phx.server ``` ## Source code You can find the source code for the application described in this guide on GitHub. - [Get started with Phoenix and Neon](https://github.com/neondatabase/examples/tree/main/with_phoenix) --- # Source: https://neon.com/llms/guides-platform-integration-get-started.txt # Get started with your integration > The document "Get started with your integration" guides Neon users through the initial steps of integrating their applications with the Neon platform, detailing necessary configurations and setup procedures. ## Source - [Get started with your integration HTML](https://neon.com/docs/guides/platform-integration-get-started): The original HTML version of this documentation This guide outlines the steps to integrate Neon into your platform, enabling you to offer managed Postgres databases to your users.
Whether you're developing a SaaS product, AI agent, enterprise platform, or something else entirely, this guide walks you through what's involved in setting up, configuring, and managing your Neon integration. **Tip** key considerations for a successful integration: Before you start building your integration, be sure to read [Key considerations for a successful integration](https://neon.com/docs/guides/platform-integration-get-started#key-considerations-for-a-successful-integration). ## Set up your integration Neon provides flexible options for integrating Postgres into your platform. We support the following integration options: - **OAuth**: Allows your application to interact with user accounts and perform authorized actions on their behalf. With OAuth, there's no need for direct access to user login credentials, and users can grant permissions on a variety of supported OAuth scopes. For details, see the [Neon OAuth Integration Guide](https://neon.com/docs/guides/oauth-integration), and check out the [OAuth sample app](https://github.com/neondatabase/neon-branches-visualizer) to see how it's done. - **Neon API**: Use our API to interact with the Neon platform directly. It enables `POST`, `GET`, `PATCH`, and `DELETE` operations on Neon objects such as projects, branches, databases, roles, and more. To explore available endpoints and try them from your browser, visit our [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). - **@neondatabase/toolkit for AI Agents**: If you're building an AI agent, the [@neondatabase/toolkit](https://github.com/neondatabase/toolkit) ([@neon/toolkit](https://jsr.io/@neon/toolkit) on JSR) lets you spin up a Postgres database in seconds and run SQL queries. It includes both the [Neon API Client](https://www.npmjs.com/package/@neondatabase/api-client) and the [Neon serverless driver](https://github.com/neondatabase/serverless), making it an excellent choice for AI agents that need to set up an SQL database quickly. [Learn more](https://neon.com/blog/why-neondatabase-toolkit). ## Configure limits To ensure you have control over usage and costs, Neon's APIs let you configure limits and monitor usage, enabling billing features, such as: - **Usage limits**: Define limits on consumption metrics like **storage**, **compute time**, and **data transfer**. - **Pricing Plans**: Create different pricing plans for your platform or service. For example, you can set limits on consumption metrics to define your own Free, Pro, and Enterprise plans: - **storage**: Define maximum allowed storage for each plan. - **compute time**: Cap CPU usage based on the plan your customers choose. - **data transfer**: Set limits for data transfer (egress) on each usage plan. **Tip** platform integration example: For an example of how one integrator of Neon defined usage limits based on _database instance types_, see [Koyeb Database Instance Types](https://www.koyeb.com/docs/databases#database-instance-types). You will see limits defined on compute size, compute time, stored data, written data, and egress. As your users upgrade or change their plans, you can dynamically modify their limits using the Neon API, as sketched below. This allows for real-time updates without affecting database uptime or user experience. To learn more about setting limits, see [Configure consumption limits](https://neon.com/docs/guides/consumption-limits).
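To make this concrete, here is a minimal sketch of updating a project's consumption quota with the Neon API when a user changes plans. `$NEON_API_KEY` and `$PROJECT_ID` are placeholders, and the quota field names follow the consumption limits guide linked above — confirm them against the current [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api):

```bash
# Sketch: raise a project's limits when a user upgrades their plan.
# Quotas live under project settings and can be updated with a PATCH request.
curl -X PATCH "https://console.neon.tech/api/v2/projects/$PROJECT_ID" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project": {
      "settings": {
        "quota": {
          "compute_time_seconds": 360000,
          "logical_size_bytes": 10737418240
        }
      }
    }
  }'
```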
## Monitor usage Using Neon's consumption APIs, you can query a range of account and project-level metrics to monitor usage. For example, you can: - Query the total usage across all projects, providing a comprehensive view of usage for the billing period or a specific time range spanning multiple billing periods. - Get daily, hourly, or monthly metrics across a selected time period, broken out for each individual project. - Get usage metrics for individual projects. To learn how, see [Querying consumption metrics with the API](https://neon.com/docs/guides/consumption-metrics). ## Key considerations for a successful integration - **Use a project-per-user model**: When setting up your integration, we recommend a **project-per-user** model rather than branch-per-user or database-per-user models. **What do we mean by project-per-user?** In Neon, resources such as branches, databases, roles, and computes are organized within a Neon project. When a user signs up with Neon, they start by creating a project, which includes a default branch, database, role, and compute instance. We recommend the same approach for your integration. You can learn more about Neon's project-based structure here: [Neon object hierarchy](https://neon.com/docs/manage/overview). **Why we recommend the project-per-user model**: - Neon uses a project-based structure for resource management; it's easier to follow this established, underlying model. - A project-per-user structure isolates resources and data, making it easier to manage limits and billing. - Isolation of resources and data by project helps protect against accidental data exposure among users caused by misconfigurations or privilege management errors. This approach also simplifies compliance with privacy standards like GDPR. - Isolation of resources and data by project ensures that one user's usage patterns or actions do not impact other users on your platform or service. For example, each user has dedicated compute resources, so a heavy load in one user's project will not affect others. - In Neon, databases reside on a branch, and certain operations, such as instant restore, are performed at the branch level. In a database-per-user implementation, a restore operation would impact every database on that branch. However, in a project-based structure, branch-level actions like instant restore can be isolated to a single user. - **Carefully consider limits**: When setting limits for your users, aim to strike the right balance between cost management and user flexibility. For reference, you can review how Neon defines its [pricing plans](https://neon.com/docs/introduction/plans) or how integrators like Koyeb set [usage limits](https://www.koyeb.com/docs/databases#database-instance-types). Keep in mind that when users reach their defined limits, their compute resources may be suspended, preventing further interaction with the database. Consider what should happen when a user reaches these limits. Do you want to implement advanced notifications? Should there be an upgrade path? - **Autoscaling and Scale to Zero**: Consider [autoscaling](https://neon.com/docs/introduction/autoscaling) limits and [scale to zero](https://neon.com/docs/introduction/scale-to-zero) settings for the compute instances you create for customers. Do you want to allow compute resources to scale on demand? How quickly should computes scale to zero when inactive?
For more details, see [Other consumption-related settings](https://neon.com/docs/guides/consumption-limits#other-consumption-related-settings). - **Connection limits**: Be aware of the connection limits associated with each Neon compute size, and remember that connection pooling allows for more concurrent connections. For more information, see [Connection limits](https://neon.com/docs/connect/connection-pooling#connection-limits-without-connection-pooling). - **Polling consumption data for usage reporting and billing**: Refer to our [Consumption polling FAQ](https://neon.com/docs/guides/consumption-metrics#consumption-polling-faq). - **Custom names for roles and databases**: When creating projects using the [Create project API](https://api-docs.neon.tech/reference/createproject), you can customize the default role and database names. - **Reserved names for roles and databases**: Neon reserves certain names for Postgres roles and databases. Users will not be able to use these reserved names when creating roles and databases. For more information, see [Reserved role names](https://neon.com/docs/manage/roles#reserved-role-names) and [Reserved database names](https://neon.com/docs/manage/databases#reserved-database-names). - **Postgres extension support**: We frequently receive questions about the Postgres extensions supported by Neon. See the current list of [Supported Postgres extensions](https://neon.com/docs/extensions/pg-extensions). - **Staying up to date with changes to the Neon platform**: We make every effort to proactively and directly inform integrators of Neon about updates and changes that could impact their business. In addition, you can monitor the following sources for information about the latest updates from Neon: - The [Neon Roadmap](https://neon.com/docs/introduction/roadmap) to see recent deliveries and upcoming features. - The [Neon Changelog](https://neon.com/docs/changelog) for the latest product updates. - The [Neon Newsletter](https://neon.com/blog#subscribe-form), sent weekly. - The [Neon Blog](https://neon.com/blog) - The [Neon Status Page](https://neonstatus.com/) for platform status across regions. - [RSS Feeds](https://neon.com/docs/reference/feeds) for all of the above, which can be added to your Slack channels. ## Integration support We're here to support you through every step of your integration. If you have any questions, feel free to reach out to our [Support team](https://neon.com/docs/introduction/support). If you've set up an integration arrangement with Neon, you can also contact your Neon representative. --- # Source: https://neon.com/llms/guides-platform-integration-intro.txt # Build on Neon > The Platform Integration Guide outlines the steps for integrating Neon with various platforms, detailing configuration processes and compatibility requirements specific to Neon's database services. ## Source - [Build on Neon HTML](https://neon.com/docs/guides/platform-integration-intro): The original HTML version of this documentation Learn how you can offer instant, managed Postgres databases to your users with Neon. This guide covers how to integrate your platform or service with Neon, set usage limits for your users, and more. ## Platform integration with Neon Learn about the benefits of integrating with Neon and how to set up your platform integration.
- [Platform integration page](https://neon.com/platforms): Read about the benefits of integrating with Neon - [Postgres for AI agent platforms](https://neon.com/use-cases/ai-agents): Learn how agents and codegen platforms integrate Neon as their database backend - [Meet with our team](https://neon.com/contact-sales): Request a meeting with our team to learn more ## Integrate with Neon Find out how you can integrate with Neon. - [Get started](https://neon.com/docs/guides/platform-integration-get-started): Learn the essentials for integrating with Neon - [Neon API](https://neon.com/docs/reference/api-reference): Integrate using the Neon API - [OAuth](https://neon.com/docs/guides/oauth-integration): Integrate with Neon using OAuth - [Sample OAuth app](https://github.com/neondatabase/neon-branches-visualizer): Check out a sample OAuth application - [Claimable database](https://neon.com/docs/workflows/claimable-database-integration): Manage Neon projects for users with the database claim API ## AI agent and codegen platforms Create autonomous agents that can manage and interact with your Neon databases programmatically. For more on this use case, see [Neon for AI Agent Platforms](https://neon.com/use-cases/ai-agents). - [Toolkit for AI Agents](https://github.com/neondatabase/toolkit): Spin up a Postgres database in seconds - [Database versioning](https://neon.com/docs/ai/ai-database-versioning): How AI agents and codegen platforms use Neon snapshot APIs for database versioning ## Billing Learn how to set limits for your customers and track usage. - [Configure consumption limits](https://neon.com/docs/guides/consumption-limits): Use the Neon API to set consumption limits for your customers - [Query consumption metrics](https://neon.com/docs/guides/consumption-metrics): Track usage with Neon's consumption metrics APIs --- # Source: https://neon.com/llms/guides-postgrest.txt # Create a REST API from Postgres with PostgREST > The document guides Neon users on creating a REST API from a PostgreSQL database using PostgREST, detailing the setup process and configuration steps specific to Neon's environment. ## Source - [Create a REST API from Postgres with PostgREST HTML](https://neon.com/docs/guides/postgrest): The original HTML version of this documentation What you will learn: - What is PostgREST and how it works - Setting up a Neon project for PostgREST - Running PostgREST with Docker - Adding authentication with JWT - Implementing Row-Level Security Related resources: - [PostgREST Documentation](https://docs.postgrest.org/en/v12/) - [PostgREST Tutorials](https://postgrest.org/en/v12/tutorials/tut0.html) Source code: - [PostgREST GitHub Repository](https://github.com/PostgREST/postgrest) ## What is PostgREST? PostgREST is a standalone web server that automatically turns your PostgreSQL database schema into a RESTful API. It uses the database's structure, constraints, and permissions to create API endpoints without requiring you to write any backend code. The API follows REST conventions and supports full CRUD operations, filtering, pagination, and even complex joins. This guide shows you how to set up PostgREST with a Neon Postgres database using Docker for local development. You'll learn how to configure basic read access, add authenticated endpoints with JWT tokens, and implement row-level security for fine-grained access control. ## Create a Neon project If you do not have one already, create a Neon project. 1. 
Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Set up your database From the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor), set up your database using the following queries: ```sql CREATE SCHEMA api; CREATE TABLE api.students ( id SERIAL PRIMARY KEY, first_name TEXT NOT NULL, last_name TEXT NOT NULL ); INSERT INTO api.students (first_name, last_name) VALUES ('Ada', 'Lovelace'), ('Alan', 'Turing'); CREATE ROLE anonymous NOLOGIN; GRANT anonymous TO neondb_owner; GRANT USAGE ON SCHEMA api TO anonymous; GRANT SELECT ON ALL TABLES IN SCHEMA api TO anonymous; ALTER DEFAULT PRIVILEGES IN SCHEMA api GRANT SELECT ON TABLES TO anonymous; ``` **Tip**: While this example uses SERIAL for simplicity, consider using UUID as a primary key in production systems—especially in distributed environments, when exposing identifiers in URLs, or when avoiding predictable sequences is important. ## Copy your database connection string Retrieve an unpooled database connection string — PostgREST requires a direct connection to your database. 1. Navigate to your **Project Dashboard** in the Neon Console. 2. Click the **Connect** button to open the **Connect to your database** modal. 3. Toggle **Connection pooling** to disable it — you need an unpooled connection string. 4. Copy the connection string. ## Run PostgREST Use Docker to run PostgREST locally, specifying the **unpooled** database connection string. Tab: Linux ```bash docker run --rm --net=host \ -e PGRST_DB_URI="postgresql://[user]:[password]@[neon_hostname]/[dbname]" \ -e PGRST_DB_SCHEMA="api" \ -e PGRST_DB_ANON_ROLE="anonymous" \ postgrest/postgrest ``` Tab: macOS ```bash docker run --rm \ -e PGRST_DB_URI="postgresql://[user]:[password]@[neon_hostname]/[dbname]" \ -e PGRST_DB_SCHEMA="api" \ -e PGRST_DB_ANON_ROLE="anonymous" \ -p 3000:3000 \ postgrest/postgrest ``` Tab: Windows ```bash docker run --rm \ -e PGRST_DB_URI="postgresql://[user]:[password]@[neon_hostname]/[dbname]" \ -e PGRST_DB_SCHEMA="api" \ -e PGRST_DB_ANON_ROLE="anonymous" \ -p 3000:3000 \ postgrest/postgrest ``` Once running, visit http://localhost:3000/students to confirm the API is working. You should see the two seeded student records returned as JSON. ## Add authenticated access To support full CRUD operations (inserts, updates, and deletes), you need to set up permissions in your database by creating a role for authenticated users. Here, we create an `authenticated` role, assign privileges, and grant the role to our database owner (`neondb_owner`). Run these commands from the Neon SQL Editor or an SQL client. ```sql CREATE ROLE authenticated NOLOGIN; GRANT authenticated TO neondb_owner; GRANT USAGE ON SCHEMA api TO authenticated; GRANT ALL ON ALL TABLES IN SCHEMA api TO authenticated; GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA api TO authenticated; ``` ## Run PostgREST with a JWT secret Run PostgREST again, this time with a JWT secret that PostgREST will use to verify the JWTs we attach to our requests in a later step.
Tab: Linux ```bash docker run --rm --net=host \ -e PGRST_DB_URI="postgresql://[user]:[password]@[neon_hostname]/[dbname]" \ -e PGRST_DB_SCHEMA="api" \ -e PGRST_JWT_SECRET="reallyreallyreallyreallyverysafe" \ -e PGRST_DB_ANON_ROLE="anonymous" \ postgrest/postgrest ``` Tab: macOS ```bash docker run --rm \ -e PGRST_DB_URI="postgresql://[user]:[password]@[neon_hostname]/[dbname]" \ -e PGRST_DB_SCHEMA="api" \ -e PGRST_JWT_SECRET="reallyreallyreallyreallyverysafe" \ -e PGRST_DB_ANON_ROLE="anonymous" \ -p 3000:3000 \ postgrest/postgrest ``` Tab: Windows ```bash docker run --rm \ -e PGRST_DB_URI="postgresql://[user]:[password]@[neon_hostname]/[dbname]" \ -e PGRST_DB_SCHEMA="api" \ -e PGRST_JWT_SECRET="reallyreallyreallyreallyverysafe" \ -e PGRST_DB_ANON_ROLE="anonymous" \ -p 3000:3000 \ postgrest/postgrest ``` ## Authenticate requests using JWT Now that we have defined our JWT secret above (`reallyreallyreallyreallyverysafe`), we'll create a sample JWT that's signed with this secret. If you didn't change the secret used above, you can use this JWT: ```text eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aGVudGljYXRlZCJ9.XOGSeHS8usEzEELkUl8SWOrsOLP7xWmHckRSTgpyP3o ``` **Tip**: You can use [jwt.io](https://jwt.io/) to generate your own JWT. Make sure to use the **HS256** algorithm. Now let's test different CRUD operations using standard HTTP methods. Notice that we've attached the JWT in the `Authorization` header as a bearer token. **Insert a student:** ```bash curl http://localhost:3000/students \ -X POST \ -H "Content-Type: application/json" \ -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aGVudGljYXRlZCJ9.XOGSeHS8usEzEELkUl8SWOrsOLP7xWmHckRSTgpyP3o" \ -d '{"first_name": "Grace", "last_name": "Hopper"}' ``` Refresh your browser to confirm that the new record was added. **Update a student:** ```bash curl "http://localhost:3000/students?id=eq.1" \ -X PATCH \ -H "Content-Type: application/json" \ -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aGVudGljYXRlZCJ9.XOGSeHS8usEzEELkUl8SWOrsOLP7xWmHckRSTgpyP3o" \ -d '{"first_name": "Ada I.", "last_name": "Lovelace"}' ``` Refresh your browser to see the updated record. **Delete a student:** ```bash curl "http://localhost:3000/students?id=eq.3" \ -X DELETE \ -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aGVudGljYXRlZCJ9.XOGSeHS8usEzEELkUl8SWOrsOLP7xWmHckRSTgpyP3o" ``` Refresh once more to confirm the record was deleted. ## Use Row-Level Security (RLS) PostgREST supports Postgres RLS for fine-grained access control. Here's an example policy to restrict access to a user's own records. Run these statements on your database in the Neon SQL Editor or an SQL client. ```sql ALTER TABLE api.students ENABLE ROW LEVEL SECURITY; CREATE POLICY students_policy ON api.students FOR ALL TO authenticated USING (id = (SELECT current_setting('request.jwt.claims', true)::json->>'sub')::integer) WITH CHECK (id = (SELECT current_setting('request.jwt.claims', true)::json->>'sub')::integer); ``` The JWT used in the CRUD examples above contains a payload with `{"role": "authenticated"}`, which tells PostgREST to use the `authenticated` role for those requests. In a real application, you would want to: - Generate tokens with proper expiration times - Include user-specific claims in the JWT (most likely, a "sub" field which corresponds to users' IDs) - Implement a proper authentication server/service (or use a third-party managed auth provider) Now let's test this with a JWT that includes a user ID.
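If you'd rather mint tokens locally than paste payloads into jwt.io, here is a minimal sketch that assembles an HS256 JWT with `openssl`, using the same demo secret as above; with the payload shown here, it should reproduce the token given below:

```bash
# base64url-encode stdin: standard base64, then swap '+/' for '-_' and drop '=' padding
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"role":"authenticated","sub":"1"}' | b64url)

# Sign "header.payload" with the shared secret (HMAC-SHA256)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac "reallyreallyreallyreallyverysafe" -binary | b64url)

echo "$header.$payload.$signature"
```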
We'll use a payload that identifies the student by carrying their ID in the `sub` claim, signed with the same JWT secret from above: ``` { "role": "authenticated", "sub": "1" } ``` Here's the new token: ```text eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aGVudGljYXRlZCIsInN1YiI6IjEifQ.U_EgeU0y0pAM5cTsMXndJe_cR1vG5Vf9dq4DkqfMAxs ``` Now, run this command with the new token: ```bash $ curl "http://localhost:3000/students" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYXV0aGVudGljYXRlZCIsInN1YiI6IjEifQ.U_EgeU0y0pAM5cTsMXndJe_cR1vG5Vf9dq4DkqfMAxs" [{"id":1,"first_name":"Ada I.","last_name":"Lovelace"}] ``` Because the `students` table has an RLS policy keyed to the student's ID, the request returns only that student's own record. ## Summary The examples shown above are simple, but they illustrate how PostgREST works. With Neon and PostgREST, you can instantly turn your Postgres database into a REST API—no backend code required. This setup is ideal for rapid prototyping, internal tools, or even production workloads where you want to focus on your data and business logic rather than boilerplate API code. ## Next steps Now that you have a basic PostgREST API running with Neon, here are some things you can try next: - **Explore advanced querying**: Implement [filtering](https://docs.postgrest.org/en/v12/api.html#horizontal-filtering-rows), [ordering](https://docs.postgrest.org/en/v12/api.html#ordering), and [pagination](https://docs.postgrest.org/en/v12/api.html#limits-and-pagination) in your API requests - **Add resource embedding**: Use [resource embedding](https://docs.postgrest.org/en/v12/api.html#resource-embedding) to fetch related data in a single request - **Implement stored procedures**: Expose [PostgreSQL functions](https://docs.postgrest.org/en/v12/api.html#stored-procedures) as API endpoints for complex operations - **Example applications**: Explore [example applications](https://docs.postgrest.org/en/v12/ecosystem.html#example-apps) built with PostgREST to get inspiration for your own projects - **Try out templates**: These [templates](https://docs.postgrest.org/en/v12/ecosystem.html#templates) combine PostgREST with various frontend technologies --- # Source: https://neon.com/llms/guides-prisma-migrations.txt # Schema migration with Neon Postgres and Prisma ORM > The document outlines the process of performing schema migrations in Neon Postgres using Prisma ORM, detailing steps for setting up the environment, executing migrations, and managing database changes effectively. ## Source - [Schema migration with Neon Postgres and Prisma ORM HTML](https://neon.com/docs/guides/prisma-migrations): The original HTML version of this documentation [Prisma](https://www.prisma.io/) is an open-source ORM for Node.js and TypeScript, known for its ease of use and focus on type safety. It supports many databases, including Postgres, and provides a robust system for managing database schemas and migrations. This guide walks you through using `Prisma` ORM with a `Neon` Postgres database in a JavaScript project. We'll create a Node.js application, set up Prisma, and show how to run migrations using Prisma. ## Prerequisites To follow along with this guide, you will need: - A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally. ## Setting up your Neon database ### Initialize a new project 1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section. 2. Select an existing project or click the `New Project` button to create a new one. ### Retrieve your Neon database connection string You can find your Neon database connection string by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Keep your connection string handy for later use. ## Setting up the Node application ### Create a new Node project We'll create a simple catalog with API endpoints that query the database for authors and a list of their books. Run the following commands in your terminal to set up a new project using `Express.js`: ```bash mkdir neon-prisma-guide && cd neon-prisma-guide npm init -y && touch .env index.js npm pkg set type="module" && npm pkg set scripts.start="node index.js" npm install express ``` To use the Prisma ORM for making queries, install the `@prisma/client` package and the Prisma CLI. The CLI is only needed as a development dependency to generate the Prisma Client for the given schema. ```bash npm install @prisma/client && npm install prisma --save-dev npx prisma init ``` These commands create a new `prisma` folder in your project with a `schema.prisma` file, where we will define the database schema for our application. ### Configure Prisma to use your Neon database Open the `prisma/schema.prisma` file and update the `datasource db` block with your Neon database connection details: ```prisma datasource db { provider = "postgresql" url = env("DATABASE_URL") } ``` Add the `DATABASE_URL` environment variable to your `.env` file, which you'll use to connect to your Neon database. Use the connection string that you obtained from the Neon Console earlier: ```bash # .env DATABASE_URL=NEON_DATABASE_CONNECTION_STRING ``` ### Define the Database schema In the `prisma/schema.prisma` file, add the following model definitions: ```prisma model Author { @@map("authors") id Int @id @default(autoincrement()) name String bio String? createdAt DateTime @default(now()) @map("created_at") books Book[] } model Book { @@map("books") id Int @id @default(autoincrement()) title String authorId Int @map("author_id") createdAt DateTime @default(now()) @map("created_at") author Author @relation(fields: [authorId], references: [id]) } ``` Two models are defined above: `Author`, which contains information about authors, and `Book`, for details about published books. The `Book` model includes a foreign key that references the `Author` model. ### Generate Prisma client and run migrations To create and apply migrations based on your schema, run the following command in the terminal: ```bash npx prisma migrate dev --name init ``` This command generates migration files written in SQL corresponding to our schema definitions and applies them to create the tables in your Neon database. We used the `--name` flag to name the migration. The command also generates a Prisma Client that is aware of our schemas: ```javascript import { PrismaClient } from '@prisma/client'; const prisma = new PrismaClient(); ``` We'll use this client later to interact with the database.
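Before seeding, you can optionally verify what the migration did. `prisma migrate status` reports which migrations have been applied, and the generated SQL is plain text on disk (the `*_init` folder name below comes from the `--name init` flag used above):

```bash
# Check which migrations exist and whether they have been applied to the database
npx prisma migrate status

# Inspect the SQL Prisma generated for the initial migration
cat prisma/migrations/*_init/migration.sql
```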
### Seed the Database To test that the application works, we need to add some example data to our tables. Create a `seed.js` file in your project and add the following code to it: ```javascript // seed.js import { PrismaClient } from '@prisma/client'; const prisma = new PrismaClient(); const seed = async () => { const authors = [ { name: 'J.R.R. Tolkien', bio: 'The creator of Middle-earth and author of The Lord of the Rings.', books: { create: [ { title: 'The Hobbit' }, { title: 'The Fellowship of the Ring' }, { title: 'The Two Towers' }, { title: 'The Return of the King' }, ], }, }, { name: 'George R.R. Martin', bio: 'The author of the epic fantasy series A Song of Ice and Fire.', books: { create: [{ title: 'A Game of Thrones' }, { title: 'A Clash of Kings' }], }, }, { name: 'J.K. Rowling', bio: 'The creator of the Harry Potter series.', books: { create: [ { title: "Harry Potter and the Philosopher's Stone" }, { title: 'Harry Potter and the Chamber of Secrets' }, ], }, }, ]; for (const author of authors) { await prisma.author.create({ data: author, }); } }; async function main() { try { await seed(); console.log('Seeding completed'); } catch (error) { console.error('Error during seeding:', error); process.exit(1); } finally { await prisma.$disconnect(); } } main(); ``` Run the seed script to populate the database with the initial data: ```bash node seed.js ``` You should see the `Seeding completed` message in the terminal, indicating that the seed data was inserted into the database. ### Implementing the API Endpoints Now that the database is set up and populated with data, we can implement the API to query the authors and their books. We'll use [Express](https://expressjs.com/), which is a minimal web application framework for Node.js. Create an `index.js` file at the project root, and add the following code to set up your Express server: ```javascript import express from 'express'; import { PrismaClient } from '@prisma/client'; const prisma = new PrismaClient(); const app = express(); const port = process.env.PORT || 3000; app.get('/', async (req, res) => { res.send('Hello World! This is a book catalog.'); }); app.get('/authors', async (req, res) => { const authors = await prisma.author.findMany(); res.json(authors); }); app.get('/books/:author_id', async (req, res) => { const authorId = parseInt(req.params.author_id); const books = await prisma.book.findMany({ where: { authorId: authorId, }, }); res.json(books); }); // Start the server app.listen(port, () => { console.log(`Server running on http://localhost:${port}`); }); ``` This code sets up a simple API with two endpoints: `/authors` and `/books/:author_id`. The `/authors` endpoint returns a list of all the authors, and the `/books/:author_id` endpoint returns a list of books written by the author with the given `author_id`. Run the application using the following command: ```bash npm run start ``` This will start the server at `http://localhost:3000`. Navigate to `http://localhost:3000/authors` and `http://localhost:3000/books/1` in your browser to check that the API works as expected. ## Migration after a schema change To demonstrate how to execute a schema change, we'll add a new column to the `authors` table, listing the country of origin for each author. ### Update the Prisma model Modify the `Author` model in the `prisma/schema.prisma` file to add the new `country` field: ```prisma model Author { @@map("authors") id Int @id @default(autoincrement()) name String bio String? country String?
createdAt DateTime @default(now()) @map("created_at") books Book[] } ``` ### Generate and apply the migration Run the following command to generate a new migration and apply it to the database: ```bash npx prisma migrate dev --name add-country ``` This command generates a new migration file to add the new field and applies it to the database. It also updates the Prisma client to reflect the change in the schema. ### Verify the migration To verify the migration, run the application again: ```bash npm run start ``` You can navigate to `http://localhost:3000/authors` in your browser to check that each author entry has a `country` field, currently set to `null`. ## Conclusion In this guide, we set up a new JavaScript project using `Express.js` and `Prisma` ORM and connected it to a `Neon` Postgres database. We created a schema for the database, generated and ran migrations, and implemented API endpoints to query the database. ## Source code You can find the source code for the application described in this guide on GitHub. - [Migrations with Neon and Prisma](https://github.com/neondatabase/guide-neon-prisma): Run Neon database migrations using Prisma ## Resources For more information on the tools used in this guide, refer to the following resources: - [Prisma ORM](https://www.prisma.io/) - [Express.js](https://expressjs.com/) --- # Source: https://neon.com/llms/guides-prisma.txt # Connect from Prisma to Neon > The document outlines the steps for establishing a connection between Prisma and Neon, detailing configuration settings and code examples to facilitate seamless integration within Neon's environment. ## Source - [Connect from Prisma to Neon HTML](https://neon.com/docs/guides/prisma): The original HTML version of this documentation Prisma is an open-source, next-generation ORM that lets you manage and interact with your database. This guide covers the following topics: - [Connect to Neon from Prisma](https://neon.com/docs/guides/prisma#connect-to-neon-from-prisma) - [Use connection pooling with Prisma](https://neon.com/docs/guides/prisma#use-connection-pooling-with-prisma) - [Use the Neon serverless driver with Prisma](https://neon.com/docs/guides/prisma#use-the-neon-serverless-driver-with-prisma) - [Connection timeouts](https://neon.com/docs/guides/prisma#connection-timeouts) - [Connection pool timeouts](https://neon.com/docs/guides/prisma#connection-pool-timeouts) - [JSON protocol for large Prisma schemas](https://neon.com/docs/guides/prisma#json-protocol-for-large-prisma-schemas) ## Connect to Neon from Prisma To establish a basic connection from Prisma to Neon, perform the following steps: 1. Retrieve your Neon connection string. You can find it by clicking the **Connect** button on your **Project Dashboard**. Select a branch, a user, and the database you want to connect to. A connection string is constructed for you. The connection string includes the user name, password, hostname, and database name. 2. Add the following lines to your `prisma/schema.prisma` file to identify the data source and database URL: ```typescript datasource db { provider = "postgresql" url = env("DATABASE_URL") } ``` 3. Add a `DATABASE_URL` variable to your `.env` file and set it to the Neon connection string that you copied in the previous step. We also recommend adding `?sslmode=require&channel_binding=require` to the end of the connection string to ensure a [secure connection](https://neon.com/docs/connect/connect-securely).
Your setting will appear similar to the following: ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require" ``` **Important**: If you plan to use Prisma Client from a serverless function, see [Use connection pooling with Prisma](https://neon.com/docs/guides/prisma#use-connection-pooling-with-prisma) for additional configuration instructions. To adjust your connection string to avoid connection timeout issues, see [Connection timeouts](https://neon.com/docs/guides/prisma#connection-timeouts). ## Use connection pooling with Prisma Serverless functions can require a large number of database connections as demand increases. If you use serverless functions in your application, we recommend that you use a pooled Neon connection string, as shown: ```ini # Pooled Neon connection string DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require" ``` A pooled Neon connection string adds `-pooler` to the endpoint ID, which tells Neon to use a pooled connection. You can add `-pooler` to your connection string manually or copy a pooled connection string from the **Connect to your database** modal — click **Connect** on your Project Dashboard to open the modal. ### Connection pooling with Prisma Migrate Prior to Prisma ORM 5.10, attempting to run Prisma Migrate commands, such as `prisma migrate dev`, with a pooled connection caused the following error: ```text Error undefined: Database error Error querying the database: db error: ERROR: prepared statement "s0" already exists ``` To avoid this issue, you can define a direct connection to the database for Prisma Migrate or you can upgrade Prisma ORM to 5.10 or higher. #### Using a direct connection to the database You can configure a direct connection while allowing applications to use Prisma Client with a pooled connection by adding a `directUrl` property to the datasource block in your `schema.prisma` file. For example: ```typescript datasource db { provider = "postgresql" url = env("DATABASE_URL") directUrl = env("DIRECT_URL") } ``` **Note**: The `directUrl` property is available in Prisma version [4.10.0](https://github.com/prisma/prisma/releases/tag/4.10.0) and higher. For more information about this property, refer to the [Prisma schema reference](https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#fields). After adding the `directUrl` property to your `schema.prisma` file, update the `DATABASE_URL` and `DIRECT_URL` variable settings in your `.env` file: 1. Set `DATABASE_URL` to the pooled connection string for your Neon database. Applications that require a pooled connection should use this connection. 2. Set `DIRECT_URL` to the direct (non-pooled) connection string. This is the direct connection to the database required by Prisma Migrate. Other Prisma CLI operations may also require a direct connection.
When you finish updating your `.env` file, your variable settings should appear similar to the following: ```ini # Pooled Neon connection string DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require" # Unpooled Neon connection string DIRECT_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require" ``` #### Using a pooled connection with Prisma Migrate With Prisma ORM 5.10 or higher, you can use a pooled Neon connection string with Prisma Migrate. In this case, you only need to define the pooled connection string in your `schema.prisma` file. Adding a `directUrl` property to the datasource block in your `schema.prisma` file and defining a `DIRECT_URL` setting in your environment file are not required. Your complete configuration will look like this: `schema.prisma` file: ```typescript datasource db { provider = "postgresql" url = env("DATABASE_URL") } ``` `.env` file: ```ini # Pooled Neon connection string DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require" ``` ## Use the Neon serverless driver with Prisma The Neon serverless driver is a low-latency Postgres driver for JavaScript and TypeScript that lets you query data from serverless and edge environments. For more information about the driver, see [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver). To set up Prisma with the Neon serverless driver, use the Prisma driver adapter. This adapter allows you to choose a different database driver than Prisma's default driver for communicating with your database. The Prisma driver adapter feature is available in **Preview** in Prisma version 5.4.2 and later. To get started, enable the `driverAdapters` Preview feature flag in your `schema.prisma` file, as shown: ```javascript generator client { provider = "prisma-client-js" previewFeatures = ["driverAdapters"] } datasource db { provider = "postgresql" url = env("DATABASE_URL") } ``` Next, generate the Prisma Client: ```bash npx prisma generate ``` Install the Prisma adapter for Neon, the Neon serverless driver, and the `ws` package: ```bash npm install ws @prisma/adapter-neon @neondatabase/serverless npm install -D @types/ws ``` Update your Prisma Client instance: ```javascript import 'dotenv/config'; import { PrismaClient } from '@prisma/client'; import { PrismaNeon } from '@prisma/adapter-neon'; import { neonConfig } from '@neondatabase/serverless'; import ws from 'ws'; neonConfig.webSocketConstructor = ws; // To work in edge environments (Cloudflare Workers, Vercel Edge, etc.), enable querying over fetch // neonConfig.poolQueryViaFetch = true // Type definitions // declare global { // var prisma: PrismaClient | undefined // } const connectionString = `${process.env.DATABASE_URL}`; const adapter = new PrismaNeon({ connectionString }); const prisma = global.prisma || new PrismaClient({ adapter }); if (process.env.NODE_ENV === 'development') global.prisma = prisma; export default prisma; ``` You can now use Prisma Client as you normally would with full type-safety. Prisma Migrate, introspection, and Prisma Studio will continue working as before, using the Neon connection string defined by the `DATABASE_URL` variable in your `schema.prisma` file.
**Note**: If you encounter a `TypeError: bufferUtil.mask is not a function` error when building your application, this is likely due to a missing dependency that the `ws` module requires when using `Client` and `Pool` constructs. You can address this requirement by installing the `bufferutil` package: ```shell npm i -D bufferutil ``` ## Connection timeouts A connection timeout that occurs when connecting from Prisma to Neon causes an error similar to the following: ```text Error: P1001: Can't reach database server at `ep-white-thunder-826300.us-east-2.aws.neon.tech`:`5432` Please make sure your database server is running at `ep-white-thunder-826300.us-east-2.aws.neon.tech`:`5432`. ``` This error most likely means that the Prisma query engine timed out before the Neon compute was activated. A Neon compute has two main states: _Active_ and _Idle_. Active means that the compute is currently running. If there is no query activity for 5 minutes, Neon places a compute into an idle state by default. When you connect to an idle compute from Prisma, Neon automatically activates it. Activation typically happens within a few seconds, but the added latency can result in a connection timeout. To address this issue, you can adjust your Neon connection string by adding a `connect_timeout` parameter. This parameter defines the maximum number of seconds to wait for a new connection to be opened. The default value is 5 seconds. A higher setting may provide the time required to avoid connection timeouts. For example: ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require&connect_timeout=10" ``` **Note**: A `connect_timeout` setting of 0 means no timeout. ## Connection pool timeouts Another possible cause of timeouts is [Prisma's connection pool](https://www.prisma.io/docs/concepts/components/prisma-client/working-with-prismaclient/). The Prisma query engine manages a pool of connections. The pool is instantiated when a Prisma Client opens its first connection to the database. For an explanation of how this connection pool functions, read [How the connection pool works](https://www.prisma.io/docs/concepts/components/prisma-client/working-with-prismaclient/connection-pool#how-the-connection-pool-works), in the _Prisma documentation_. The default size of the Prisma connection pool is determined by the following formula: `num_physical_cpus * 2 + 1`, where `num_physical_cpus` represents the number of physical CPUs on the machine where your application runs. For example, if your machine has four physical CPUs, your connection pool will contain nine connections (4 \* 2 + 1 = 9). As mentioned in the [Prisma documentation](https://www.prisma.io/docs/concepts/components/prisma-client/working-with-prismaclient/connection-pool#default-connection-pool-size), this formula is a good starting point, but the recommended connection limit also depends on your deployment paradigm — particularly if you are using serverless. You can specify the number of connections explicitly by setting the `connection_limit` parameter in your database connection URL. For example: ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require&connect_timeout=15&connection_limit=20" ``` For configuration guidance, refer to Prisma's [Recommended connection pool size guide](https://www.prisma.io/docs/guides/performance-and-optimization/connection-management#recommended-connection-pool-size).
In addition to pool size, you can configure a `pool_timeout` setting. This setting defines the amount of time the Prisma Client query engine has to process a query before it throws an exception and moves on to the next query in the queue. The default `pool_timeout` setting is 10 seconds. If you still experience timeouts after increasing the `connection_limit` setting, you can try setting the `pool_timeout` parameter to a value larger than the default (10 seconds). For configuration guidance, refer to [Increasing the pool timeout](https://www.prisma.io/docs/guides/performance-and-optimization/connection-management#increasing-the-pool-timeout), in the _Prisma documentation_. ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require&connect_timeout=15&connection_limit=20&pool_timeout=15" ``` You can disable pool timeouts by setting `pool_timeout=0`. ## JSON protocol for large Prisma schemas If you are working with a large Prisma schema, note that Prisma's `jsonProtocol` wire protocol expresses queries using `JSON` instead of GraphQL. The JSON implementation uses less CPU and memory, which can help reduce latencies when connecting from Prisma. `jsonProtocol` is the default wire protocol as of Prisma version 5.0.0. If you run Prisma version 5.0.0 or later, you are already using the new protocol. If you run Prisma version 4 or earlier, you must use a feature flag to enable the `jsonProtocol`. You can read more about this feature here: [jsonProtocol changes](https://www.prisma.io/docs/guides/upgrade-guides/upgrading-versions/upgrading-to-prisma-5/jsonprotocol-changes). ## Learn more For additional information about connecting from Prisma, refer to the following resources in the _Prisma documentation_: - [Connection management](https://www.prisma.io/docs/guides/performance-and-optimization/connection-management) - [Database connection issues](https://www.prisma.io/dataguide/managing-databases/database-troubleshooting#database-connection-issues) - [PostgreSQL database connector](https://www.prisma.io/docs/concepts/database-connectors/postgresql) - [Increasing the pool timeout](https://www.prisma.io/docs/guides/performance-and-optimization/connection-management#increasing-the-pool-timeout) - [Schema migration with Neon Postgres and Prisma ORM](https://neon.com/docs/guides/prisma-migrations)
Organizations can still use project collaboration when needed. For example, you can allow an external contractor to contribute to a specific project without making them a full organization member.

## Set up Neon accounts

You can invite anyone outside your organization to collaborate on your Neon project. To collaborate on a project, the user must have a Neon account, which can be a Neon Free plan or a paid plan account.

1. If the user does not have a Neon account, ask them to sign up. You can provide your users with the following instructions: [Sign up](https://neon.com/docs/get-started/signing-up).
2. Request the email address the user signed up with. If the user signed up using Google or GitHub, request the email address associated with that account.

## Invite collaborators

After a user has provided you with the email address associated with their Neon account, you can invite them to your project.

**To invite a collaborator to your project:**

1. Navigate to the [Neon Console](https://console.neon.tech/app/projects).
2. Select the project you want to invite collaborators to.
3. In the Neon **Settings**, choose **Collaborators** from the sidebar.
4. Click **Invite**. In the modal that pops up, enter the email address of the person you'd like to invite. You can enter multiple emails separated by commas.
5. Click **Invite** in the modal to confirm; the specified email(s) will be added to the list of **Collaborators**.
6. Review the list of collaborators to verify the user was successfully added.

The invited users are granted access to the project, but they do not have privileges to delete the project. They can also invite other users to collaborate on the project. When they log into Neon, the project appears under the **Projects** section, listed as **Shared with me**. An email is sent to the invited users informing them of the project invitation, including an **Open project** link for easy access.

**Note**: If invite emails aren't received, they may be in spam or quarantined. Recipients should check these folders and mark Neon emails as safe.

## Project collaboration limits

When you invite a user to your project, they operate under _your_ project allowances so long as they're using your project. For example, a Neon Free plan user is limited to 10 branches per project, but when they are using your project, your plan's allowances apply instead. For teams working together frequently across multiple projects, [organization](https://neon.com/docs/manage/organizations) membership offers a better collaboration experience.

### Access for collaborators via the Neon API or CLI

Collaborators you invite to a project can access it from all supported Neon interfaces, including the Neon Console, [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), and [Neon CLI](https://neon.com/docs/reference/neon-cli).

Collaborators can use their own API key to access the project via the Neon API. See [Manage API keys](https://neon.com/docs/manage/api-keys) for details on generating an API key.

When using the Neon CLI, collaborators authenticate as they normally would. They can access both their own Neon projects and any projects they are collaborating on. See [Neon CLI — Connect](https://neon.com/docs/reference/cli-install#connect) for authentication instructions.

## Billing for projects with collaborators

All costs associated with a project are charged to the Neon account that owns it.
For example, if you invite someone to collaborate on your project, any usage incurred by that collaborator will be billed to your Neon account.

---

# Source: https://neon.com/llms/guides-protected-branches.txt

# Protected branches

> The "Protected branches" documentation outlines how to configure and manage protected branches in Neon, detailing the steps to restrict changes and enforce branch protection rules to maintain code integrity.

## Source

- [Protected branches HTML](https://neon.com/docs/guides/protected-branches): The original HTML version of this documentation

Neon's protected branches feature implements a series of protections:

- Protected branches cannot be deleted.
- Protected branches cannot be [reset](https://neon.com/docs/manage/branches#reset-a-branch-from-parent).
- Projects with protected branches cannot be deleted.
- Computes associated with a protected branch cannot be deleted.
- New passwords are automatically generated for Postgres roles on branches created from protected branches. [See below](https://neon.com/docs/guides/protected-branches#new-passwords-generated-for-postgres-roles-on-child-branches).
- With additional configuration steps, you can apply IP Allow restrictions to protected branches only. The [IP Allow](https://neon.com/docs/introduction/ip-allow) feature is available on the Neon [Scale](https://neon.com/docs/introduction/plans) plan. See [below](https://neon.com/docs/guides/protected-branches#how-to-apply-ip-restrictions-to-protected-branches).
- Protected branches are not [archived](https://neon.com/docs/guides/branch-archiving) due to inactivity.

The protected branches feature is available on Neon [paid plans](https://neon.com/docs/introduction/plans).

- The **Launch** plan supports up to 2 protected branches
- The **Scale** plan supports up to 5 protected branches

## Set a branch as protected

To set a branch as protected:

1. In the Neon Console, select a project.
2. Select **Branches** to view the branches for the project.
3. Select a branch from the table. In this example, we'll configure the default branch `production` as a protected branch.
4. On the branch page, click **Protect**.
5. In the **Set as protected** confirmation dialog, click **Set as protected** to confirm your selection.

Your branch is now designated as protected, as indicated by the protected branch shield icon. The protected branch designation also appears on your **Branches** page.

## New passwords generated for Postgres roles on child branches

When you create a branch in Neon, it includes all Postgres databases and roles from the parent branch. By default, Postgres roles on the child branch have the same passwords as on the parent branch. However, this does not apply to protected branches. When you create a child branch from a protected branch, new passwords are generated for the matching Postgres roles on the child branch.

This behavior is designed to prevent the exposure of passwords that could be used to access your protected branch. For example, if you have designated a production branch as protected, the automatic password change for child branches ensures that you can create child branches for development or testing without risking access to data on your production branch.

Note that resetting or restoring a child branch from a protected parent branch preserves passwords for matching Postgres roles on the child branch. Refer to the feature notes below for more.
**Important** Feature notes:

- The "new password" feature for child branches was released on July 31, 2024. If you have existing CI scripts that create branches from protected branches, be aware that passwords for matching Postgres roles on those newly created branches will now differ. If you depend on those passwords being the same, you'll need to make adjustments to get the correct connection details for those branches.
- After a branch is created, the up-to-date connection string is returned in the output of the [Create Branch GitHub Action](https://neon.com/docs/guides/branching-github-actions#create-branch-action).
- The [Reset Branch GitHub Action](https://neon.com/docs/guides/branching-github-actions#reset-from-parent-action) also outputs connection string values, in case you are using this action in your workflows.
- The Neon CLI supports a [connection-string](https://neon.com/docs/reference/cli-connection-string) command for retrieving a branch's connection string.
- Prior to September 6, 2024, resetting or restoring a child branch from a protected parent branch restored passwords for matching Postgres roles on the child branch to those used on the protected parent branch. As of September 6, 2024, passwords for matching Postgres roles on the child branch are preserved when resetting or restoring a child branch from a protected parent branch.

## How to apply IP restrictions to protected branches

On plans that support it, you can use the protected branches feature in combination with Neon's [IP Allow](https://neon.com/docs/introduction/ip-allow) feature to apply IP access restrictions to protected branches only.

The basic setup steps are:

1. [Define an IP allowlist for your project](https://neon.com/docs/guides/protected-branches#define-an-ip-allowlist-for-your-project)
2. [Restrict IP access to protected branches only](https://neon.com/docs/guides/protected-branches#restrict-ip-access-to-protected-branches-only)
3. [Set a branch as protected](https://neon.com/docs/guides/protected-branches#set-a-branch-as-protected) (if you have not done so already)

### Define an IP allowlist for your project

Tab: Neon Console

To configure an allowlist:

1. Select a project in the Neon Console.
2. On the Project Dashboard, select **Settings**.
3. Select **Network Security**.
4. Under **IP Allow**, specify the IP addresses you want to permit. Separate multiple entries with commas.
5. Click **Save changes**.

Tab: CLI

The [Neon CLI ip-allow command](https://neon.com/docs/reference/cli-ip-allow) supports IP Allow configuration. For example, the following `add` command adds IP addresses to the allowlist for an existing Neon project. Separate multiple entries with a space; no other delimiter is required.

```bash
neon ip-allow add 203.0.113.0 203.0.113.1
┌─────────────────────┬─────────────────────┬──────────────┬────────────────┐
│ Id                  │ Name                │ IP Addresses │ Protected Only │
├─────────────────────┼─────────────────────┼──────────────┼────────────────┤
│ wispy-haze-26469780 │ wispy-haze-26469780 │ 203.0.113.0  │ false          │
│                     │                     │ 203.0.113.1  │                │
└─────────────────────┴─────────────────────┴──────────────┴────────────────┘
```

To apply an IP allowlist to protected branches only, you can use the `--protected-only` option:

```bash
neon ip-allow add 203.0.113.1 --protected-only
```

To reverse that setting, use `--protected-only false`.
```bash
neon ip-allow add 203.0.113.1 --protected-only false
```

Tab: API

The [Create project](https://api-docs.neon.tech/reference/createproject) and [Update project](https://api-docs.neon.tech/reference/updateproject) methods support **IP Allow** configuration. For example, the following API call configures **IP Allow** for an existing Neon project. Separate multiple entries with commas. Each entry must be quoted. You can set the `protected_branches_only` option to `true` to apply the allowlist to protected branches only, or `false` to apply it to all branches in your Neon project.

```bash
curl -X PATCH \
  https://console.neon.tech/api/v2/projects/falling-salad-31638542 \
  -H 'accept: application/json' \
  -H "authorization: Bearer $NEON_API_KEY" \
  -H 'content-type: application/json' \
  -d '
{
  "project": {
    "settings": {
      "allowed_ips": {
        "protected_branches_only": true,
        "ips": ["203.0.113.0", "203.0.113.1"]
      }
    }
  }
}
' | jq
```

For details about specifying IP addresses, see [How to specify IP addresses](https://neon.com/docs/manage/projects#how-to-specify-ip-addresses).

### Restrict IP access to protected branches only

After defining an IP allowlist, the next step is to select the **Restrict access to protected branches only** option. This option removes IP restrictions from _all branches_ in your Neon project and applies them to protected branches only. After you've selected the protected branches option, click **Save changes** to apply the new configuration.

## Remove branch protection

To remove the protected branch designation, select **Set as unprotected** from the **More** drop-down menu on the branch page.

---

# Source: https://neon.com/llms/guides-python.txt

# Connect a Python application to Neon Postgres

> The document outlines the steps to connect a Python application to a Neon database using the Psycopg library, detailing the necessary configurations and code examples for seamless integration.

## Source

- [Connect a Python application to Neon Postgres HTML](https://neon.com/docs/guides/python): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from a Python application using popular Postgres drivers. We'll cover [Psycopg 3](https://www.psycopg.org/psycopg3/docs/), the latest generation of the popular synchronous adapter, its predecessor [Psycopg 2 (psycopg2)](https://pypi.org/project/psycopg2-binary/), and [asyncpg](https://pypi.org/project/asyncpg/), an asynchronous adapter for use with `asyncio`. You'll learn how to connect to your Neon database from a Python application and perform basic Create, Read, Update, and Delete (CRUD) operations.

## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup).
- Python 3.8 or later. If you do not have Python installed, install it from the [Python website](https://www.python.org/downloads/).

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech).
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your Python application.

## Create a Python project

For your Python project, create a project directory, set up a virtual environment, and install the required libraries.

1.
Create a project directory and change into it. ```bash mkdir neon-python-quickstart cd neon-python-quickstart ``` > Open the directory in your preferred code editor (e.g., VS Code, PyCharm). 2. Create and activate a Python virtual environment. This isolates your project's dependencies from your system's Python environment. Tab: MacOS / Linux / Windows Subsystem for Linux (WSL) ```bash # Create a virtual environment python3 -m venv venv # Activate the virtual environment source venv/bin/activate ``` Tab: Windows ```bash # Create a virtual environment python -m venv venv # Activate the virtual environment .\venv\Scripts\activate ``` 3. Install the required libraries using `pip`. - `psycopg`: The modern, synchronous database adapter for connecting to Postgres (Psycopg 3). - `psycopg2-binary`: An older, widely-used synchronous database adapter. - `asyncpg`: The asynchronous database adapter for connecting to Postgres. - `python-dotenv`: A helper library to manage environment variables. ```bash pip install "psycopg[binary]" psycopg2-binary asyncpg python-dotenv ``` > Install the library that best fits your project needs. This guide provides examples for all three. ## Store your Neon connection string Create a file named `.env` in your project's root directory. This file will securely store your database connection string. 1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**. 2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal. 3. Copy the connection string, which includes your password. 4. Add the connection string to your `.env` file as shown below. ```text DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require" ``` > Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your actual database credentials. ## Examples This section provides example Python scripts that demonstrate how to connect to your Neon database and perform basic operations such as [creating a table](https://neon.com/docs/guides/python#create-a-table-and-insert-data), [reading data](https://neon.com/docs/guides/python#read-data), [updating data](https://neon.com/docs/guides/python#update-data), and [deleting data](https://neon.com/docs/guides/python#delete-data). ### Create a table and insert data In your project directory, create a file named `create_table.py` and add the code for your preferred library. This script connects to your Neon database, creates a table named `books`, and inserts some sample data into it. Tab: psycopg (v3) ```python import os import psycopg from dotenv import load_dotenv # Load environment variables from .env file load_dotenv() # Get the connection string from the environment variable conn_string = os.getenv("DATABASE_URL") try: with psycopg.connect(conn_string) as conn: print("Connection established") # Open a cursor to perform database operations with conn.cursor() as cur: # Drop the table if it already exists cur.execute("DROP TABLE IF EXISTS books;") print("Finished dropping table (if it existed).") # Create a new table cur.execute(""" CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); """) print("Finished creating table.") # Insert a single book record cur.execute( "INSERT INTO books (title, author, publication_year, in_stock) VALUES (%s, %s, %s, %s);", ("The Catcher in the Rye", "J.D. 
Salinger", 1951, True), ) print("Inserted a single book.") # Data to be inserted books_to_insert = [ ("The Hobbit", "J.R.R. Tolkien", 1937, True), ("1984", "George Orwell", 1949, True), ("Dune", "Frank Herbert", 1965, False), ] # Insert multiple books at once cur.executemany( "INSERT INTO books (title, author, publication_year, in_stock) VALUES (%s, %s, %s, %s);", books_to_insert, ) print("Inserted 3 rows of data.") # The transaction is committed automatically when the 'with' block exits in psycopg (v3) except Exception as e: print("Connection failed.") print(e) ``` Tab: psycopg2 ```python import os import psycopg2 from dotenv import load_dotenv # Load environment variables from .env file load_dotenv() # Get the connection string from the environment variable conn_string = os.getenv("DATABASE_URL") conn = None try: with psycopg2.connect(conn_string) as conn: print("Connection established") # Open a cursor to perform database operations with conn.cursor() as cur: # Drop the table if it already exists cur.execute("DROP TABLE IF EXISTS books;") print("Finished dropping table (if it existed).") # Create a new table cur.execute(""" CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); """) print("Finished creating table.") # Insert a single book record cur.execute( "INSERT INTO books (title, author, publication_year, in_stock) VALUES (%s, %s, %s, %s);", ("The Catcher in the Rye", "J.D. Salinger", 1951, True), ) print("Inserted a single book.") # Data to be inserted books_to_insert = [ ("The Hobbit", "J.R.R. Tolkien", 1937, True), ("1984", "George Orwell", 1949, True), ("Dune", "Frank Herbert", 1965, False), ] # Insert multiple books at once cur.executemany( "INSERT INTO books (title, author, publication_year, in_stock) VALUES (%s, %s, %s, %s);", books_to_insert, ) print("Inserted 3 rows of data.") # Commit the changes to the database conn.commit() except Exception as e: print("Connection failed.") print(e) ``` Tab: asyncpg ```python import asyncio import os import asyncpg from dotenv import load_dotenv # Load environment variables from .env file load_dotenv() async def run(): # Get the connection string from the environment variable conn_string = os.getenv("DATABASE_URL") conn = None try: conn = await asyncpg.connect(conn_string) print("Connection established") # Drop the table if it already exists await conn.execute("DROP TABLE IF EXISTS books;") print("Finished dropping table (if it existed).") # Create a new table await conn.execute(""" CREATE TABLE books ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, author VARCHAR(255), publication_year INT, in_stock BOOLEAN DEFAULT TRUE ); """) print("Finished creating table.") # Insert a single book record (using $1, $2 for placeholders) await conn.execute( "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);", "The Catcher in the Rye", "J.D. Salinger", 1951, True, ) print("Inserted a single book.") # Data to be inserted books_to_insert = [ ("The Hobbit", "J.R.R. 
Tolkien", 1937, True), ("1984", "George Orwell", 1949, True), ("Dune", "Frank Herbert", 1965, False), ] # Insert multiple books at once await conn.executemany( "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4);", books_to_insert, ) print("Inserted 3 rows of data.") except Exception as e: print("Connection failed.") print(e) finally: if conn: await conn.close() # Run the asynchronous function asyncio.run(run()) ``` The above code does the following: - Load the connection string from the `.env` file. - Connect to the Neon database. - Drop the `books` table if it already exists to ensure a clean slate. - Create a table named `books` with columns for `id`, `title`, `author`, `publication_year`, and `in_stock`. - Insert a single book record. - Insert multiple book records. - Commit the changes to the database (in `psycopg`, this happens automatically on exiting the `with` block). Run the script using the following command: ```bash python create_table.py ``` When the code runs successfully, it produces the following output: ```text Connection established Finished dropping table (if it existed). Finished creating table. Inserted a single book. Inserted 3 rows of data. ``` ### Read data In your project directory, create a file named `read_data.py`. This script connects to your Neon database and retrieves all rows from the `books` table. Tab: psycopg (v3) ```python import os import psycopg from dotenv import load_dotenv load_dotenv() conn_string = os.getenv("DATABASE_URL") try: with psycopg.connect(conn_string) as conn: print("Connection established") with conn.cursor() as cur: # Fetch all rows from the books table cur.execute("SELECT * FROM books ORDER BY publication_year;") rows = cur.fetchall() print("\n--- Book Library ---") for row in rows: print( f"ID: {row[0]}, Title: {row[1]}, Author: {row[2]}, Year: {row[3]}, In Stock: {row[4]}" ) print("--------------------\n") except Exception as e: print("Connection failed.") print(e) ``` Tab: psycopg2 ```python import os import psycopg2 from dotenv import load_dotenv load_dotenv() conn_string = os.getenv("DATABASE_URL") conn = None try: with psycopg2.connect(conn_string) as conn: print("Connection established") with conn.cursor() as cur: # Fetch all rows from the books table cur.execute("SELECT * FROM books ORDER BY publication_year;") rows = cur.fetchall() print("\n--- Book Library ---") for row in rows: print( f"ID: {row[0]}, Title: {row[1]}, Author: {row[2]}, Year: {row[3]}, In Stock: {row[4]}" ) print("--------------------\n") except Exception as e: print("Connection failed.") print(e) ``` Tab: asyncpg ```python import asyncio import os import asyncpg from dotenv import load_dotenv load_dotenv() async def run(): conn_string = os.getenv("DATABASE_URL") conn = None try: conn = await asyncpg.connect(conn_string) print("Connection established") # Fetch all rows from the books table rows = await conn.fetch("SELECT * FROM books ORDER BY publication_year;") print("\n--- Book Library ---") for row in rows: # asyncpg rows can be accessed by index or column name print( f"ID: {row['id']}, Title: {row['title']}, Author: {row['author']}, Year: {row['publication_year']}, In Stock: {row['in_stock']}" ) print("--------------------\n") except Exception as e: print("Connection failed.") print(e) finally: if conn: await conn.close() asyncio.run(run()) ``` The above code does the following: - Load the connection string from the `.env` file. - Connect to the Neon database. 
- Use a SQL `SELECT` statement to fetch all rows from the `books` table, ordered by `publication_year`. - Print each book's details in a formatted output. Run the script using the following command: ```bash python read_data.py ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: True ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: True ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: True ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: False -------------------- ``` ### Update data In your project directory, create a file named `update_data.py`. This script connects to your Neon database and updates the stock status of the book 'Dune' to `True`. Tab: psycopg (v3) ```python import os import psycopg from dotenv import load_dotenv load_dotenv() conn_string = os.getenv("DATABASE_URL") try: with psycopg.connect(conn_string) as conn: print("Connection established") with conn.cursor() as cur: # Update a data row in the table cur.execute( "UPDATE books SET in_stock = %s WHERE title = %s;", (True, "Dune") ) print("Updated stock status for 'Dune'.") except Exception as e: print("Connection failed.") print(e) ``` Tab: psycopg2 ```python import os import psycopg2 from dotenv import load_dotenv load_dotenv() conn_string = os.getenv("DATABASE_URL") conn = None try: with psycopg2.connect(conn_string) as conn: print("Connection established") with conn.cursor() as cur: # Update a data row in the table cur.execute( "UPDATE books SET in_stock = %s WHERE title = %s;", (True, "Dune") ) print("Updated stock status for 'Dune'.") # Commit the changes conn.commit() except Exception as e: print("Connection failed.") print(e) ``` Tab: asyncpg ```python import asyncio import os import asyncpg from dotenv import load_dotenv load_dotenv() async def run(): conn_string = os.getenv("DATABASE_URL") conn = None try: conn = await asyncpg.connect(conn_string) print("Connection established") # Update a data row in the table await conn.execute( "UPDATE books SET in_stock = $1 WHERE title = $2;", True, "Dune" ) print("Updated stock status for 'Dune'.") except Exception as e: print("Connection failed.") print(e) finally: if conn: await conn.close() asyncio.run(run()) ``` The above code does the following: - Load the connection string from the `.env` file. - Connect to the Neon database. - Use a SQL `UPDATE` statement to change the `in_stock` status of the book 'Dune' to `True`. - Commit the changes to the database. Run the script using the following command: ```bash python update_data.py ``` After running this script, you can run `read_data.py` again to verify that the row was updated. ```bash python read_data.py ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: True ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: True ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: True ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: True -------------------- ``` > You can see that the stock status for 'Dune' has been updated to `True`. ### Delete data In your project directory, create a file named `delete_data.py`. This script connects to your Neon database and deletes the book '1984' from the `books` table. 
Tab: psycopg (v3) ```python import os import psycopg from dotenv import load_dotenv load_dotenv() conn_string = os.getenv("DATABASE_URL") try: with psycopg.connect(conn_string) as conn: print("Connection established") with conn.cursor() as cur: # Delete a data row from the table cur.execute("DELETE FROM books WHERE title = %s;", ("1984",)) print("Deleted the book '1984' from the table.") except Exception as e: print("Connection failed.") print(e) ``` Tab: psycopg2 ```python import os import psycopg2 from dotenv import load_dotenv load_dotenv() conn_string = os.getenv("DATABASE_URL") conn = None try: with psycopg2.connect(conn_string) as conn: print("Connection established") with conn.cursor() as cur: # Delete a data row from the table cur.execute("DELETE FROM books WHERE title = %s;", ("1984",)) print("Deleted the book '1984' from the table.") # Commit the changes conn.commit() except Exception as e: print("Connection failed.") print(e) ``` Tab: asyncpg ```python import asyncio import os import asyncpg from dotenv import load_dotenv load_dotenv() async def run(): conn_string = os.getenv("DATABASE_URL") conn = None try: conn = await asyncpg.connect(conn_string) print("Connection established") # Delete a data row from the table await conn.execute("DELETE FROM books WHERE title = $1;", "1984") print("Deleted the book '1984' from the table.") except Exception as e: print("Connection failed.") print(e) finally: if conn: await conn.close() asyncio.run(run()) ``` The above code does the following: - Load the connection string from the `.env` file. - Connect to the Neon database. - Use a SQL `DELETE` statement to remove the book '1984' from the `books` table. - Commit the changes to the database. Run the script using the following command: ```bash python delete_data.py ``` After running this script, you can run `read_data.py` again to verify that the row was deleted. ```bash python read_data.py ``` When the code runs successfully, it produces the following output: ```text Connection established --- Book Library --- ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: True ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: True ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: True -------------------- ``` > You can see that the book '1984' has been successfully deleted from the `books` table. ## Next steps: Using an ORM or framework While this guide demonstrates how to connect to Neon using raw SQL queries, for more advanced and maintainable data interactions in your Python applications, consider using an Object-Relational Mapping (ORM) framework. ORMs not only let you work with data as objects but also help manage schema changes through automated migrations keeping your database structure in sync with your application models. Explore the following resources to learn how to integrate ORMs with Neon: - [Connect an SQLAlchemy application to Neon](https://neon.com/docs/guides/sqlalchemy) - [Connect a Django application to Neon](https://neon.com/docs/guides/django) ## Source code You can find the source code for the applications described in this guide on GitHub. 
- [Get started with Python and Neon using psycopg (v3)](https://github.com/neondatabase/examples/tree/main/with-python-psycopg) - [Get started with Python and Neon using psycopg2](https://github.com/neondatabase/examples/tree/main/with-python-psycopg2) - [Get started with Python and Neon using asyncpg](https://github.com/neondatabase/examples/tree/main/with-python-asyncpg) ## Resources - [Psycopg 3 documentation](https://www.psycopg.org/psycopg3/docs/) - [Psycopg 2 documentation](https://www.psycopg.org/docs/) - [Asyncpg documentation](https://magicstack.github.io/asyncpg/current/) - [Building an API with Django, Django REST Framework, and Neon Postgres](https://neon.com/guides/django-rest-api) --- # Source: https://neon.com/llms/guides-quarkus-jdbc.txt # Connect Quarkus (JDBC) to Neon > This document guides users on configuring and establishing a connection between Quarkus applications and Neon databases using JDBC, detailing the necessary steps and configurations specific to Neon's environment. ## Source - [Connect Quarkus (JDBC) to Neon HTML](https://neon.com/docs/guides/quarkus-jdbc): The original HTML version of this documentation [Quarkus](https://quarkus.io/) is a Java framework optimized for cloud environments. This guide shows how to connect to Neon from a Quarkus project using the PostgreSQL JDBC driver. To connect to Neon from a Quarkus application using the Postgres JDBC Driver: ## Create a Neon project If you do not have one already, create a Neon project. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Create a Quarkus project Create a Quarkus project using the [Quarkus CLI](https://quarkus.io/guides/cli-tooling): ```shell quarkus create app neon-with-quarkus-jdbc \ --name neon-with-quarkus-jdbc \ --package-name com.neon.tech \ --extensions jdbc-postgresql,quarkus-agroal,resteasy-reactive ``` You now have a Quarkus project in a folder named `neon-with-quarkus-jdbc` with the PostgreSQL JDBC driver, Agroal datasource implementation, and RESTEasy Reactive extensions installed. ## Configure a PostgreSQL data source Create a `.env` file in the root of your Quarkus project directory. Configure a JDBC data source using the components of your Neon database connection string and specifying the database kind as shown: ```shell QUARKUS_DATASOURCE_DB_KIND=postgresql QUARKUS_DATASOURCE_USERNAME=[user] QUARKUS_DATASOURCE_PASSWORD=[password] QUARKUS_DATASOURCE_JDBC_URL=jdbc:postgresql://[neon_hostname]/[dbname]?sslmode=require&channelBinding=require ``` **Note**: You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ## Use the PostgreSQL JDBC Driver Create a `PostgresResource.java` file in the same directory as the `GreetingResource.java` that was generated by Quarkus during project creation. 
Paste the following content into the `PostgresResource.java` file:

```java
package com.neon.tech;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/postgres")
public class PostgresResource {
    @Inject
    DataSource dataSource;

    @GET
    @Path("/version")
    @Produces(MediaType.TEXT_PLAIN)
    public String getVersion() {
        try (Connection connection = dataSource.getConnection();
                Statement statement = connection.createStatement()) {
            ResultSet resultSet = statement.executeQuery("SELECT version()");
            if (resultSet.next()) {
                return resultSet.getString(1);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return null;
    }
}
```

This code defines an HTTP endpoint that queries the database version and returns it as a response to incoming requests.

## Run the application

Start the application in development mode using the Quarkus CLI from the root of the project directory:

```shell
quarkus dev
```

Visit [localhost:8080/postgres/version](http://localhost:8080/postgres/version) in your web browser. Your Neon database's Postgres version will be returned. For example:

```
PostgreSQL 17.5 (6bc9ef8) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14+deb12u1) 12.2.0, 64-bit
```

---

# Source: https://neon.com/llms/guides-quarkus-reactive.txt

# Connect Quarkus (Reactive) to Neon

> This document guides users on connecting Quarkus (Reactive) applications to Neon, detailing configuration steps and necessary dependencies for seamless integration.

## Source

- [Connect Quarkus (Reactive) to Neon HTML](https://neon.com/docs/guides/quarkus-reactive): The original HTML version of this documentation

[Quarkus](https://quarkus.io/) is a Java framework optimized for cloud environments. This guide shows how to connect to Neon from a Quarkus project using a Reactive SQL Client.

To connect to Neon from a Quarkus application:

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a Quarkus project

Create a Quarkus project using the [Quarkus CLI](https://quarkus.io/guides/cli-tooling):

```shell
quarkus create app neon-with-quarkus \
  --name neon-with-quarkus \
  --package-name com.neon.tech \
  --extensions reactive-pg-client,resteasy-reactive
```

You now have a Quarkus project in a folder named `neon-with-quarkus` with the Reactive Postgres client and RESTEasy Reactive extensions installed.

## Configure a PostgreSQL data source

Create a `.env` file in the root of your Quarkus project directory. Configure a reactive data source using your Neon database connection string as shown:

```shell
QUARKUS_DATASOURCE_REACTIVE_URL=postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require
```

**Note**: You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Use the Reactive PostgreSQL client

Create a `PostgresResource.java` file in the same directory as the `GreetingResource.java` that was generated by Quarkus during project creation.
Paste the following content into the `PostgresResource.java` file:

```java
package com.neon.tech;

import jakarta.inject.Inject;
import io.smallrye.mutiny.Multi;
import io.vertx.mutiny.sqlclient.Row;
import io.vertx.mutiny.sqlclient.RowSet;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/postgres")
public class PostgresResource {
    @Inject
    io.vertx.mutiny.pgclient.PgPool client;

    @GET
    @Path("/version")
    @Produces(MediaType.TEXT_PLAIN)
    public Multi<String> getVersion() {
        return client.query("SELECT version()")
                .execute()
                .onItem().transformToMulti(this::extractVersion);
    }

    private Multi<String> extractVersion(RowSet<Row> rowSet) {
        return Multi.createFrom().iterable(rowSet)
                .map(r -> r.getValue(0).toString());
    }
}
```

This code defines an HTTP endpoint that queries the database version and returns it as a response to incoming requests.

## Run the application

Start the application in development mode using the Quarkus CLI from the root of the project directory:

```shell
quarkus dev
```

Visit [localhost:8080/postgres/version](http://localhost:8080/postgres/version) in your web browser. Your Neon database's Postgres version will be returned. For example:

```
PostgreSQL 17.5 (6bc9ef8) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14+deb12u1) 12.2.0, 64-bit
```

---

# Source: https://neon.com/llms/guides-rails-migrations.txt

# Schema migration with Neon Postgres and Ruby on Rails

> This document guides users through performing schema migrations in a Neon Postgres database using Ruby on Rails, detailing the necessary steps and configurations for seamless integration and execution.

## Source

- [Schema migration with Neon Postgres and Ruby on Rails HTML](https://neon.com/docs/guides/rails-migrations): The original HTML version of this documentation

[Ruby on Rails](https://rubyonrails.org/) is a popular web application framework for Ruby developers. It provides an ORM (Object-Relational Mapping) layer called Active Record that simplifies database interactions and schema management. Rails also includes a powerful migration system that allows you to define and manage database schema changes over time.

This guide demonstrates how to run schema migrations in your Ruby on Rails project backed by a Neon Postgres database. We'll create a simple Rails application and walk through the process of setting up the database, defining models, and generating and running migrations to manage schema changes.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- [Ruby](https://www.ruby-lang.org/) installed on your local machine. You can install Ruby using the instructions provided on the [official Ruby website](https://www.ruby-lang.org/en/documentation/installation/). We recommend using a newer version of Ruby, 3.0 or higher.
- [Rails](https://rubyonrails.org/) installed on your local machine. You can install Rails by running `gem install rails`. We recommend using Rails 6 or higher. This project uses `Rails 7.1.3.2`.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Select a project or click the **New Project** button to create a new one.
### Retrieve your Neon database connection string

You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

**Note**: Neon supports both direct and pooled database connection strings. You can find a connection string for your database by clicking the **Connect** button on your **Project Dashboard**. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can be prone to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations. For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

## Setting up the Rails project

### Create a new Rails project

Open your terminal and run the following command to create a new Rails project:

```bash
rails new guide-neon-rails --database=postgresql
```

This command creates a new Rails project named `guide-neon-rails` with Postgres as the default database. It also generates the necessary project files and directories and installs the required dependencies.

### Set up the database configuration

Create a `.env` file in the project root directory and add the `DATABASE_URL` environment variable to it. Use the connection string that you obtained from the Neon Console earlier:

```bash
# .env
DATABASE_URL=NEON_POSTGRES_CONNECTION_STRING
```

For Rails to load the environment variables automatically from the `.env` file, add the `dotenv-rails` gem to the `Gemfile` at the root of your project:

```ruby
# Gemfile
gem 'dotenv-rails', groups: [:development, :test]
```

Then, run `bundle install` to install the gem.

Finally, open the `config/database.yml` file in your project directory and update the `default` section so that Rails uses the `DATABASE_URL` environment variable to connect to the Neon database:

```yaml
# database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  url: <%= ENV['DATABASE_URL'] %>

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default
```

## Defining data models and running migrations

### Generate models and migrations

Next, we will create the data models for our application. Run the following commands to generate the `Author` and `Book` models:

```bash
rails generate model Author name:string bio:text
rails generate model Book title:string author:references
```

These commands generate model files and the corresponding migration files in the `app/models` and `db/migrate` directories, respectively.

### Run the migrations

To run the migrations and create the corresponding tables in the Neon Postgres database, run the following command:

```bash
rails db:migrate
```

This command executes the migration files and creates the `authors` and `books` tables in the database. It also creates some tables for Rails' internal bookkeeping.

### Seed the database

To populate the database with some initial data, open the `db/seeds.rb` file and add the following code:

```ruby
# db/seeds.rb
# Find or create authors
authors_data = [
  { name: "J.R.R. Tolkien", bio: "The creator of Middle-earth and author of The Lord of the Rings."
}, { name: "George R.R. Martin", bio: "The author of the epic fantasy series A Song of Ice and Fire." }, { name: "J.K. Rowling", bio: "The creator of the Harry Potter series." } ] authors_data.each do |author_attrs| Author.find_or_create_by(name: author_attrs[:name]) do |author| author.bio = author_attrs[:bio] end end # Find or create books books_data = [ { title: "The Fellowship of the Ring", author_name: "J.R.R. Tolkien" }, { title: "The Two Towers", author_name: "J.R.R. Tolkien" }, { title: "The Return of the King", author_name: "J.R.R. Tolkien" }, { title: "A Game of Thrones", author_name: "George R.R. Martin" }, { title: "A Clash of Kings", author_name: "George R.R. Martin" }, { title: "Harry Potter and the Philosopher's Stone", author_name: "J.K. Rowling" }, { title: "Harry Potter and the Chamber of Secrets", author_name: "J.K. Rowling" } ] books_data.each do |book_attrs| author = Author.find_by(name: book_attrs[:author_name]) Book.find_or_create_by(title: book_attrs[:title], author: author) end ``` To run the seed file and populate the database with the initial data, run the following command: ```bash rails db:seed ``` This command inserts the sample authors and books data into the database. Note that the script looks for existing records before creating new ones, so you can run it multiple times without duplicating the data. ## Implement the application ### Create controllers and views Next, we will create controllers and views to display the authors and books in our application. Run the following commands to generate the controllers: ```bash rails generate controller Authors index rails generate controller Books index ``` These commands generate controller files and corresponding view files in the `app/controllers` and `app/views` directories. Open the `app/controllers/authors_controller.rb` file and update the `index` action: ```ruby # app/controllers/authors_controller.rb class AuthorsController < ApplicationController def index @authors = Author.all end end ``` Similarly, open the `app/controllers/books_controller.rb` file and update the `index` action: ```ruby # app/controllers/books_controller.rb class BooksController < ApplicationController def index @author = Author.find(params[:author_id]) @books = @author.books end end ``` Now, we update the corresponding views to display the data. Open the `app/views/authors/index.html.erb` file and add the following code: ```erb

<h1>Authors</h1>
<ul>
  <% @authors.each do |author| %>
    <li><%= author.name %> - <%= link_to 'Books', author_books_path(author_id: author.id) %></li>
  <% end %>
</ul>
    ``` Open the `app/views/books/index.html.erb` file and add the following code: ```erb

<h1>Books by <%= @author.name %></h1>
<ul>
  <% @books.each do |book| %>
    <li><%= book.title %></li>
  <% end %>
</ul>
```

### Define routes

Open the `config/routes.rb` file and define the routes for the authors and books:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  resources :authors, only: [:index]
  get '/books/:author_id', to: 'books#index', as: 'author_books'
end
```

### Run the Rails server

To start the Rails server and test the application, run the following command:

```bash
rails server
```

Navigate to `http://localhost:3000/authors` in your browser to view the list of authors. You can also view the books by a specific author by clicking on the "Books" link next to each author, which takes you to the `http://localhost:3000/books/:author_id` route.

## Applying schema changes

We will demonstrate how to handle schema changes by adding a new field, `country`, to the `Author` model to store the author's country of origin.

### Generate a migration

To generate a migration file for adding the `country` field to the `authors` table, run the following command:

```bash
rails generate migration AddCountryToAuthors country:string
```

This command generates a new migration file in the `db/migrate` directory.

### Run the migration

To run the migration and apply the schema change, run the following command:

```bash
rails db:migrate
```

This command executes the migration file and adds the `country` column to the `authors` table in the database.

### Update the existing records

To update the existing records with the author's country, open the `db/seeds.rb` file and update the authors data with the country information:

```ruby
authors_data = [
  {
    name: "J.R.R. Tolkien",
    bio: "The creator of Middle-earth and author of The Lord of the Rings.",
    country: "United Kingdom"
  },
  {
    name: "George R.R. Martin",
    bio: "The author of the epic fantasy series A Song of Ice and Fire.",
    country: "United States"
  },
  {
    name: "J.K. Rowling",
    bio: "The creator of the Harry Potter series.",
    country: "United Kingdom"
  }
]

authors_data.each do |author_attrs|
  author = Author.find_or_initialize_by(name: author_attrs[:name])
  author.assign_attributes(author_attrs)
  author.save if author.changed?
end
```

Run the seed file again to update the existing records in the database:

```bash
rails db:seed
```

### Test the schema change

Update the `app/views/authors/index.html.erb` file to display the country alongside each author:

```erb

<h1>Authors</h1>
<ul>
  <% @authors.each do |author| %>
    <li><%= author.name %> - <%= author.country %> - <%= link_to 'Books', author_books_path(author_id: author.id) %></li>
  <% end %>
</ul>
```

Now, restart the Rails server:

```bash
rails server
```

Navigate to `http://localhost:3000/authors` to view the list of authors. The `country` field is now available for each author, reflecting the schema change.

## Conclusion

In this guide, we demonstrated how to set up a Ruby on Rails project with Neon Postgres, define database models, generate migrations, and run them. Rails' Active Record ORM and migration system make it easy to interact with the database and manage schema evolution over time.

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Migrations with Neon and Rails](https://github.com/neondatabase/guide-neon-rails): Run migrations in a Neon-Rails project

## Resources

For more information on the tools and concepts used in this guide, refer to the following resources:

- [Ruby on Rails Guides](https://guides.rubyonrails.org/)
- [Active Record Migrations](https://guides.rubyonrails.org/active_record_migrations.html)
- [Neon Postgres](https://neon.com/docs/introduction)

---

# Source: https://neon.com/llms/guides-railway.txt

# Use Neon Postgres with Railway

> The document outlines the process for integrating Neon Postgres with Railway, detailing steps for setting up a Neon database and configuring it within the Railway platform for seamless deployment and management.

## Source

- [Use Neon Postgres with Railway HTML](https://neon.com/docs/guides/railway): The original HTML version of this documentation

[Railway](https://railway.app) is an application deployment platform that allows users to develop web applications locally, provision infrastructure, and then deploy to the cloud. Railway integrates with GitHub for continuous deployment and supports a variety of programming languages and frameworks. This guide shows how to deploy a simple Node.js application connected to a Neon Postgres database on Railway.

## Prerequisites

To follow along with this guide, you will need:

- A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples.
- A Railway account. If you do not have one, sign up at [Railway](https://railway.app) to get started.
- A GitHub account. Railway integrates with GitHub for continuous deployment, so you'll need a GitHub account to host your application code.
- [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally.

## Setting up your Neon database

### Initialize a new project

1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section.
2. Click the `New Project` button to create a new project.
3. From your project dashboard, navigate to the `SQL Editor` from the sidebar, and run the following SQL command to create a new table in your database:

```sql
CREATE TABLE plant_care_log (
    id SERIAL PRIMARY KEY,
    plant_name VARCHAR(255) NOT NULL,
    care_date DATE NOT NULL
);
```

Next, we insert some sample data into the `plant_care_log` table, so we can query it later:

```sql
INSERT INTO plant_care_log (plant_name, care_date)
VALUES
    ('Monstera', '2024-01-10'),
    ('Fiddle Leaf Fig', '2024-01-15'),
    ('Snake Plant', '2024-01-20'),
    ('Spider Plant', '2024-01-25'),
    ('Pothos', '2024-01-30');
```

### Retrieve your Neon database connection string

You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Keep your connection string handy for later use.

## Implementing the Node.js application

We'll create a simple Express application that connects to our Neon database and retrieves the list of plants tended to within the last month. Run the following commands in a terminal to set it up.

```bash
mkdir neon-railway-example && cd neon-railway-example
npm init -y && npm pkg set type="module"
npm install express pg
touch .env
```

We use the `npm pkg set type="module"` command to enable ES6 module support in our project. We also create a new `.env` file to store the `DATABASE_URL` environment variable, which we'll use to connect to our Neon database. Lastly, we install the `pg` library, which is the Postgres driver we use to connect to our database.

```bash
# .env
DATABASE_URL=NEON_DATABASE_CONNECTION_STRING
```

Now, create a new file named `index.js` and add the following code:

```javascript
import express from 'express';
import pkg from 'pg';

const app = express();
const port = process.env.PORT || 3000;

// Parse JSON bodies for this app
app.use(express.json());

// Create a new pool using your Neon database connection string
const { Pool } = pkg;
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.get('/', async (req, res) => {
  try {
    // Fetch the list of plants from your database using the postgres connection
    const { rows } = await pool.query('SELECT * FROM plant_care_log;');
    res.json(rows);
  } catch (error) {
    console.error('Failed to fetch plants', error);
    res.status(500).json({ error: 'Internal Server Error' });
  }
});

// Start the server
app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
```

This code sets up an Express server that listens for requests on port 3000. When a request is made to the root path (`/`), the server queries the `plant_care_log` table in your Neon database and returns the results as JSON. We can test this application locally by running:

```bash
node --env-file=.env index.js
```

Now, navigate to `http://localhost:3000/` in your browser to check that it returns the sample data from the `plant_care_log` table.

## Push your application to GitHub

To deploy your application to Railway, you need to push your code to a GitHub repository. Create a new repository on GitHub by navigating to [GitHub - New Repo](https://github.com/new). You can then push your code to the new repository using the following commands:

```bash
echo "node_modules/" > .gitignore && echo ".env" >> .gitignore
echo "# neon-railway-example" >> README.md
git init && git add . && git commit -m "Initial commit"
git branch -M main
git remote add origin YOUR_GITHUB_REPO_URL
git push -u origin main
```

You can visit the GitHub repository to verify that your code has been pushed successfully.

## Deploying to Railway

### Creating a new Railway project

Log in to your Railway account and navigate to the dashboard. Click on the `New Project` button and select the `Deploy from GitHub repo` option. Pick the repository you created above, which sets off a Railway deployment.

Railway automatically detects the type of application you're deploying and sets up the necessary build and start commands. However, we still need to add the `DATABASE_URL` environment variable to connect to our Neon database. Select the project and navigate to the `Variables` tab. Add a new variable named `DATABASE_URL` and set its value to your Neon database connection string. You can redeploy the project by clicking on `Redeploy` from the context menu of the latest deployment.

### Verify deployment

Once the deployment completes and is marked as `ACTIVE`, Railway provides a public URL for accessing the web service. Visit the provided URL to verify that your application is running and can connect to your Neon database.

Whenever you update your code and push it to your GitHub repository, Railway will automatically build and deploy the changes to your web service.

## Removing your application and Neon project

To remove your application from Railway, select the project and navigate to the `Settings` tab. Scroll down to the end to find the "Delete Service" option.

To delete your Neon project, follow the steps outlined in the Neon documentation under [Delete a project](https://neon.com/docs/manage/projects#delete-a-project).

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Use Neon Postgres with Railway](https://github.com/neondatabase/examples/tree/main/deploy-with-railway): Connect a Neon Postgres database to your Node application deployed with Railway

## Resources

- [Railway platform](https://railway.app/)
- [Neon](https://neon.tech)

---

# Source: https://neon.com/llms/guides-react-router.txt

# Connect a React Router application to Neon

> This document guides users on integrating a React Router application with Neon by detailing the necessary steps and configurations for establishing a connection.

## Source

- [Connect a React Router application to Neon HTML](https://neon.com/docs/guides/react-router): The original HTML version of this documentation

[React Router](https://reactrouter.com/home) is a powerful routing library for React that also includes modern, full-stack framework features. This guide explains how to connect a React Router application to Neon using a server-side `loader` function.

To create a Neon project and access it from a React Router application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a React Router project and add dependencies

1. Create a React Router project using the following command:

```shell
npx create-react-router@latest with-react-router --yes
cd with-react-router
```

2. Add project dependencies using one of the following commands.
Tab: node-postgres

```shell
npm install pg
```

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials

Add a `.env` file to your project's root directory and add your Neon connection string to it. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://:@.neon.tech:/?sslmode=require&channel_binding=require"
```

## Configure the Postgres client

With React Router, data fetching is handled in "Route Modules". We will create a new route that connects to Neon in its `loader` function, which runs on the server.

### 1. Define the route

First, define a new route in `app/routes.ts`. This tells React Router to render our new component when a user visits the `/version` path.

```typescript {5} filename=app/routes.ts
import { type RouteConfig, route, index } from '@react-router/dev/routes';

export default [
  index('./home.tsx'),
  route('version', './routes/version.tsx'),
] satisfies RouteConfig;
```

### 2. Create the route module

Create a new file at `app/routes/version.tsx`. This file will contain both the server-side data loader and the client-side React component. The `loader` function will connect to Neon, query the database version, and pass the result to the `Component` via the `loaderData` prop.

Tab: postgres.js

```tsx filename=app/routes/version.tsx
import postgres from 'postgres';
import type { Route } from './+types/version';

// The loader function runs on the server
export async function loader() {
  const sql = postgres(process.env.DATABASE_URL as string);
  const response = await sql`SELECT version()`;
  return { version: response[0].version };
}

// The component runs in the browser
export default function Version({ loaderData }: Route.ComponentProps) {
  return (

    <div>
      <h1>Database Version</h1>
      <p>{loaderData.version}</p>
    </div>

    ); } ``` ## Run the app ### Generate types Run the following command to generate types for your routes: ```shell npm run typecheck ``` ### Start the development server With the types generated, start the development server: ```shell npm run dev ``` Now, navigate to [http://localhost:5173/version](http://localhost:5173/version) in your browser. You should see a page displaying the version of your Neon Postgres database. ```text Database Version PostgreSQL 17.5 (6bc9ef8) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14+deb12u1) 12.2.0, 64-bit ``` ## Source code You can find the source code for the application described in this guide on GitHub. - [Get started with React Router and Neon](https://github.com/neondatabase/examples/tree/main/with-react-router) --- # Source: https://neon.com/llms/guides-react.txt # Connect a React application to Neon > The document outlines the steps for connecting a React application to a Neon database, detailing the configuration of environment variables and the use of specific libraries to establish a secure database connection. ## Source - [Connect a React application to Neon HTML](https://neon.com/docs/guides/react): The original HTML version of this documentation React by Facebook is an open-source front-end JavaScript library for building user interfaces based on components. Neon Postgres should be accessed from the server side in React applications. Using the following React meta-frameworks, you can easily configure a server-side connection to a Neon Postgres database. ## React Meta-Frameworks Find detailed instructions for connecting to Neon from various React meta-frameworks. - [Next.js](https://neon.com/docs/guides/nextjs): Connect a Next.js application to Neon - [Remix](https://neon.com/docs/guides/remix): Connect a Remix application to Neon - [Sveltekit](https://neon.com/docs/guides/sveltekit): Connect a Sveltekit application to Neon --- # Source: https://neon.com/llms/guides-read-only-access-read-replicas.txt # Provide read-only access with Read Replicas > The document explains how to configure read replicas in Neon to enable read-only access, detailing the steps for setting up and managing these replicas within the Neon database environment. ## Source - [Provide read-only access with Read Replicas HTML](https://neon.com/docs/guides/read-only-access-read-replicas): The original HTML version of this documentation When you create a read replica in Neon, you gain the ability to provide read-only access to your data. This is particularly useful when you want to grant access to users, partners, or third-party applications that only need to run queries to analyze data, generate reports, or audit your database. Since no write operations are permitted on read replicas, it ensures the integrity of your data while allowing others to work with up-to-date information. Suppose you need to give a partner read-only access to your sales data so they can generate custom reports for your business. Here's how you would go about doing that: 1. **Create a read replica** **Note**: The Free plan is limited to a maximum of 3 read replica computes per project. Follow these steps to create a read replica for your database branch: - In the Neon Console, go to **Branches**. - Select the branch that contains your data. - Click **Add Read Replica** to create a dedicated compute instance for read operations. 2. 
**Provide the connection string**

   Once the read replica is created, obtain the connection string from the Neon Console:

   - You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**. Select the branch, the database, and the role.
   - Choose the **Replica** compute under the compute settings.
   - Copy the connection string and provide it to your partner. The connection string might look something like this:

   ```bash
   postgresql://partner:partner_password@ep-read-replica-12345.us-east-2.aws.neon.tech/sales_db?sslmode=require&channel_binding=require
   ```

3. **Read-only access for the partner**

   The partner can now use this connection string to connect to the read replica and run any `SELECT` queries they need for reporting purposes, such as:

   ```sql
   SELECT product_id, SUM(sale_amount) as total_sales
   FROM sales
   WHERE sale_date >= (CURRENT_DATE - INTERVAL '1 year')
   GROUP BY product_id;
   ```

   This query will run on the read replica without impacting the performance of your production database, since read replicas run on an isolated read-only compute.

4. **Write operations are not permitted**

   Since the connection is to a read replica, the partner will not be able to run any write operations. If they attempt to run a `DELETE`, `INSERT`, or `UPDATE` query, they will receive an error message like this:

   ```bash
   ERROR: cannot execute INSERT in a read-only transaction (SQLSTATE 25006)
   ```
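If the partner's tooling should handle this case gracefully, the SQLSTATE can be checked in application code. A minimal sketch using the `pg` driver, run as an ES module; the `REPLICA_DATABASE_URL` environment variable is an assumption, and the `sales` table is the one from this example:

```javascript
import pg from 'pg';

// Pool pointed at the read replica connection string (assumed env var)
const pool = new pg.Pool({ connectionString: process.env.REPLICA_DATABASE_URL });

try {
  await pool.query(
    'INSERT INTO sales (product_id, sale_amount, sale_date) VALUES ($1, $2, CURRENT_DATE);',
    [1, 9.99]
  );
} catch (err) {
  // node-postgres exposes the SQLSTATE as err.code; 25006 = read_only_sql_transaction
  if (err.code === '25006') {
    console.error('Write rejected: this connection targets a read-only replica.');
  } else {
    throw err;
  }
} finally {
  await pool.end();
}
```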
---

# Source: https://neon.com/llms/guides-read-replica-adhoc-queries.txt

# Run ad-hoc queries with Read Replicas

> The document explains how to execute ad-hoc queries using read replicas in Neon, detailing the steps to configure and utilize read replicas for query distribution and load balancing.

## Source

- [Run ad-hoc queries with Read Replicas HTML](https://neon.com/docs/guides/read-replica-adhoc-queries): The original HTML version of this documentation

In many situations, you may need to run quick, one-time queries to retrieve specific data or test an idea. These are known as **ad-hoc queries**. Ad-hoc queries are particularly useful for tasks like analytics, troubleshooting, or exploring your data without setting up complex reports. However, running resource-intensive queries on your production database can degrade performance, especially if they target heavily used tables.

This is where **Neon Read Replicas** come in handy. With read replicas, you can quickly create a replica that runs on dedicated read-only compute, allowing you to run ad-hoc queries without impacting your primary database's performance. Once you're done, the read replica can automatically scale to zero, or you can delete it.

The key advantages of using Neon Read Replicas for ad-hoc queries include the following:

- You can add a fully functional read replica in seconds.
- There's no additional storage cost or data replication, as the replica uses the same storage as your primary compute.
- The read replica compute automatically scales to zero based on your [scale to zero](https://neon.com/docs/introduction/scale-to-zero) settings. A compute suspends after 5 minutes of inactivity.
- You can remove a read replica as quickly as you created it or just leave it for next time. The compute will remain suspended until you run your next query.

## What is an ad-hoc query?

An ad-hoc query is an impromptu query used to retrieve specific data from your database. These queries are not part of routine reporting or pre-written scripts; they are created on the fly to answer immediate questions or perform temporary analysis. For example, if you want to quickly calculate the total sales for a product over the last month, you might write an SQL query like this:

```sql
SELECT product_id, SUM(sale_amount)
FROM sales
WHERE sale_date >= (CURRENT_DATE - INTERVAL '1 month')
GROUP BY product_id;
```

## Why run ad-hoc queries on a read replica?

Running ad-hoc queries on a read replica can help you:

- **Avoid performance issues**: Heavy ad-hoc queries, such as large aggregations or joins, can slow down your production database. A read replica offloads that work.
- **Isolate query load**: Since ad-hoc queries may be exploratory and involve significant data scanning, running them on a replica prevents unplanned queries from affecting your production traffic.
- **Ensure data consistency**: With Neon, read replicas access the same data as your primary compute, ensuring your ad-hoc queries reflect up-to-date information.

## Setting up a read replica for ad-hoc queries

**Note**: The Free plan is limited to a maximum of 3 read replica computes per project.

You can add a read replica compute to any branch in your Neon project by following these steps:

1. In the Neon Console, select **Branches**.
2. Select the branch where your database resides.
3. Click **Add Read Replica**.
4. On the **Add new compute** dialog, select **Read replica** as the **Compute type**.
5. Specify the **Compute size settings**. You can configure a fixed-size compute with a specific amount of vCPU and RAM (the default) or enable autoscaling by configuring a minimum and maximum compute size using the slider. On paid plans, you can adjust the **Scale to zero time** setting, which controls how long a compute can be inactive before it suspends. The default is 5 minutes.

   **Note**: The compute size configuration determines the processing power of your database.

6. When you finish making your selections, click **Create**.

Your read replica is provisioned and appears on the **Computes** tab of the **Branches** page. The following section describes how to connect to your read replica. Alternatively, you can create read replicas using the [Neon CLI](https://neon.com/docs/reference/cli-branches#create) or [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint).

Tab: CLI

```bash
neon branches add-compute mybranch --type read_only
```

Tab: API

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/late-bar-27572981/endpoints \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "endpoint": {
    "type": "read_only",
    "branch_id": "br-young-fire-15282225"
  }
}
' | jq
```

### Connect to the read replica

1. Once the read replica is created, go to your **Project Dashboard**.
2. Under **Connection Details**, select the replica compute.
3. Copy the connection string and use it to connect to the replica, either via `psql` or your application. Your connection string will look something like this:

```bash
postgresql://user:password@ep-read-replica-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

### Running ad-hoc queries

Once connected to the read replica, you can run your ad-hoc queries without worrying about impacting your production database.
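If you'd rather run ad-hoc work from a script than from `psql`, the same connection string works with any Postgres driver. A minimal sketch using `pg`, run as an ES module; the `REPLICA_DATABASE_URL` environment variable is an assumption, and `sales` is the table used throughout this guide:

```javascript
import pg from 'pg';

// A separate pool pointed at the replica keeps ad-hoc work off the primary
const replica = new pg.Pool({ connectionString: process.env.REPLICA_DATABASE_URL });

const { rows } = await replica.query('SELECT count(*) AS total_rows FROM sales;');
console.log(rows[0]);

await replica.end();
```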
For example, let's say you need to run a quick analysis to get sales data for specific products over the past year:

```sql
SELECT product_id, SUM(sale_amount) AS total_sales
FROM sales
WHERE sale_date >= (CURRENT_DATE - INTERVAL '1 year')
GROUP BY product_id;
```

This query will execute on the read replica, leaving your primary database free to handle regular traffic and operations.

## Ad-hoc query scenarios

Here are a few common scenarios where ad-hoc queries on a read replica can be useful:

- **Sales analysis**: Calculate total sales for a product or category without affecting your production system.
- **Data exploration**: Explore data patterns, such as checking anomalies or trends in your dataset.
- **Custom reporting**: Generate one-time reports for business meetings or audits without waiting for a prebuilt report.
- **Checking queries for write attempts**: Since read replicas are designed for read-only operations, any unintended write actions will result in an error. For example, if someone tries to insert data into the `sales` table on the read replica, they will get an error message like this:

```bash
ERROR: cannot execute INSERT in a read-only transaction (SQLSTATE 25006)
```

This ensures that the replica is used solely for reading data, preserving the integrity of your production system.

---

# Source: https://neon.com/llms/guides-read-replica-data-analysis.txt

# Run analytics queries with Read Replicas

> The document explains how to utilize Neon's read replicas to execute analytics queries, enabling users to offload query workloads from primary databases to improve performance and scalability.

## Source

- [Run analytics queries with Read Replicas HTML](https://neon.com/docs/guides/read-replica-data-analysis): The original HTML version of this documentation

With Neon's read replica feature, you can instantly create a dedicated read replica for running data-intensive analytics or reporting queries. This allows you to avoid disruption or performance degradation on your production database. A read replica reads data from the same source as your primary read-write compute. There's no data replication, so creating a read replica is a near-instant process. For more information about Neon's read replica architecture, see [Read replicas](https://neon.com/docs/introduction/read-replicas).

## Scenario

Suppose you have a `sales` table in your production database. The table and data might look something like this:

```sql
CREATE TABLE sales (
    id SERIAL PRIMARY KEY,
    product_id INT NOT NULL,
    sale_amount DECIMAL(10,2) NOT NULL,
    sale_date DATE NOT NULL
);

INSERT INTO sales (product_id, sale_amount, sale_date)
VALUES
    (1, 20.50, '2022-07-24'),
    (2, 35.99, '2022-08-24'),
    (1, 20.50, '2022-09-24'),
    (3, 15.00, '2023-01-24'),
    (1, 20.50, '2023-04-24');
...
```

You want to find the total sale amount for each product in the past year, but due to the large number of products and sales in your database, you know this is a costly query that could impact performance on your production system. This guide walks you through creating a read replica, connecting to it, running your query, and optionally deleting the read replica when finished.

**Tip**: Metabase analytics use case: [Metabase](https://www.metabase.com/) is an open-source business intelligence (BI) company that provides a platform for visualizing and analyzing data.
With Metabase and Neon, you can:

- Create a read replica in Neon
- Configure [Autoscaling](https://neon.com/docs/introduction/autoscaling) to define minimum and maximum limits for compute resources
- Configure [scale to zero](https://neon.com/docs/introduction/scale-to-zero) to define whether the read replica scales to zero when not being used
- Configure a connection to the read replica from Metabase

With this setup, your read replica only wakes up when Metabase connects, scales to meet sync job requirements without affecting your production database, and scales back to zero after the sync job is finished.

## Create a read replica

Creating a read replica involves adding a read replica compute to a branch.

**Note**: The Free plan is limited to a maximum of 3 read replica computes per project.

You can add a read replica compute to any branch in your Neon project by following these steps:

1. In the Neon Console, select **Branches**.
2. Select the branch where your database resides.
3. Click **Add Read Replica**.
4. On the **Add new compute** dialog, select **Read replica** as the **Compute type**.
5. Specify the **Compute size settings**. You can configure a fixed-size compute with a specific amount of vCPU and RAM (the default) or enable autoscaling by configuring a minimum and maximum compute size using the slider. You can also configure a **Scale to zero** setting, which determines whether a compute suspends after 5 minutes of inactivity.

   **Note**: The compute size configuration determines the processing power of your database.

6. When you finish making your selections, click **Create**.

Your read replica is provisioned and appears on the **Computes** tab of the **Branches** page. The following section describes how to connect to your read replica. Alternatively, you can create read replicas using the [Neon CLI](https://neon.com/docs/reference/cli-branches#create) or [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint), providing the flexibility required to integrate read replicas into your workflows or CI/CD processes.

Tab: CLI

```bash
neon branches add-compute mybranch --type read_only
```

Tab: API

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/late-bar-27572981/endpoints \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "endpoint": {
    "type": "read_only",
    "branch_id": "br-young-fire-15282225"
  }
}
' | jq
```

## Connect to the read replica

Connecting to a read replica is the same as connecting to any branch, except you connect via a read replica compute instead of your primary read-write compute. The following steps describe how to connect to your read replica with connection details obtained from the Neon Console.

1. Click the **Connect** button on your **Project Dashboard**. On the **Connect to your database** modal, select the branch, the database, and the role you want to connect with.
1. Under **Compute**, select the **Replica** compute.
1. Select a **Database** and the **Role** you want to connect with.
1. Copy the connection string. This is the information you need to connect to the read replica from your client or application.
The connection string appears similar to the following:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

If you expect a high number of connections, enable the **Connection pooling** toggle to add the `-pooler` flag to the connection string.

The information in your connection string corresponds to the following connection details:

- role: `alex`
- password: `AbC123dEf`
- hostname: `ep-cool-darkness-123456.us-east-2.aws.neon.tech`
- database name: `dbname`. Your database name may differ.

When you connect to a read replica, no write operations are permitted on the connection.

1. Connect from a client such as `psql`, or add the connection details to your application. For example, to connect using `psql`, issue the following command:

```bash
psql postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

## Run the analytics query on the read replica

An analytics query on your `sales` table might look something like this:

```sql
SELECT product_id, SUM(sale_amount) as total_sales
FROM sales
WHERE sale_date >= (CURRENT_DATE - INTERVAL '1 year')
GROUP BY product_id;
```

If you have a lot of products and sales, this query might impact performance on your production system, but running the query on your read replica, which has its own dedicated compute resources, causes no disruption.

## Delete the read replica

When you are finished running analytics queries, you can delete the read replica if it's no longer required. Deleting a read replica is a permanent action, but you can quickly create a new read replica when you need one.

**Tip**: Alternatively, you can let the read replica scale to zero so that it's readily available the next time you need it. Neon's [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature will suspend the compute until the next time you access it. Scale to zero occurs automatically after 5 minutes of inactivity.

To delete a read replica:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. On the **Computes** tab, find the read replica you want to delete.
1. Click **Edit** → **Delete compute**.
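If you create and remove replicas frequently, say for a nightly analytics job, you can script the whole lifecycle. The following is a sketch using Neon's TypeScript API client (`@neondatabase/api-client`); the method names mirror the Neon API operation IDs referenced earlier in this guide, and the project and branch IDs are placeholders:

```javascript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({ apiKey: process.env.NEON_API_KEY });

// Create a read replica compute on a branch (IDs are placeholders)
const { data } = await apiClient.createProjectEndpoint('your-project-id', {
  endpoint: { type: 'read_only', branch_id: 'br-your-branch-id' },
});
console.log('Replica host:', data.endpoint.host);

// ...run the analytics workload against the new host, then clean up
await apiClient.deleteProjectEndpoint('your-project-id', data.endpoint.id);
```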
---

# Source: https://neon.com/llms/guides-read-replica-guide.txt

# Create and manage Read Replicas

> The document details the process for creating and managing read replicas in Neon, enabling users to distribute read workloads across multiple database instances to enhance performance and scalability.

## Source

- [Create and manage Read Replicas HTML](https://neon.com/docs/guides/read-replica-guide): The original HTML version of this documentation

[Read replicas](https://neon.com/docs/introduction/read-replicas) are supported with all Neon plans. The Free plan is limited to a maximum of 3 read replica computes per project.

This guide steps you through the process of creating and managing read replicas. The general purpose of read replicas is to segregate read-only work from your production database operations. This can be applied to different use cases, such as:

- **Horizontal scaling**: Distributing read requests across replicas to improve performance and increase throughput
- **Analytics queries**: Offloading resource-intensive analytics and reporting workloads to reduce load on the primary compute
- **Read-only access**: Granting read-only access to users or applications that don't require write permissions

Regardless of the application, the steps for creating, configuring, and connecting to a read replica are the same. You can create one or more read replicas for any branch in your Neon project and configure the vCPU and memory allocated to each. Neon's _Autoscaling_ and _Scale to Zero_ features are also supported, providing you with control over read replica compute usage.

## Prerequisites

- A Neon account
- A [Neon project](https://neon.com/docs/manage/projects#create-a-project)

## Create a read replica

Creating a read replica involves adding a read replica compute to a branch. You can add a read replica compute to any branch in your Neon project using the Neon Console, [Neon CLI](https://neon.com/docs/reference/cli-branches#create), or [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint).

**Note**: The Free plan is limited to a maximum of 3 read replica computes per project.

Tab: Console

To create a read replica from the Neon Console:

1. In the Neon Console, select **Branches**.
2. Select the branch where your database resides.
3. Click **Add Read Replica**.
4. On the **Add new compute** dialog, select **Read replica** as the **Compute type**.
5. Specify the **Compute size settings**. You can configure a **Fixed Size** compute with a specific amount of vCPU and RAM (the default) or enable autoscaling by configuring a minimum and maximum compute size. You can also configure the **Suspend compute after inactivity** setting, which is the amount of idle time after which your compute is automatically suspended. The default setting is 5 minutes.

   **Note**: The compute size configuration determines the processing power of your database.

6. When you finish making your selections, click **Create**.

In a few seconds, your read replica is provisioned and appears on the **Computes** tab on the **Branches** page. The following section describes how to connect to your read replica.

Tab: CLI

To create a read replica using the Neon CLI, use the [branches](https://neon.com/docs/reference/cli-branches) command, specifying the `add-compute` subcommand with `--type read_only`. If you have more than one Neon project, also include the `--project-id` option.

```bash
neon branches add-compute mybranch --type read_only
```

Tab: API

To create a read replica compute using the Neon API, use the [Create endpoint](https://api-docs.neon.tech/reference/createprojectendpoint) method. The `type` attribute in the following example specifies `read_only`, which creates a read replica compute. For information about obtaining the required `project_id` and `branch_id` parameters, refer to [Create an endpoint](https://api-docs.neon.tech/reference/createprojectendpoint), in the _Neon API reference_.
```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects//endpoints \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "endpoint": {
    "type": "read_only",
    "branch_id": ""
  }
}
' | jq
```

## Connect to a read replica

Connecting to a read replica is the same as connecting to any branch, except you connect via a read replica compute instead of your primary read-write compute. The following steps describe how to connect to your read replica with connection details obtained from the Neon Console.

1. Click the **Connect** button on your **Project Dashboard**. On the **Connect to your database** modal, select the branch, the database, and the role you want to connect with.
1. Under **Compute**, select a **Replica**.
1. Select a connection string or a code example from the drop-down menu and copy it. This is the information you need to connect to the read replica from your client or application.

A **psql** connection string appears similar to the following:

```bash
postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require
```

If you expect a high number of connections, enable the **Connection pooling** toggle to add the `-pooler` flag to the connection string or example.

**Note**: Write operations are not permitted on a read replica connection.

## View read replicas

You can view read replicas using the Neon Console or [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint).

Tab: Console

To view read replicas for a branch, select **Branches** in the Neon Console, and select a branch. Read replicas are listed on the **Computes** tab.

Tab: API

To view read replica computes with the [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint), use the [Get endpoints](https://api-docs.neon.tech/reference/listprojectendpoints) method.

```bash
curl -X 'GET' \
  'https://console.neon.tech/api/v2/projects//endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

For information about obtaining the required `project_id` parameter for this command, refer to [Get endpoints](https://api-docs.neon.tech/reference/listprojectendpoints), in the _Neon API reference_. For information about obtaining a Neon API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key).

In the response body for this method, read replica computes are identified by the `type` value, which is `read_only`.

## Edit a read replica

You can edit a read replica using the Neon Console or [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) to change the [Compute size](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration) or [Scale to Zero](https://neon.com/docs/manage/computes#scale-to-zero-configuration) configuration.

Tab: Console

To edit a read replica compute using the Neon Console:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. Under **Computes**, identify the read replica compute you want to modify, and click **Edit**.
1. Make the changes to your compute settings, and click **Save**.

Tab: API

To edit a read replica compute with the Neon API, use the [Update endpoint](https://api-docs.neon.tech/reference/updateprojectendpoint) method.
```bash
curl --request PATCH \
     --url https://console.neon.tech/api/v2/projects//endpoints/ \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "endpoint": {
    "autoscaling_limit_min_cu": 0.25,
    "autoscaling_limit_max_cu": 3,
    "suspend_timeout_seconds": 604800,
    "provisioner": "k8s-neonvm"
  }
}
'
```

Computes are identified by their `project_id` and `endpoint_id`. For information about obtaining the required `project_id` and `endpoint_id` parameters, refer to [Update endpoint](https://api-docs.neon.tech/reference/updateprojectendpoint), in the _Neon API reference_. For information about obtaining a Neon API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key).

## Delete a read replica

You can delete a read replica using the Neon Console or [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Deleting a read replica is a permanent action, but you can quickly create a new read replica if you need one.

Tab: Console

To delete a read replica using the Neon Console:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. On the **Computes** tab, find the read replica you want to delete.
1. Click **Edit** → **Delete**.

Tab: API

To delete a read replica compute with the Neon API, use the [Delete endpoint](https://api-docs.neon.tech/reference/deleteprojectendpoint) method.

```bash
curl --request DELETE \
     --url https://console.neon.tech/api/v2/projects//endpoints/ \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY"
```

Computes are identified by their `project_id` and `endpoint_id`. For information about obtaining the required `project_id` and `endpoint_id` parameters, refer to [Delete endpoint](https://api-docs.neon.tech/reference/deleteprojectendpoint), in the _Neon API reference_. For information about obtaining a Neon API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key).

## Monitoring read replicas

You can monitor replication delay between the primary compute and your read replica computes from the **Monitoring** page in the Neon Console. Two graphs are provided:

**Replication delay bytes**

The **Replication delay bytes** graph shows the total size, in bytes, of the data that has been sent from the primary compute but has not yet been applied on the replica. A larger value indicates a higher backlog of data waiting to be replicated, which may suggest issues with replication throughput or resource availability on the replica. This graph is only visible when selecting a **Replica** compute from the **Compute** drop-down menu.

**Replication delay seconds**

The **Replication delay seconds** graph shows the time delay, in seconds, between the last transaction committed on the primary compute and the application of that transaction on the replica. A higher value suggests that the replica is behind the primary, potentially due to network latency, high replication load, or resource constraints on the replica. This graph is only visible when selecting a **Replica** compute from the **Compute** drop-down menu.
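Outside the Console, you can also approximate staleness from the replica itself. This is a sketch, not part of the Console workflow: it assumes the `pg` driver, an ES module context, and a `REPLICA_DATABASE_URL` environment variable; `pg_last_xact_replay_timestamp()` is a standard Postgres function that reports the commit time of the last replayed transaction.

```javascript
import pg from 'pg';

const replicaPool = new pg.Pool({ connectionString: process.env.REPLICA_DATABASE_URL });

// now() minus the last replayed commit time approximates replication delay;
// the result is NULL if nothing has been replayed yet
const { rows } = await replicaPool.query(
  'SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;'
);
console.log('Approximate replication delay:', rows[0].replication_delay);

await replicaPool.end();
```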
## Read replica compute setting synchronization

For Neon [read replicas](https://neon.com/docs/introduction/read-replicas), certain Postgres settings should not have lower values than your primary read-write compute. For this reason, the following settings on read replica computes are synchronized with the settings on the primary read-write compute when the read replica compute is started:

- `max_connections`
- `max_prepared_transactions`
- `max_locks_per_transaction`
- `max_wal_senders`
- `max_worker_processes`

No user action is required. The settings are synchronized automatically when you create a read replica. However, if you change the compute size configuration on the primary read-write compute, you will need to restart your read replica computes to ensure that settings remain synchronized, as described in the next section.

### Replication delay issues

If your read replicas are falling behind, follow these steps to diagnose and resolve the issue:

1. **Check your replication lag metrics**

   Refer to [Monitoring Read Replicas](https://neon.com/docs/guides/read-replica-guide#monitoring-read-replicas) for instructions on how to monitor replication lag.

2. **Verify configuration alignment**

   If replication lag is detected, ensure that the configurations for the primary and read-replica computes are aligned. Specifically, confirm that the following parameters match between your primary compute and read-replica compute:

   - `max_connections`
   - `max_prepared_transactions`
   - `max_locks_per_transaction`
   - `max_wal_senders`
   - `max_worker_processes`

3. **Restart read-replica computes if configurations are misaligned**

   If the configurations are not aligned, restart your read-replica computes to automatically update their settings. For instructions, see [Restart a Compute](https://neon.com/docs/manage/endpoints#restart-a-compute).

**Tip**: When increasing the size of your primary read-write compute, always restart associated read replicas to ensure their configurations remain aligned.

---

# Source: https://neon.com/llms/guides-read-replica-integrations.txt

# Scale your application with Read Replicas

> The document "Scale your application with Read Replicas" guides Neon users on integrating and utilizing read replicas to enhance application scalability and performance by distributing read queries across multiple database instances.

## Source

- [Scale your application with Read Replicas HTML](https://neon.com/docs/guides/read-replica-integrations): The original HTML version of this documentation

In Neon, a read replica is an independent read-only compute that performs read operations on the same data as your primary read-write compute, which means adding a read replica to a Neon project requires no additional storage.

**Note**: The Free plan is limited to a maximum of 3 read replica computes per project.

A key benefit of read replicas is that you can distribute read requests to one or more read replicas, enabling you to easily scale your applications and achieve higher throughput for both read-write and read-only workloads.

Many application frameworks offer built-in support for managing read replicas or multiple databases, making it easy to integrate Neon read replicas into an existing application. Below, we provide examples for popular frameworks and tools, but there are many others. Refer to your provider's documentation for specific details about integrating read replicas or multiple databases.

## Prisma

In Prisma, the read replicas extension, `@prisma/extension-read-replicas`, adds support for read replicas to Prisma Client.
You start by installing the extension:

```bash
npm install @prisma/extension-read-replicas
```

You can then initialize the extension by extending your Prisma Client instance and providing a connection string that points to your read replica in the `url` option of the extension:

```javascript
import { PrismaClient } from '@prisma/client';
import { readReplicas } from '@prisma/extension-read-replicas';

const prisma = new PrismaClient().$extends(
  readReplicas({
    url: process.env.DATABASE_URL_REPLICA,
  })
);

// Query is run against the database replica
await prisma.post.findMany();

// Query is run against the primary database
await prisma.post.create({
  data: {
    /** */
  },
});
```

All read operations, such as `findMany`, are executed against the read replica in the setup shown above. All write operations, such as `create` and `update`, as well as `$transaction` queries, are run against your primary compute. For more, including configuring multiple read replicas, refer to [Read Replicas](https://www.prisma.io/docs/orm/prisma-client/setup-and-configuration/read-replicas) in the Prisma documentation.

**Example**: For a full example, see [Use Read Replicas with Prisma](https://neon.com/docs/guides/read-replica-prisma).

## Drizzle ORM

With Drizzle ORM, you can leverage the `withReplicas()` function to direct `SELECT` queries to read replicas, and create, delete, and update operations to your primary compute, as shown in the following example:

```javascript
import { eq, sql } from 'drizzle-orm';
import { drizzle } from 'drizzle-orm/node-postgres';
import { boolean, jsonb, pgTable, serial, text, timestamp, withReplicas } from 'drizzle-orm/pg-core';

const usersTable = pgTable('users', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  verified: boolean('verified').notNull().default(false),
  jsonb: jsonb('jsonb').$type(),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
});

const primaryDb = drizzle("postgres://user:password@host:port/primary_db");
const read1 = drizzle("postgres://user:password@host:port/read_replica_1");
const read2 = drizzle("postgres://user:password@host:port/read_replica_2");

const db = withReplicas(primaryDb, [read1, read2]);
```

You can then use the `db` instance the same way you do already, and Drizzle will direct requests to read replicas and your primary compute automatically.

```javascript
// Read from either the read1 connection or the read2 connection
await db.select().from(usersTable);

// Use the primary compute for the delete operation
await db.delete(usersTable).where(eq(usersTable.id, 1));
```
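One caveat with automatic routing is read-your-writes: a `SELECT` issued immediately after a write may reach a replica that has not yet replayed that write. Per the Drizzle documentation, the instance returned by `withReplicas()` exposes a `$primary` property for forcing a read against the primary compute, using the `db` and `usersTable` from the example above:

```javascript
// Bypass the replicas when a read must reflect a just-completed write
const freshUsers = await db.$primary.select().from(usersTable);
```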
For more, refer to [Read Replicas](https://orm.drizzle.team/docs/read-replicas) in the Drizzle documentation.

**Example application**: For a full example, refer to this Neon community guide: [Scale your Next.js application with Drizzle ORM and Neon Postgres Read Replicas](https://neon.com/guides/read-replica-drizzle).

## Laravel

To scale your Laravel application with Neon read replicas, you can configure Laravel's database settings and use Eloquent ORM to route read operations to replicas and write operations to your primary compute. For example, in your `config/database.php`, you can configure read and write connection settings and then route traffic accordingly.

```php
'pgsql' => [
    'driver' => 'pgsql',
    'read' => [
        'host' => env('DB_READ_HOST'),
    ],
    'write' => [
        'host' => env('DB_WRITE_HOST'),
    ],
    'sticky' => true,
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'laravel'),
    'username' => env('DB_USERNAME', 'root'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => env('DB_CHARSET', 'utf8'),
    'prefix' => '',
    'prefix_indexes' => true,
    'search_path' => 'public',
    'sslmode' => 'prefer',
],
```

**Example application**: For a full setup, refer to this Neon community guide: [Scale your Laravel application with Neon Postgres Read Replicas](https://neon.com/guides/read-replica-laravel).

## Django

In Django, you can use the `DATABASES` setting to tell Django about the primary and read replica databases you'll be using:

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'your_database_name',
        'USER': 'your_username',
        'PASSWORD': 'your_password',
        'HOST': 'your_primary_host',
        'PORT': '5432',
    },
    'replica': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'your_database_name',
        'USER': 'your_username',
        'PASSWORD': 'your_password',
        'HOST': 'your_read_replica_host',
        'PORT': '5432',
    }
}

DATABASE_ROUTERS = ['notes.db_router.PrimaryReplicaRouter']
```

You can then use the `PrimaryReplicaRouter` class to define routing logic for read and write database operations.

```python
class PrimaryReplicaRouter:
    def db_for_read(self, model, **hints):
        return 'replica'

    def db_for_write(self, model, **hints):
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        return True

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        return True
```

For more, see [Multiple databases](https://docs.djangoproject.com/en/5.1/topics/db/multi-db/) in the Django documentation.

**Example application**: For a complete setup, refer to this Neon community guide: [Scale your Django application with Neon Postgres Read Replicas](https://neon.com/guides/read-replica-django).

## Entity Framework Core

To scale your .NET application with Neon read replicas, you can configure separate read and write contexts using Entity Framework Core's `DbContext` class.

```csharp
using Microsoft.EntityFrameworkCore;
using TodoApi.Models;

namespace TodoApi.Data
{
    public class TodoDbContext : DbContext
    {
        public TodoDbContext(DbContextOptions<TodoDbContext> options)
            : base(options) { }

        public DbSet<Todo> Todos => Set<Todo>();
    }

    public class TodoDbReadContext : DbContext
    {
        public TodoDbReadContext(DbContextOptions<TodoDbReadContext> options)
            : base(options) { }

        public DbSet<Todo> Todos => Set<Todo>();
    }
}
```

**Example application**: For a complete setup, refer to this Neon community guide: [Scale your .NET application with Entity Framework and Neon Postgres Read Replicas](https://neon.com/guides/read-replica-entity-framework).

---

# Source: https://neon.com/llms/guides-read-replica-prisma.txt

# Use Neon read replicas with Prisma

> The document explains how to configure and use Neon read replicas with Prisma, detailing the steps to set up read replicas for scaling read operations in a Neon database environment.

## Source

- [Use Neon read replicas with Prisma HTML](https://neon.com/docs/guides/read-replica-prisma): The original HTML version of this documentation

A Neon read replica is an independent read-only compute that performs read operations on the same data as your primary read-write compute, which means adding a read replica to a Neon project requires no additional storage.
A key benefit of read replicas is that you can distribute read requests to one or more read replicas, enabling you to easily scale your applications and achieve higher throughput for both read-write and read-only workloads. For more information about Neon's read replica feature, see [Read replicas](https://neon.com/docs/introduction/read-replicas).

In this guide, we'll show you how you can leverage Neon read replicas to efficiently scale Prisma applications using Prisma Client's read replica extension: [@prisma/extension-read-replicas](https://github.com/prisma/extension-read-replicas).

## Prerequisites

- An application that uses Prisma with a Neon database.

## Create a read replica

You can create read replicas for any branch in your Neon project.

**Note**: The Free plan is limited to a maximum of 3 read replica computes per project.

You can add a read replica by following these steps:

1. In the Neon Console, select **Branches**.
2. Select the branch where your database resides.
3. Click **Add Read Replica**.
4. On the **Add new compute** dialog, select **Read replica** as the **Compute type**.
5. Specify the **Compute size settings**. You can configure a **Fixed Size** compute with a specific amount of vCPU and RAM (the default) or enable autoscaling by configuring a minimum and maximum compute size. You can also configure the **Scale to zero** setting, which controls whether your read replica compute is automatically suspended after 5 minutes of inactivity.

   **Note**: The compute size configuration determines the processing power of your database. More vCPU and memory means more processing power but also higher compute costs. For information about compute costs, see [Billing metrics](https://neon.com/docs/introduction/billing).

6. When you finish making selections, click **Create**.

Your read replica compute is provisioned and appears on the **Computes** tab of the **Branches** page.

Alternatively, you can create read replicas using the [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint) or [Neon CLI](https://neon.com/docs/reference/cli-branches#create).

Tab: API

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/late-bar-27572981/endpoints \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "endpoint": {
    "type": "read_only",
    "branch_id": "br-young-fire-15282225"
  }
}
' | jq
```

Tab: CLI

```bash
neon branches add-compute mybranch --type read_only
```

## Retrieve the connection string for your read replica

Connecting to a read replica is the same as connecting to any branch in a Neon project, except you connect via a read replica compute instead of your primary read-write compute. The following steps describe how to retrieve the connection string (the URL) for a read replica from the Neon Console.

1. Click the **Connect** button on your **Project Dashboard**. On the **Connect to your database** modal, select the branch, the database, and the role you want to connect with.
1. Under **Compute**, select a **Replica** compute.
1. Select the connection string and copy it. This is the information you need to connect to the read replica from your Prisma Client.
The connection string appears similar to the following:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

If you expect a high number of connections, enable the **Connection pooling** toggle to add the `-pooler` flag to the connection string.

## Update your env file

In your `.env` file, set a `DATABASE_REPLICA_URL` environment variable to the connection string of your read replica. Your `.env` file should look something like this, with your regular `DATABASE_URL` and the newly added `DATABASE_REPLICA_URL`.

```text
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
DATABASE_REPLICA_URL="postgresql://alex:AbC123dEf@ep-damp-cell-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

Notice that the `endpoint_id` (`ep-damp-cell-123456`) for the read replica compute differs. The read replica is a different compute and therefore has a different `endpoint_id`.

## Configure Prisma Client to use a read replica

[@prisma/extension-read-replicas](https://github.com/prisma/extension-read-replicas) adds support to Prisma Client for read replicas. The following steps show you how to install the extension and configure it to use a Neon read replica.

1. Install the extension in your Prisma project:

   ```bash
   npm install @prisma/extension-read-replicas
   ```

2. Extend your Prisma Client instance by importing the extension and adding the `DATABASE_REPLICA_URL` environment variable as shown:

   ```javascript
   import { PrismaClient } from '@prisma/client';
   import { readReplicas } from '@prisma/extension-read-replicas';

   const prisma = new PrismaClient().$extends(
     readReplicas({
       url: process.env.DATABASE_REPLICA_URL,
     })
   );
   ```

   **Note**: You can also pass an array of read replica connection strings if you want to use multiple read replicas. Neon supports adding multiple read replicas to a database branch.

   ```javascript
   // lib/prisma.ts
   const prisma = new PrismaClient().$extends(
     readReplicas({
       url: [process.env.DATABASE_REPLICA_URL_1, process.env.DATABASE_REPLICA_URL_2],
     })
   );
   ```

When your application runs, read operations are sent to the read replica. If you specify multiple read replicas, a read replica is selected randomly. All write and `$transaction` queries are sent to the primary compute defined by `DATABASE_URL`, which is your read/write compute.

If you want to read from the primary compute and bypass read replicas, you can use the `$primary()` method on your extended Prisma Client instance:

```javascript
const posts = await prisma.$primary().post.findMany();
```

This Prisma Client query will be routed to your primary database.
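As an end-to-end illustration of how this routing plays out in a request handler, here is a hedged sketch: the Express-style handler and the `post` model's `views` field are illustrative, not part of this guide, and `prisma` is the extended client configured above.

```javascript
app.get('/posts/:id/view', async (req, res) => {
  const id = Number(req.params.id);

  // Write: routed to the primary compute
  await prisma.post.update({ where: { id }, data: { views: { increment: 1 } } });

  // Read: served by a replica, so it may briefly lag behind the write above.
  // Use prisma.$primary().post.findUnique(...) if the read must reflect the write.
  const post = await prisma.post.findUnique({ where: { id } });

  res.json(post);
});
```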
## Examples

This example demonstrates how to use the [@prisma/extension-read-replicas](https://github.com/prisma/extension-read-replicas) extension in Prisma Client. It uses a simple TypeScript script to read and write data in a Postgres database.

- [Prisma read replicas demo](https://github.com/prisma/read-replicas-demo): A TypeScript example showing how to use the @prisma/extension-read-replicas extension in Prisma Client

---

# Source: https://neon.com/llms/guides-redwoodsdk.txt

# Connect a RedwoodSDK application to Neon

> The document outlines the steps to connect a RedwoodSDK application to a Neon database, detailing configuration settings and code examples necessary for establishing a successful connection.

## Source

- [Connect a RedwoodSDK application to Neon HTML](https://neon.com/docs/guides/redwoodsdk): The original HTML version of this documentation

[RedwoodSDK](https://rwsdk.com/) is a framework for building full-stack applications on Cloudflare. This guide describes how to create a Neon project and access it from a RedwoodSDK application.

To create a Neon project and access it from a RedwoodSDK application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a RedwoodSDK project and add dependencies

1. Create a RedwoodSDK project if you do not have one. For instructions, see [RedwoodSDK Quickstart](https://docs.rwsdk.com/getting-started/quick-start/).

2. Navigate into your new project directory and install the RedwoodSDK dependencies:

   ```bash
   cd my-redwood-app
   npm install
   ```

3. Add project dependencies depending on the PostgreSQL driver you wish to use (`postgres.js` or `@neondatabase/serverless`):

Tab: postgres.js

```shell
npm install postgres
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection string to it. You can find your Neon database connection string by clicking the **Connect** button on your **Project Dashboard** to open the **Connect to your database** modal. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://:@.neon.tech:/?sslmode=require&channel_binding=require"
```

## Configure the Postgres client

In your RedwoodSDK application (e.g., in `src/app/pages/Home.tsx`), import the driver and use it within your route handlers. Here's how you can set up a simple route to query the database:

Tab: postgres.js

```typescript
import { RequestInfo } from "rwsdk/worker";
import postgres from 'postgres';
import { env } from "cloudflare:workers";

async function getData() {
  const sql = postgres(env.DATABASE_URL, { ssl: 'require' });
  const response = await sql`SELECT version()`;
  return response[0].version;
}

export async function Home({ ctx }: RequestInfo) {
  const data = await getData();
  return <>{data}</>;
}
```

Tab: Neon serverless driver

```typescript
import { RequestInfo } from "rwsdk/worker";
import { neon } from '@neondatabase/serverless';
import { env } from "cloudflare:workers";

async function getData() {
  const sql = neon(env.DATABASE_URL);
  const response = await sql`SELECT version()`;
  return response[0].version;
}

export async function Home({ ctx }: RequestInfo) {
  const data = await getData();
  return <>{data}</>;
}
```

## Run your RedwoodSDK application

Generate the required Wrangler types for RedwoodSDK to detect environment variables:

```bash
npx wrangler types
```

Start the development server:

```bash
npm run dev
```

Navigate to [localhost:5173](http://localhost:5173) in your browser. You should see a response similar to the following, indicating a successful connection to your Neon database:

```text
PostgreSQL 17.5 (6bc9ef8) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14+deb12u1) 12.2.0, 64-bit
```

> The specific version may vary depending on the PostgreSQL version of your Neon project.
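Beyond `SELECT version()`, parameterized queries work the same way in the Worker context. A sketch with the Neon serverless driver; the `plants` table and the `getPlantById` helper are illustrative, not part of the example app:

```javascript
import { neon } from '@neondatabase/serverless';
import { env } from 'cloudflare:workers';

// Tagged-template values are sent as Postgres bind parameters, not interpolated
export async function getPlantById(id) {
  const sql = neon(env.DATABASE_URL);
  const rows = await sql`SELECT * FROM plants WHERE id = ${id}`;
  return rows[0] ?? null;
}
```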
## Source code

You can find a sample RedwoodSDK application configured for Neon on GitHub:

- [Get started with RedwoodSDK and Neon](https://github.com/neondatabase/examples/tree/main/with-redwoodsdk)

## Resources

- [RedwoodSDK Documentation](https://docs.rwsdk.com/)
- [Connect to a PostgreSQL database with Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/postgres/)

---

# Source: https://neon.com/llms/guides-reflex.txt

# Build a Python App with Reflex and Neon

> The document outlines the process of building a Python application using Reflex and Neon, detailing steps for setting up the environment, integrating with Neon's database, and deploying the app.

## Source

- [Build a Python App with Reflex and Neon HTML](https://neon.com/docs/guides/reflex): The original HTML version of this documentation

[Reflex](https://reflex.dev/) is a Python web framework that allows you to build full-stack applications with Python. Using Reflex, you can build both the frontend and the backend in Python, with server-side logic managing the interaction between the frontend UI and application state. To make the application data-driven, you can connect to a Neon Postgres database.

To connect to Neon from a Reflex application:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

To create a Neon project:

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Set up a Reflex project

To set up a Reflex project, you need to install the Reflex CLI and create a new project.

### Create the project directory

Create a new directory for your Reflex project and navigate to it:

```bash
mkdir with_reflex
cd with_reflex
```

### Create a virtual environment

It's recommended to use a virtual environment to manage your project dependencies. In this example, `venv` is used to create a virtual environment. You can use any other virtual environment manager of your choice, such as `poetry`, `pipenv`, or `uv`. To create a virtual environment, run the following command in your project directory:

Tab: MacOS/Linux

```bash
python3 -m venv .venv
source .venv/bin/activate
```

Tab: Windows

```shell
py -3 -m venv .venv
.venv\Scripts\activate
```

### Install the required packages

Install Reflex, `python-dotenv` to manage environment variables, and `psycopg2-binary` to connect to Neon Postgres:

```bash
pip install reflex python-dotenv psycopg2-binary
```

To initialize the Reflex app, run the following command:

```bash
reflex init
```

When prompted, choose **A blank Reflex app** (option 1). You should see output similar to the following:

```bash
$ reflex init
──────────────────────────────────────────── Initializing with_reflex ─────────────────────────────────────────────
[07:20:37] Initializing the web directory.                                console.py:231

Get started with a template:
(0) Try our free AI builder.
(1) A blank Reflex app.
(2) Premade templates built by the Reflex team.
Which template would you like to use? (0): 1
[07:20:39] Initializing the app directory.                                console.py:231
Success: Initialized with_reflex using the blank template.
```

When the project is initialized, Reflex CLI creates a project directory.
This directory will contain the following files and directories: ``` with_reflex ├── .web ├── assets ├── with_reflex │ ├── __init__.py │ └── with_reflex.py └── rxconfig.py ``` The `rxconfig.py` file contains the project configuration settings. This is where the database connection settings will be defined. ## Configure Reflex connection settings Now that you have set up a Reflex project, you can configure the connection settings to connect to Neon. ### Create a .env file Create a `.env` file in the root of your project directory to store your Neon connection string. Add the following line to the `.env` file, replacing the placeholder values with your actual Neon connection details: ```dotenv DATABASE_URL="postgresql://:@.neon.tech:/?sslmode=require&channel_binding=require" ``` You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard** in the Neon Console. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ### Update the rxconfig.py file 1. Open the `rxconfig.py` file in the project directory. 2. Update the file to load the environment variables from the `.env` file and set the `db_url` parameter with the `DATABASE_URL` environment variable: ```python {1-2,5,13} import os from dotenv import load_dotenv import reflex as rx load_dotenv() config = rx.Config( app_name="with_reflex", plugins=[ rx.plugins.SitemapPlugin(), rx.plugins.TailwindV4Plugin(), ], db_url=os.environ.get("DATABASE_URL") ) ``` 3. Save the changes to the `rxconfig.py` file. Now, you can run the Reflex app and start building your Python full-stack application with Reflex and Neon. ## Creating a data model To create a data model in Reflex, you can define a Python class that represents the data structure. Reflex uses [sqlmodel](https://sqlmodel.tiangolo.com/) to provide a built-in ORM wrapping [SQLAlchemy](https://neon.com/docs/guides/sqlalchemy). Add the following code to `with_reflex/with_reflex.py` to create a `Customer` model: ```python {7-12} """Welcome to Reflex! This file outlines the steps to create a basic app.""" import reflex as rx from rxconfig import config class Customer(rx.Model, table=True): """The customer model.""" name: str email: str phone: str address: str class State(rx.State): """The app state.""" def index() -> rx.Component: # Welcome Page (Index) return rx.container( rx.color_mode.button(position="top-right"), rx.vstack( rx.heading("Welcome to Reflex!", size="9"), rx.text( "Get started by editing ", rx.code(f"{config.app_name}/{config.app_name}.py"), size="5", ), rx.link( rx.button("Check out our docs!"), href="https://reflex.dev/docs/getting-started/introduction/", is_external=True, ), spacing="5", justify="center", min_height="85vh", ), ) app = rx.App() app.add_page(index) ``` This code defines a `Customer` model with fields for `name`, `email`, `phone`, and `address`. The `table=True` argument tells Reflex to create a table in the database for this class. ### Generate the Alembic migration files Reflex uses [Alembic](https://alembic.sqlalchemy.org/en/latest/) to manage database migrations. 
To generate the Alembic migration files, run the following command in your project directory: ```bash reflex db init ``` ### Create and apply the migration Run the following command to create a new migration file that reflects the changes made to the data model: ```bash reflex db makemigrations --message 'create customer model' ``` After creating the migration file, apply the migration to the database by running: ```bash reflex db migrate ``` This command applies the migration to the database, updating the schema to match the model definition. You can verify that the `customer` table has been created in your Neon database by visiting the **Tables** section in the Neon Console. ## Create the Reflex app Update the `with_reflex/with_reflex.py` file to create a simple Customer Data App that allows you to add and view customer records. ```python """Welcome to Reflex! This file outlines the steps to create a basic app.""" import reflex as rx from rxconfig import config class Customer(rx.Model, table=True): """The customer model.""" name: str email: str phone: str address: str class State(rx.State): """The app state.""" # Form fields name: str = "" email: str = "" phone: str = "" address: str = "" # List of customers customers: list[Customer] = [] def load_customers(self): """Load all customers from the database.""" with rx.session() as session: self.customers = session.exec(Customer.select()).all() def add_customer(self): """Add a new customer to the database.""" if self.name and self.email: with rx.session() as session: customer = Customer( name=self.name, email=self.email, phone=self.phone, address=self.address, ) session.add(customer) session.commit() # Clear form fields self.name = "" self.email = "" self.phone = "" self.address = "" # Reload customers self.load_customers() def delete_customer(self, customer_id: int): """Delete a customer from the database.""" with rx.session() as session: customer = session.get(Customer, customer_id) if customer: session.delete(customer) session.commit() # Reload customers self.load_customers() def index() -> rx.Component: return rx.box( rx.color_mode.button(position="top-right"), rx.vstack( # Header rx.heading( "Customer Management", size="8", weight="bold", margin_bottom="2rem", ), # Add Customer Section rx.card( rx.vstack( rx.heading("➕ Add New Customer", size="5", weight="medium"), rx.grid( rx.input( placeholder="Name *", value=State.name, on_change=State.set_name, size="3", ), rx.input( placeholder="Email *", value=State.email, on_change=State.set_email, size="3", ), rx.input( placeholder="Phone", value=State.phone, on_change=State.set_phone, size="3", ), rx.input( placeholder="Address", value=State.address, on_change=State.set_address, size="3", ), columns="4", spacing="4", width="100%", ), rx.button( "Add Customer", on_click=State.add_customer, size="3", variant="solid", color_scheme="blue", width="auto", ), spacing="4", width="100%", ), size="3", width="100%", ), # Spreadsheet Table rx.card( rx.vstack( rx.heading( f"📊 Customer List ({State.customers.length()} total)", size="5", weight="medium", ), rx.box( rx.table.root( rx.table.header( rx.table.row( rx.table.column_header_cell("ID", width="80px"), rx.table.column_header_cell("Name"), rx.table.column_header_cell("Email"), rx.table.column_header_cell("Phone"), rx.table.column_header_cell("Address"), rx.table.column_header_cell( "Actions", width="120px" ), ), ), rx.table.body( rx.foreach( State.customers, lambda customer, index: rx.table.row( rx.table.cell( rx.badge( customer.id, color_scheme="gray", 
variant="soft", ), ), rx.table.cell( rx.text(customer.name, weight="medium"), ), rx.table.cell( rx.text(customer.email), ), rx.table.cell( rx.text(customer.phone), ), rx.table.cell( rx.text(customer.address), ), rx.table.cell( rx.button( rx.icon("trash-2", size=16), on_click=lambda: State.delete_customer( customer.id ), size="2", variant="soft", color_scheme="red", ), ), align="center", ), ), ), variant="surface", size="3", width="100%", ), width="100%", overflow_x="auto", ), spacing="4", width="100%", ), size="3", width="100%", ), spacing="6", width="100%", max_width="1400px", padding="2rem", ), width="100%", display="flex", justify_content="center", min_height="100vh", background="var(--gray-1)", on_mount=State.load_customers, ) app = rx.App() app.add_page(index) ``` The following features are included in this Customer Data App: - Add new customers with name, email, phone, and address. - View a list of all customers in a table format. - Delete customers from the list. ## Run the Reflex app To run the Reflex app, use the following command in your project directory: ```bash reflex run ``` This command starts the Reflex development server. You can access the app by navigating to `http://localhost:3000` in your web browser. You should see the Customer Data App interface, where you can add, view, and delete customer records stored in your Neon Postgres database. You can find the complete code for the Customer Data App mentioned in this guide on GitHub. - [Customer Data App](https://github.com/neondatabase/examples/tree/reflex/with_reflex): GitHub repository for the Reflex Customer Data App built with Neon Postgres --- # Source: https://neon.com/llms/guides-remix.txt # Connect a Remix application to Neon > This document guides users on integrating a Remix application with Neon by detailing the steps to configure the database connection and manage environment variables specific to Neon's infrastructure. ## Source - [Connect a Remix application to Neon HTML](https://neon.com/docs/guides/remix): The original HTML version of this documentation **Note**: Remix is now React Router v7. The features of the Remix framework have been merged into React Router v7. If you are starting a new project, we recommend using React Router. Follow our [React Router guide](https://neon.com/docs/guides/react-router) to connect to Neon. For more information, see the [Remix announcement](https://remix.run/blog/merging-remix-and-react-router). Remix is an open-source full stack JavaScript framework that lets you focus on building out the user interface using familiar web standards. This guide explains how to connect Remix with Neon using a secure server-side request. To create a Neon project and access it from a Remix application: ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Create a Remix project and add dependencies 1. Create a Remix project if you do not have one. For instructions, see [Quick Start](https://remix.run/docs/en/main/start/quickstart), in the Remix documentation. 2. 
Add project dependencies using one of the following commands: Tab: node-postgres ```shell npm install pg ``` Tab: postgres.js ```shell npm install postgres ``` Tab: Neon serverless driver ```shell npm install @neondatabase/serverless ``` ## Store your Neon credentials Add a `.env` file to your project directory and add your Neon connection string to it. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ```shell DATABASE_URL="postgresql://[user]:[password]@[hostname].neon.tech:[port]/[dbname]?sslmode=require&channel_binding=require" ``` ## Configure the Postgres client There are two parts to connecting a Remix application to Neon. The first is `db.server`. Remix will ensure any code added to this file won't be included in the client bundle. The second is the route where the connection to the database will be used. ### db.server Create a `db.server.ts` file at the root of your `/app` directory and add the following code snippet to connect to your Neon database: Tab: node-postgres ```javascript import pg from 'pg'; const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL, ssl: true, }); export { pool }; ``` Tab: postgres.js ```javascript import postgres from 'postgres'; const sql = postgres(process.env.DATABASE_URL, { ssl: 'require' }); export { sql }; ``` Tab: Neon serverless driver ```javascript import { neon } from '@neondatabase/serverless'; const sql = neon(process.env.DATABASE_URL); export { sql }; ``` ### route Create a new route in your `app/routes` directory and import the `db.server` file. Tab: node-postgres ```javascript import { pool } from '~/db.server'; import { useLoaderData } from '@remix-run/react'; export const loader = async () => { const client = await pool.connect(); try { const response = await client.query('SELECT version()'); return response.rows[0].version; } finally { client.release(); } }; export default function Page() { const data = useLoaderData(); return <>{data}</>; } ``` Tab: postgres.js ```javascript import { sql } from '~/db.server'; import { useLoaderData } from '@remix-run/react'; export const loader = async () => { const response = await sql`SELECT version()`; return response[0].version; }; export default function Page() { const data = useLoaderData(); return <>{data}</>; } ``` Tab: Neon serverless driver ```javascript import { sql } from '~/db.server'; import { useLoaderData } from '@remix-run/react'; export const loader = async () => { const response = await sql`SELECT version()`; return response[0].version; }; export default function Page() { const data = useLoaderData(); return <>{data}</>; } ``` ## Run the app When you run `npm run dev`, you can expect to see the following at [localhost:3000](http://localhost:3000): ```shell PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit ``` ## Source code You can find the source code for the application described in this guide on GitHub. - [Get started with Remix and Neon](https://github.com/neondatabase/examples/tree/main/with-remix) --- # Source: https://neon.com/llms/guides-render.txt # Use Neon Postgres with Render > The document outlines the steps for integrating Neon Postgres with Render, detailing the configuration process to connect a Neon database to a Render application.
## Source - [Use Neon Postgres with Render HTML](https://neon.com/docs/guides/render): The original HTML version of this documentation [Render](https://render.com) is a comprehensive cloud service that provides hosting for web applications and static sites, with PR previews, zero-downtime deployments, and more. Render supports full-stack applications, offering both web services and background workers. This guide shows how to deploy a simple Node.js application connected to a Neon Postgres database on Render. ## Prerequisites To follow along with this guide, you will need: - A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples. - A Render account. If you do not have one, sign up at [Render](https://render.com) to get started. - A GitHub account. Render integrates with GitHub for continuous deployment, so you'll need a GitHub account to host your application code. - [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally. ## Setting up your Neon database ### Initialize a new project Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section. - Click the `New Project` button to create a new project. - From your project dashboard, navigate to the `SQL Editor` from the sidebar, and run the following SQL command to create a new table in your database: ```sql CREATE TABLE books_to_read ( id SERIAL PRIMARY KEY, title TEXT, author TEXT ); ``` Next, we insert some sample data into the `books_to_read` table, so we can query it later: ```sql INSERT INTO books_to_read (title, author) VALUES ('The Way of Kings', 'Brandon Sanderson'), ('The Name of the Wind', 'Patrick Rothfuss'), ('Coders at Work', 'Peter Seibel'), ('1984', 'George Orwell'); ``` ### Retrieve your Neon database connection string You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Keep your connection string handy for later use. ## Implementing the Node.js application We'll create a simple Express application that connects to our Neon database and retrieves the sample data from the `books_to_read` table. Run the following commands in a terminal to set it up. ```bash mkdir neon-render-example && cd neon-render-example npm init -y && npm pkg set type="module" npm install express pg touch .env ``` We use the `npm pkg set type="module"` command to enable ES6 module support in our project. We also create a new `.env` file to store the `DATABASE_URL` environment variable, which we'll use to connect to our Neon database. Lastly, we install the `pg` library, which is the Postgres driver we use to connect to our database.
```bash # .env DATABASE_URL=NEON_DATABASE_CONNECTION_STRING ``` Now, create a new file named `index.js` and add the following code: ```javascript import express from 'express'; import pkg from 'pg'; const app = express(); const port = process.env.PORT || 3000; // Parse JSON bodies for this app app.use(express.json()); // Create a new pool using your Neon database connection string const { Pool } = pkg; const pool = new Pool({ connectionString: process.env.DATABASE_URL }); app.get('/', async (req, res) => { try { // Fetch books from your database using the postgres connection const { rows } = await pool.query('SELECT * FROM books_to_read;'); res.json(rows); } catch (error) { console.error('Failed to fetch books', error); res.status(500).json({ error: 'Internal Server Error' }); } }); // Start the server app.listen(port, () => { console.log(`Server running on http://localhost:${port}`); }); ``` This code sets up an Express server that listens for requests on port 3000. When a request is made to the root path (`/`), the server queries the `books_to_read` table in your Neon database and returns the results as JSON. We can test this application locally by running: ```bash node --env-file=.env index.js ``` Now, navigate to `http://localhost:3000/` in your browser to check that it returns the sample data from the `books_to_read` table. ## Push your application to GitHub To deploy your application to Render, you need to push your code to a GitHub repository. Create a new repository on GitHub by navigating to [GitHub - New Repo](https://github.com/new). You can then push your code to the new repository using the following commands: ```bash echo "node_modules/" > .gitignore && echo ".env" >> .gitignore echo "# neon-render-example" >> README.md git init && git add . && git commit -m "Initial commit" git branch -M main git remote add origin YOUR_GITHUB_REPO_URL git push -u origin main ``` You can visit the GitHub repository to verify that your code has been pushed successfully. ## Deploying to Render ### Create a New Web Service on Render Log in to your Render account and navigate to the dashboard. Click on the `New +` button and select "Web Service". Pick the option to `build and deploy` from a Git repository. Next, choose the GitHub repository hosting the Node.js application we created above. Configure your web service as follows: - **Environment**: Select "Node". - **Build Command**: Enter `npm install`. - **Start Command**: Enter `node index.js`. - **Environment Variables**: Add your Neon database connection string from earlier as an environment variable: - Name: `DATABASE_URL` - Value: `{NEON_DATABASE_CONNECTION_STRING}` Click "Create Web Service" to finish. Render will automatically deploy your application and redirect you to the service dashboard, showing the deployment progress and the logs. ### Verify Deployment Once the deployment completes, Render provides a public URL for accessing the web service. Visit the provided URL to verify that your application is running and can connect to your Neon database. Whenever you update your code and push it to your GitHub repository, Render will automatically build and deploy the changes to your web service. ## Removing Your Application and Neon Project To remove your application from Render, navigate to the dashboard, select `Settings` for the deployed application, and scroll down to find the "Delete Web Service" option.
To delete your Neon project, follow the steps outlined in the Neon documentation under [Delete a project](https://neon.com/docs/manage/projects#delete-a-project). ## Source code You can find the source code for the application described in this guide on GitHub. - [Use Neon Postgres with Render](https://github.com/neondatabase/examples/tree/main/deploy-with-render): Connect a Neon Postgres database to your Node application deployed with Render ## Resources - [Render platform](https://render.com/) - [Neon](https://neon.tech) --- # Source: https://neon.com/llms/guides-reset-from-parent.txt # Reset from parent > The "Reset from parent" documentation guides Neon users on how to reset a branch to its parent state, detailing the steps and commands necessary to revert changes and restore the branch's original data state. ## Source - [Reset from parent HTML](https://neon.com/docs/guides/reset-from-parent): The original HTML version of this documentation Neon's **Reset from parent** feature lets you instantly reset all databases on a branch to the latest schema and data from its parent branch, helping you recover from issues, start new feature development, or keep the different branches in your environment in sync. ## Example scenario When working with database branches, you might find yourself in a situation where you need to update your working branch to the latest data from your production branch. For example, let's say you have two child branches `staging` and `development` forked from your `production` branch. You have been working on the `development` branch and find it is now too far out of date with `production`. You have no schema changes in `development` to consider or preserve; you just want a quick refresh of the data. With the **Reset from parent** feature, you can perform a clean, instant reset to the latest data from the parent in a single operation, saving you the complication of manually creating and restoring branches. ## How Reset from parent works When you reset a branch to its parent, its data and schema are completely replaced with the latest data and schema from the parent. ### Key points - You can only reset a branch to the latest data from its parent. Point-in-time resets based on timestamp or LSN are possible using [Instant restore](https://neon.com/docs/introduction/branch-restore), a similar feature, with some differences: instant restore leaves a backup branch and is generally intended more for data recovery than development workflows. - This reset is a complete overwrite, not a refresh or a merge. Any local changes made to the child branch are lost during this reset. - Existing connections will be temporarily interrupted during the reset. However, your connection details _do not change_. All connections are re-established as soon as the reset is done. - Root branches (like your project's `production` branch or schema-only branches) cannot be reset because they have no parent branch to reset to. ### Branch expiration behavior When you reset a branch that has an expiration set, the expiration timer restarts from the reset time using the original duration. For example, if your branch was originally set to expire in 24 hours, resetting gives it another full 24 hours from the reset time. This process recalculates the new `expires_at` value using the preserved `ttl_interval_seconds`, but the TTL interval itself remains unchanged. For more details about branch expiration, see [branch expiration](https://neon.com/docs/guides/branch-expiration).
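To make the recalculation concrete, here is a minimal sketch of the arithmetic in TypeScript. The helper name is illustrative and not part of any Neon SDK; only the preserved TTL interval and the reset time are involved:

```typescript
// Minimal sketch of the expiry recalculation described above.
// `ttlIntervalSeconds` mirrors the preserved `ttl_interval_seconds` value;
// `expiresAtAfterReset` is a hypothetical helper, not a Neon SDK function.
function expiresAtAfterReset(resetTime: Date, ttlIntervalSeconds: number): Date {
  return new Date(resetTime.getTime() + ttlIntervalSeconds * 1000);
}

// A branch with a 24-hour TTL that is reset at noon on Jan 1 now expires at noon on Jan 2:
const newExpiry = expiresAtAfterReset(new Date('2025-01-01T12:00:00Z'), 24 * 60 * 60);
console.log(newExpiry.toISOString()); // "2025-01-02T12:00:00.000Z"
```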
## How to Reset from parent You can reset any branch to its parent using any of our tools. Tab: Console On the **Branches** page in the Neon Console, select the branch that you want to reset. The console opens to the details page for your branch, giving you key information about the branch and its child status: its parent, the last time it was reset, and other relevant details. To reset the branch, select **Reset from parent** from the **Actions** menu or the **Last data reset** panel. **Note**: If this branch has children of its own, resetting is blocked. The resulting error dialog lets you delete these child branches, after which you can continue with the reset. Tab: CLI Using the CLI, you can reset a branch from parent using the following command: ```bash neon branches reset <id|name> --parent ``` In the `id|name` field, specify the branch ID or name of the child branch whose data you want to reset. The `--parent` flag specifies that the branch should be reset to its parent. If you have multiple projects in your account, you'll also have to include the `project-id` in the command along with the branch. ```bash neon branches reset <id|name> --parent --project-id <project-id> ``` Example: ```bash neon branches reset development --parent --project-id noisy-pond-12345678 ``` Alternatively, you can set the `project-id` as a background context for your CLI session, letting you perform other actions against that project without having to include the `project-id` in every command. The setting is saved in a `context-file` and remains in place until you set a new context, or you remove the `context-file`. ```bash neon set-context --project-id <project-id> ``` Read more about performing branching actions from the CLI in [CLI - branches](https://neon.com/docs/reference/cli-branches), and more about setting contexts in [CLI - set-context](https://neon.com/docs/reference/cli-set-context). Tab: API To reset a branch to its parent using the API, use the [Restore branch](https://api-docs.neon.tech/reference/restoreprojectbranch) endpoint, specifying the parent branch ID as the `source_branch_id`: ```bash curl --request POST \ --url https://console.neon.tech/api/v2/projects/{NEON_PROJECT_ID}/branches/{BRANCH_ID}/restore \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data ' { "source_branch_id": "br-autumn-tree-a4a9k5g8" } ' ``` For details, see [Instant restore using the API](https://neon.com/docs/guides/branch-restore#how-to-use-branch-restore). ## Integrating branch resets in CI/CD workflows You can include resetting database branches as part of your CI/CD workflow, for example, when **starting a new feature** or **refreshing staging**. ### For new features Start feature development with a clean slate by resetting your development branch to align with staging or production (whichever is its parent). This replaces the branch's current state with the parent's latest data and schema. Use the command: ```bash neon branches reset dev-branch --parent ``` This strategy preserves a stable connection string for your development environment, while still ensuring every new feature begins with a fully updated and consistent environment. ### Refresh staging Reset **staging** to match its parent branch (i.e., **production**) for a reliable testing baseline. Automate staging updates with: ```bash neon branches reset staging --parent ``` This ensures staging accurately reflects the current production state for reliable testing.
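If you prefer not to install the Neon CLI in your CI environment, the same reset can be scripted against the restore endpoint shown in the API tab above. Here is a sketch in TypeScript (Node 18+ for the built-in `fetch`); `NEON_API_KEY`, `PROJECT_ID`, `BRANCH_ID`, and `PARENT_BRANCH_ID` are assumed to be supplied by your CI environment:

```typescript
// Sketch: reset a branch to its parent from a CI script via the Neon API.
// The endpoint and payload follow the curl example in the API tab above.
async function resetBranchToParent(): Promise<unknown> {
  const { NEON_API_KEY, PROJECT_ID, BRANCH_ID, PARENT_BRANCH_ID } = process.env;
  const res = await fetch(
    `https://console.neon.tech/api/v2/projects/${PROJECT_ID}/branches/${BRANCH_ID}/restore`,
    {
      method: 'POST',
      headers: {
        accept: 'application/json',
        authorization: `Bearer ${NEON_API_KEY}`,
        'content-type': 'application/json',
      },
      // The parent branch's ID is passed as the restore source.
      body: JSON.stringify({ source_branch_id: PARENT_BRANCH_ID }),
    }
  );
  if (!res.ok) throw new Error(`Reset failed: ${res.status} ${await res.text()}`);
  return res.json();
}

resetBranchToParent().then(() => console.log('Branch reset to parent'));
```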
--- # Source: https://neon.com/llms/guides-rls-drizzle.txt # Simplify RLS with Drizzle > The document "Simplify RLS with Drizzle" guides Neon users on implementing Row-Level Security (RLS) using the Drizzle ORM, detailing setup and configuration processes specific to Neon's database environment. ## Source - [Simplify RLS with Drizzle HTML](https://neon.com/docs/guides/rls-drizzle): The original HTML version of this documentation What you'll learn: - How to simplify Row-Level Security using `crudPolicy` - Common RLS patterns with Drizzle - How to use custom Postgres roles with your policies - How to use Drizzle RLS with the Data API - How to use Drizzle RLS with the serverless driver Related docs: - [Row-Level Security with Neon](https://neon.com/docs/guides/row-level-security) - [Data API](https://neon.com/docs/data-api/get-started) - [RLS in Drizzle](https://orm.drizzle.team/docs/rls) Row-Level Security (RLS) is an important last line of defense for protecting your data at the database level. It ensures that users can only access the data they are permitted to see. However, implementing RLS requires writing and maintaining separate SQL policies for each CRUD operation (Create, Read, Update, Delete), which can be both tedious and error-prone. Drizzle ORM provides a declarative way to manage these policies directly within your database schema, making your security rules easier to write, review, and maintain. ## Understanding Neon's auth functions The code samples on this page use the `auth.user_id()` function provided by the [Data API](https://neon.com/docs/data-api/get-started). This function automatically extracts user information from JWT claims and makes it available in your RLS policies: ```typescript // In your RLS policy using: sql`(select auth.user_id() = ${table.userId})`, ``` When exposing your database directly to clients (such as through the Data API), RLS policies are essential to keep your data secure. We recommend using Drizzle to **declare your RLS policies** because they're easier to maintain than raw SQL. Once you define policies in your Drizzle schema and run migrations, they're created in your Postgres database and enforced for all queries. ## Example schema Below is a sample schema for a basic todo application. This example demonstrates how you would define the table structure and manually create Row-Level Security (RLS) policies for each CRUD operation using plain SQL. ```sql CREATE TABLE IF NOT EXISTS "todos" ( "id" bigint PRIMARY KEY, "user_id" text DEFAULT (auth.user_id()) NOT NULL, "task" text NOT NULL, "is_complete" boolean DEFAULT false NOT NULL ); -- This boilerplate SQL code is required for every table you want to secure ALTER TABLE "todos" ENABLE ROW LEVEL SECURITY; CREATE POLICY "create todos" ON "todos" FOR INSERT TO "authenticated" WITH CHECK ((select auth.user_id()) = user_id); CREATE POLICY "view todos" ON "todos" FOR SELECT TO "authenticated" USING ((select auth.user_id()) = user_id); CREATE POLICY "update todos" ON "todos" FOR UPDATE TO "authenticated" USING ((select auth.user_id()) = user_id) WITH CHECK ((select auth.user_id()) = user_id); CREATE POLICY "delete todos" ON "todos" FOR DELETE TO "authenticated" USING ((select auth.user_id()) = user_id); ``` These SQL policies guarantee that authenticated users can only create, view, update, or delete todo items they own, that is, when `auth.user_id()` matches the `user_id` column for a given row. This enforces strict, per-user access control at the database level.
In these RLS policies, the `USING` clause defines the condition under which a row is accessible (readable) by a user, while the `WITH CHECK` clause enforces the condition required for inserting or updating a row. Together, these clauses provide precise, row-level access control to your data. While this approach is secure and explicit, it can quickly become repetitive and hard to maintain as your application grows and you introduce more tables or roles. Drizzle's declarative `crudPolicy` and `pgPolicy` helpers eliminate this boilerplate, letting you define and manage your security logic directly in your Drizzle schema for better maintainability. ## Simplifying RLS with crudPolicy Drizzle provides a convenient `crudPolicy` helper to simplify the creation of RLS policies. With `crudPolicy`, you can achieve the same result declaratively. For example: ```typescript {17-21} import { pgTable, text, bigint, boolean } from 'drizzle-orm/pg-core'; import { crudPolicy, authenticatedRole, authUid } from 'drizzle-orm/neon'; import { sql } from 'drizzle-orm'; export const todos = pgTable( 'todos', { id: bigint('id', { mode: 'number' }).primaryKey(), userId: text('user_id') .notNull() .default(sql`(auth.user_id())`), task: text('task').notNull(), isComplete: boolean('is_complete').notNull().default(false), }, (table) => [ // Apply RLS policies for the 'authenticated' role crudPolicy({ role: authenticatedRole, read: authUid(table.userId), // Users can only read their own todos modify: authUid(table.userId), // Users can only create, update, or delete their own todos }), ] ); ``` **Note**: **About Drizzle's role:** Drizzle is used here to **declare your RLS policies** in TypeScript. When you run migrations, these policies are created in your Postgres database. After that, the policies are enforced regardless of how you query your data—via the Data API, the serverless driver, or any other connection method. ### Configuration parameters The `crudPolicy` function from `drizzle-orm/neon` is a high-level helper that declaratively generates Row-Level Security (RLS) policies for your tables. It accepts the following parameters: - **`role`**: The Postgres role or array of roles the policy applies to. Neon provides `authenticatedRole` and `anonymousRole` out of the box, but you can also use custom roles. - **`read`**: Controls access to `SELECT` operations. Accepts: - `true` to allow all reads for the role - `false` to deny all reads - a custom SQL expression for fine-grained access (e.g., `authUid(table.userId)`) - `null` to skip generating a `SELECT` policy - **`modify`**: Controls access to `INSERT`, `UPDATE`, and `DELETE` operations. Accepts: - `true` to allow all modifications - `false` to deny all modifications - a custom SQL expression for conditional access (e.g., `authUid(table.userId)`) - `null` to skip generating policies for these operations The `crudPolicy` helper generates an array of RLS policy definitions for all CRUD operations (select, insert, update, delete) based on these parameters. For most use cases, this lets you express common access patterns with minimal boilerplate. > The `authUid(column)` helper generates the SQL condition `(select auth.user_id() = column)`, which is used to restrict access to rows owned by the current user for use in `read` and `modify` policies. ### Advanced usage: Finer-grained control with `pgPolicy` While `crudPolicy` is ideal for scenarios where a role has the same permissions for reading and modifying data, there are cases where you need more granular control. 
For these situations, you can use Drizzle's `pgPolicy` function, which provides the flexibility to define custom policies for each operation. Using `pgPolicy` is ideal when you need to: - Define different logic for `INSERT` vs. `UPDATE` operations. - Create a policy for a single command, like `DELETE` only. - Implement complex conditions where the `USING` and `WITH CHECK` clauses differ significantly. For example, you might want to allow only users with an `admin` role to update or delete rows in a table, while regular users can insert new rows and view only their own data. This kind of scenario where different roles have different permissions for each operation is easy to express using `pgPolicy`, giving you fine-grained control over who can perform which actions on your data. #### Replicating `crudPolicy` with `pgPolicy` To understand how `pgPolicy` works, let's rewrite the `todos` example using it. The following four `pgPolicy` definitions are exactly what `crudPolicy` would generate from your simpler configuration. ```typescript {18-22,25-29,32-37,40-44} import { pgTable, text, bigint, boolean, pgPolicy } from 'drizzle-orm/pg-core'; import { authenticatedRole, authUid } from 'drizzle-orm/neon'; import { sql } from 'drizzle-orm'; export const todos = pgTable( 'todos', { id: bigint('id', { mode: 'number' }).primaryKey(), userId: text('user_id') .notNull() .default(sql`(auth.user_id())`), task: text('task').notNull(), isComplete: boolean('is_complete').notNull().default(false), }, (table) => { return [ // Policy for viewing (SELECT) todos pgPolicy('view todos', { for: 'select', to: authenticatedRole, using: authUid(table.userId), // users can only read their own todos }), // Policy for creating (INSERT) todos pgPolicy('create todos', { for: 'insert', to: authenticatedRole, withCheck: authUid(table.userId), // users can only create their own todos }), // Policy for updating (UPDATE) todos pgPolicy('update todos', { for: 'update', to: authenticatedRole, using: authUid(table.userId), // users can only update their own todos withCheck: authUid(table.userId), // users can only update their own todos }), // Policy for deleting (DELETE) todos pgPolicy('delete todos', { for: 'delete', to: authenticatedRole, using: authUid(table.userId), // users can only delete their own todos }), ]; } ); ``` You can apply this approach to additional tables and operations, allowing you to define increasingly sophisticated and tailored security policies as your application's requirements evolve. #### Example: Time limited updates Here is how you can implement a rule that `crudPolicy` can't handle alone: **A user can update their todo, but only within 24 hours of creating it.** They should still be able to view and delete it anytime. This requires a different `WITH CHECK` condition for `UPDATE` than the `USING` condition. 
```typescript {17,20,24-28,31-35,38-42,45-50} import { pgTable, text, bigint, timestamp, pgPolicy, boolean } from 'drizzle-orm/pg-core'; import { authenticatedRole } from 'drizzle-orm/neon'; import { sql } from 'drizzle-orm'; export const todos = pgTable( 'todos', { id: bigint('id', { mode: 'number' }).primaryKey(), userId: text('user_id') .notNull() .default(sql`(auth.user_id())`), task: text('task').notNull(), isComplete: boolean('is_complete').notNull().default(false), createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(), }, (table) => { const userOwnsTodo = sql`(select auth.user_id() = ${table.userId})`; // Condition for updates: user must own the todo AND it must be less than 24 hours old. const canUpdateTodo = sql`(${userOwnsTodo} and ${table.createdAt} > now() - interval '24 hours')`; return [ // View policy remains the same. pgPolicy('view todos', { for: 'select', to: authenticatedRole, using: userOwnsTodo, }), // Insert policy also remains the same. pgPolicy('create todos', { for: 'insert', to: authenticatedRole, withCheck: userOwnsTodo, }), // Delete policy remains the same. pgPolicy('delete todos', { for: 'delete', to: authenticatedRole, using: userOwnsTodo, }), // The update policy now has a different, stricter WITH CHECK condition. pgPolicy('update todos (time-limited)', { for: 'update', to: authenticatedRole, using: userOwnsTodo, // User must own the row to even attempt an update. withCheck: canUpdateTodo, // The updated row must satisfy this stricter condition. }), ]; } ); ``` This example demonstrates how `pgPolicy` gives you precise, command-level control over your security rules, making it easy to implement complex business logic directly in your database schema. ### Securing database views with RLS Row-Level Security (RLS) can also be enabled on Postgres views, allowing you to control access to view data at the row level. For details on how to enable RLS on views and apply policies using Drizzle, refer to the [Drizzle documentation](https://orm.drizzle.team/docs/rls#rls-on-views). This approach makes it possible to expose curated or joined subsets of your data while ensuring users only see the rows they are authorized to access. ## Common RLS patterns Using `crudPolicy` and `pgPolicy`, you can implement a variety of security models. Here are some of the most common ones: ### User-Owned Data This is the most common RLS pattern, where each user can access only the records they own. It's ideal for applications such as personal to-do lists, user profile settings, or any scenario where users should have full control over their own data and no visibility into others' information. As demonstrated in the todos example above, this approach ensures strict data isolation and privacy. 
A typical `crudPolicy` and a `pgPolicy` for this scenario might look like: Tab: Drizzle (crudPolicy) ```typescript [ crudPolicy({ role: authenticatedRole, read: authUid(table.userId), modify: authUid(table.userId), }), ]; ``` Tab: Drizzle (pgPolicy) ```typescript [ pgPolicy('view todos', { for: 'select', to: authenticatedRole, using: authUid(table.userId), }), pgPolicy('create todos', { for: 'insert', to: authenticatedRole, withCheck: authUid(table.userId), }), pgPolicy('update todos', { for: 'update', to: authenticatedRole, using: authUid(table.userId), withCheck: authUid(table.userId), }), pgPolicy('delete todos', { for: 'delete', to: authenticatedRole, using: authUid(table.userId), }), ]; ``` ### Role-based access control Assign different permissions to anonymous users and authenticated users. For example, in a blog application, anyone can read posts, but only authenticated users can modify their own content. This setup uses separate policies for the `anonymousRole` (public read access) and the `authenticatedRole` (user-specific modifications), making it ideal for applications that distinguish between public and logged-in user actions. A typical Drizzle schema with `pgPolicy` for this scenario might look like: ```typescript {17-37,40-60} import { sql } from 'drizzle-orm'; import { authenticatedRole, authUid, anonymousRole } from 'drizzle-orm/neon'; import { bigint, boolean, pgPolicy, pgTable, text } from 'drizzle-orm/pg-core'; export const posts = pgTable( 'posts', { id: bigint({ mode: 'number' }).primaryKey(), userId: text() .notNull() .default(sql`(auth.user_id())`), content: text().notNull(), published: boolean().notNull().default(false), }, (table) => [ // Anonymous users pgPolicy('Allow anonymous users to read any post', { to: anonymousRole, for: 'select', using: sql`true`, }), pgPolicy('Deny anonymous users from inserting posts', { to: anonymousRole, for: 'insert', withCheck: sql`false`, }), pgPolicy('Deny anonymous users from updating posts', { to: anonymousRole, for: 'update', withCheck: sql`false`, using: sql`false`, }), pgPolicy('Deny anonymous users from deleting posts', { to: anonymousRole, for: 'delete', using: sql`false`, }), // Authenticated users pgPolicy('Allow authenticated users to read any post', { to: authenticatedRole, for: 'select', using: sql`true`, }), pgPolicy('Allow authenticated users to insert their own posts', { to: authenticatedRole, for: 'insert', withCheck: authUid(table.userId), }), pgPolicy('Allow authenticated users to update their own posts', { to: authenticatedRole, for: 'update', using: authUid(table.userId), withCheck: authUid(table.userId), }), pgPolicy('Allow authenticated users to delete their own posts', { to: authenticatedRole, for: 'delete', using: authUid(table.userId), }), ] ); ``` ### Complex relationships & shared data Secure data based on relationships in other tables, such as allowing access to a shared document only if the user is part of a specific group or project. This often involves more complex SQL queries and may require additional metadata to be stored alongside your main data. This is where Drizzle really helps: expressing these relationship-based policies declaratively in your schema is much less error-prone and far easier to maintain than writing raw SQL policies by hand. For example, suppose you have a `notes` table and a `paragraphs` table that contains the text of each note.
You want to ensure that users can only access paragraphs from notes they own or that are shared with them. ```typescript {17-21,23-27,40-44,46-50} import { sql } from 'drizzle-orm'; import { crudPolicy, authenticatedRole, authUid } from 'drizzle-orm/neon'; import { boolean, pgPolicy, pgTable, text, uuid } from 'drizzle-orm/pg-core'; export const notes = pgTable( 'notes', { id: uuid('id').defaultRandom().primaryKey(), ownerId: text('owner_id') .notNull() .default(sql`auth.user_id()`), title: text('title').notNull().default('untitled note'), shared: boolean('shared').default(false), }, (table) => [ // Users can only access their own notes crudPolicy({ role: authenticatedRole, read: authUid(table.ownerId), modify: authUid(table.ownerId), }), // Shared notes are visible to authenticated users pgPolicy('shared_policy', { for: 'select', to: authenticatedRole, using: sql`${table.shared} = true`, }), ] ); export const paragraphs = pgTable( 'paragraphs', { id: uuid('id').defaultRandom().primaryKey(), noteId: uuid('note_id').references(() => notes.id), content: text('content').notNull(), }, (table) => [ // Users can only access paragraphs from their own notes crudPolicy({ role: authenticatedRole, read: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, modify: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, }), // Shared note paragraphs are visible to authenticated users pgPolicy('shared_policy', { for: 'select', to: authenticatedRole, using: sql`(select notes.shared from notes where notes.id = ${table.noteId})`, }), ] ); ``` In this example: - Users can only access paragraphs from notes they own or that are shared with them. - Shared paragraphs are visible to authenticated users. This pattern can be adapted for other relationship-based access controls, such as project teams, organization memberships, or shared resources. ### Using Custom Roles with Drizzle RLS Custom roles are essential when your application requires more nuanced access control than what default roles (like `authenticated` or `anonymous`) provide. By defining custom roles, you can assign specific permissions to different user groups, such as moderators, editors, or admins, tailoring access to fit your business logic and security needs. For example, in a blog application, you might define an `editor` role that can update or delete any post, while regular users can only modify their own posts. This approach lets you implement granular access control by assigning permissions based on each role's responsibilities.
Here's how you can define custom roles and apply policies in Drizzle: ```typescript {5,19-23,25-29} import { sql } from 'drizzle-orm'; import { authenticatedRole, authUid, crudPolicy } from 'drizzle-orm/neon'; import { bigint, boolean, pgRole, pgTable, text } from 'drizzle-orm/pg-core'; export const editorRole = pgRole('editor'); export const posts = pgTable( 'posts', { id: bigint({ mode: 'number' }).primaryKey(), userId: text() .notNull() .default(sql`(auth.user_id())`), content: text().notNull(), published: boolean().notNull().default(false), }, (table) => [ // Editors: full access crudPolicy({ role: editorRole, read: true, // Editors can read all posts modify: true, // Editors can modify all posts }), // Authenticated users (authors): can only modify their own posts crudPolicy({ role: authenticatedRole, read: true, // Can read all posts modify: authUid(table.userId), // Can only modify their own posts }), ] ); ``` **Important**: While Drizzle RLS policies define row-level access, you must also grant the necessary table privileges to the `editor` role directly in Postgres. Drizzle does not manage these privileges for you. Make sure to follow the instructions in [Granting Permissions to Postgres Roles](https://neon.com/docs/guides/rls-drizzle#granting-permissions-to-postgres-roles) to ensure the `editor` role has the required access. This approach lets you easily combine multiple roles with different permissions in your schema, keeping your access logic clear and maintainable. ## Executing authenticated queries After defining RLS policies in your Drizzle schema and running migrations, you need to execute queries with proper authentication. ### Using the Data API If you're building a frontend application, the [Data API](https://neon.com/docs/data-api/get-started) provides a REST API for querying your database. In this case, Drizzle is used only to **declare your RLS policies**; you won't use Drizzle's query builder for executing queries. Instead, you'll use a PostgREST-compatible client like `postgrest-js`. Your RLS policies (defined with Drizzle) automatically enforce security at the database level when queries come through the Data API. For complete examples of using Drizzle RLS with the Data API, see: - [Data API tutorial](https://neon.com/docs/data-api/demo) - Full note-taking app example - [Data API getting started](https://neon.com/docs/data-api/get-started) - Setup and basic queries ### Using Drizzle with the serverless driver For backend APIs where you want to use Drizzle's query builder with RLS, you can use the `drizzle-orm/neon-serverless` driver with JWT verification in transactions. **Note**: The RLS policies in this example use `auth.user_id()`, which requires the Data API to be enabled. This is a hybrid approach: frontend queries use the Data API while backend operations use the serverless driver, both enforcing the same RLS policies. ```typescript import { drizzle } from 'drizzle-orm/neon-serverless'; import { Pool } from '@neondatabase/serverless'; import { todos } from './schema'; // Your Drizzle schema import { sql } from 'drizzle-orm'; // Example JWT verification (implement based on your auth provider) async function verifyJWT(token: string, jwksUrl: string) { // Your verification logic here // This should return the decoded payload return { payload: { sub: 'user-id', email: 'user@example.com' } }; } async function getTodosForUser(jwtToken: string) { const pool = new Pool({ connectionString: process.env.DATABASE_URL!
}); const db = drizzle(pool); try { // Verify JWT const { payload } = await verifyJWT(jwtToken, process.env.JWKS_URL!); const claims = JSON.stringify(payload); // Use Drizzle transaction to set auth and query const result = await db.transaction(async (tx) => { // Set JWT claims in the session await tx.execute(sql`SELECT set_config('request.jwt.claims', ${claims}, true)`); // Now execute your Drizzle query - RLS policies will enforce access return await tx.select().from(todos); }); return result; } finally { await pool.end(); } } ``` **Pattern breakdown:** 1. **Verify the JWT** using your authentication provider's method 2. **Set the claims** in the database session using `set_config()` within a transaction 3. **Execute Drizzle queries** in the same transaction - RLS policies use `auth.user_id()` to enforce access **Important**: When using this pattern, ensure your database connection string uses a role that does **not** have the `BYPASSRLS` attribute. Avoid using the `neondb_owner` role, as it bypasses Row-Level Security policies. ## Example applications To see these concepts in action, check out these sample applications: - **[Data API Demo](https://github.com/neondatabase-labs/neon-data-api-neon-auth)**: A note-taking app demonstrating `crudPolicy` with Neon's Data API. --- # Source: https://neon.com/llms/guides-rls-tutorial.txt # Secure your app with RLS > The document "Secure your app with RLS" guides Neon users on implementing Row-Level Security (RLS) to enhance data access control within their applications, detailing steps for configuring RLS policies in a Neon database environment. ## Source - [Secure your app with RLS HTML](https://neon.com/docs/guides/rls-tutorial): The original HTML version of this documentation Sample project: - [Neon Data API + Neon Auth](https://github.com/neondatabase-labs/neon-data-api-neon-auth) Related docs: - [Row-Level security in Drizzle](https://orm.drizzle.team/docs/rls) In this tutorial, you'll clone and modify a sample React.js note-taking app to demonstrate how Postgres Row-Level Security (RLS) provides an additional security layer beyond application logic. The app integrates with a Neon database via the Neon Data API. For authentication, **Neon Auth** issues a unique `userId` in a JSON Web Token (JWT) for each user. This `userId` is passed to Postgres, where RLS policies enforce access control directly at the database level. This setup ensures each user can only interact with their own **notes**, even if application-side logic fails. While this example uses Neon Auth, any JWT-issuing provider like Auth0 or Clerk can be used. ## Prerequisites To get started, you'll need: - **Neon account**: Sign up at [Neon](https://neon.tech) and create your first project in **AWS** (note: [Azure](https://neon.com/docs/guides/neon-rls#current-limitations) regions are not currently supported). - **Neon Data API + Neon Auth example application**: Clone the sample [Neon Data API + Neon Auth repository](https://github.com/neondatabase-labs/neon-data-api-neon-auth): ```bash git clone https://github.com/neondatabase-labs/neon-data-api-neon-auth.git ``` Follow the instructions in the README to set up Neon Data API with Neon Auth, configure environment variables, and run database migrations. > When enabling Neon Data API, ensure you select **Neon Auth** with Neon Data API. ## Create test users Start the sample application: ```bash npm run dev ``` Open the app in your browser using [`localhost:5173`](http://localhost:5173).
Now, let's create the two users we'll use to show how RLS policies can prevent data leaks between users, and what can go wrong if you don't. The sample app supports Google and GitHub logins, so let's create one of each. For this guide, we'll call our two users Alice and Bob. Create your `Alice` user using Google. Then, using a private browser session, create your `Bob` user account using GitHub or another Google account. Side by side, here's the empty state for both users. When each user creates a note, it's securely linked to their `ownerId` in the database schema. Here's the structure of the `notes` table: ```typescript { id: uuid("id").defaultRandom().primaryKey(), ownerId: text("owner_id") .notNull() .default(sql`auth.user_id()`), title: text("title").notNull().default("untitled note"), createdAt: timestamp("created_at", { withTimezone: true }).defaultNow(), updatedAt: timestamp("updated_at", { withTimezone: true }).defaultNow(), shared: boolean("shared").default(false), } ``` The `ownerId` column is populated directly from the authenticated user's ID (`auth.user_id()`) in the JWT, ensuring that each note is tied to the correct user. ## Create notes Let's create some sample notes for both Alice and Bob. > The notes act as a top-level container for paragraphs. Each paragraph is stored in the `paragraphs` table, linked to the parent note by `noteId`. ### Notes are isolated In this sample app, isolation of notes to each user is handled both in the application logic and using Row-Level Security (RLS) policies defined in our application's schema file. Let's take a look at the `useNotes` function in the `src/routes/index.tsx` file: ```typescript function useNotes() { const postgrest = usePostgrest(); const user = useUser({ or: 'redirect' }); return useQuery({ queryKey: ['notes'], queryFn: async (): Promise<Array<Note>> => { // `eq` filter is optional because of RLS. But we send it anyway for // performance reasons. const { data, error } = await postgrest .from('notes') .select('id, title, created_at, owner_id, shared') .eq('owner_id', user.id) .order('created_at', { ascending: false }); if (error) { throw error; } return data; }, }); } ``` The `eq` clause is technically optional. Neon gets `user.id` from the Neon Auth JWT and matches it to the `owner_id` column in the `notes` table, so each user can only see their own notes. Even though isolation is backed by our RLS policies, we include the filter here for performance reasons: it helps Postgres build a better query plan and use indexes where possible. ### RLS policy for viewing notes In the application's `schema.ts` file, you can find the RLS policies written in Drizzle that provide access control at the database level. Here is a look at one of those policies: ```typescript crudPolicy({ role: authenticatedRole, read: authUid(table.ownerId), modify: authUid(table.ownerId), }); ``` `authUid` is a helper function that evaluates to ``` sql`(select auth.user_id() = owner_id)` ``` which is a SQL expression that checks if the `owner_id` of a row matches the `auth.user_id()` from the JWT. This policy ensures that read (`SELECT`) queries only return rows where the `owner_id` matches the `auth.user_id()` derived from the authenticated user's JWT. This means that users can only access their own notes. By enforcing this rule at the database level, the RLS policy provides an extra layer of security beyond the application layer.
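On the performance note above: the redundant `eq` filter only pays off if Postgres has an index it can use. A hypothetical Drizzle definition of an `owner_id` index might look like the following sketch (the index name is illustrative, and the sample repository may or may not define one; unrelated columns are omitted for brevity):

```typescript
import { sql } from 'drizzle-orm';
import { index, pgTable, text, uuid } from 'drizzle-orm/pg-core';

// Hypothetical: an index on owner_id lets both the RLS predicate and the
// client-side eq('owner_id', ...) filter be served by an index scan.
export const notes = pgTable(
  'notes',
  {
    id: uuid('id').defaultRandom().primaryKey(),
    ownerId: text('owner_id')
      .notNull()
      .default(sql`auth.user_id()`),
  },
  (table) => [index('notes_owner_id_idx').on(table.ownerId)]
);
```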
## Remove access control from application code Now, let's test what happens when we remove access control from the application layer to rely solely on RLS at the database level. In the `src/routes/index.tsx` file, modify the `useNotes` function to remove the `eq` clause that filters notes by `owner_id`: ```typescript function useNotes() { const postgrest = usePostgrest(); const user = useUser({ or: 'redirect' }); return useQuery({ queryKey: ['notes'], queryFn: async (): Promise<Array<Note>> => { const { data, error } = await postgrest .from('notes') .select('id, title, created_at, owner_id, shared') // .eq("owner_id", user.id) .order('created_at', { ascending: false }); if (error) { throw error; } return data; }, }); } ``` Check your two open Notes users, reload the page, and see what happens: Nothing happens. RLS is still in place, and isolation is maintained: no data leaks. 💪 ## Disable RLS Let's see what happens when we disable RLS on our notes and paragraphs tables. Go to your project in the Neon Console and in the SQL Editor run: ```sql ALTER TABLE public.notes DISABLE ROW LEVEL SECURITY; ALTER TABLE public.paragraphs DISABLE ROW LEVEL SECURITY; ``` With RLS disabled, each user can now see the other's notes and the paragraphs within them. Bob sees all of Alice's notes, and Alice sees Bob's, including the 'inquisitive cyan' note where her surprise birthday party is planned. Alice now knows about her birthday party. Disabling RLS stopped enforcing all RLS policies, including the `crudPolicy` read policies that enforced data isolation. The birthday surprise is _ruined_. ### Re-enable RLS Now, let's re-enable RLS in the SQL Editor: ```sql ALTER TABLE public.notes ENABLE ROW LEVEL SECURITY; ALTER TABLE public.paragraphs ENABLE ROW LEVEL SECURITY; ``` With RLS back on, there are no more data leaks, despite the lack of access control in the application code. In this case, RLS acts as a backstop, preventing unintended data exposure due to application-side mistakes. Order is restored, thanks to RLS. Now go fix your app before you forget: ```typescript function useNotes() { const postgrest = usePostgrest(); const user = useUser({ or: 'redirect' }); return useQuery({ queryKey: ['notes'], queryFn: async (): Promise<Array<Note>> => { const { data, error } = await postgrest .from('notes') .select('id, title, created_at, owner_id, shared') .eq('owner_id', user.id) .order('created_at', { ascending: false }); if (error) { throw error; } return data; }, }); } ``` ## Appendix: Understanding RLS policies in Drizzle In this section, we provide an overview of the Row-Level Security (RLS) policies implemented in the Notes application, found in the `schema.ts` file. These policies are written in Drizzle, which now supports defining RLS policies alongside your schema in code. Writing RLS policies can be complex, so we worked with Drizzle to develop the `crudPolicy` function – a wrapper that works with Neon's predefined roles (`authenticated` and `anonymous`), letting you consolidate all policies that apply to a given role into a single function. See [Row-level Security](https://orm.drizzle.team/docs/rls) in the Drizzle docs for details. For the `notes` table, the `crudPolicy` function defines RLS policies for the `authenticated` role, which is assigned to users who have successfully logged in. The `read` and `modify` parameters use the `authUid` helper function to ensure that users can only read or modify rows where the `ownerId` matches their own `auth.user_id()` from the JWT.
```typescript // for `notes` table crudPolicy({ role: authenticatedRole, read: authUid(table.ownerId), modify: authUid(table.ownerId), }); ``` For the `paragraphs` table, the `crudPolicy` function also applies to the `authenticated` role. However, since paragraphs are linked to notes via the `noteId`, the `read` and `modify` parameters use a SQL subquery to check that the `owner_id` of the associated note matches the `auth.user_id()` from the JWT. This ensures that users can only read or modify paragraphs that belong to notes they own. ```typescript // for `paragraphs` table crudPolicy({ role: authenticatedRole, read: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, modify: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, }); ``` These policies together enforce strict access control at the database level, ensuring that users can only interact with their own notes and paragraphs, regardless of any application-side logic. ### Implementing Share Notes functionality In the `schema.ts` file, you can find additional RLS policies beyond those defined above, which support the "Share Notes" functionality in the application. This feature allows users to share specific notes with others by setting the `shared` column to `true`. The RLS policies for the `notes` table include a condition that permits read access to notes marked as shared, regardless of ownership. The final RLS policies for the `notes` and `paragraphs` tables look like this: ```typescript ...schema definitions... // for `notes` table crudPolicy({ role: authenticatedRole, read: authUid(table.ownerId), modify: authUid(table.ownerId), }), pgPolicy('shared_policy', { for: 'select', to: authenticatedRole, using: sql`${table.shared} = true`, }) // for `paragraphs` table crudPolicy({ role: authenticatedRole, read: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, modify: sql`(select notes.owner_id = auth.user_id() from notes where notes.id = ${table.noteId})`, }), pgPolicy('shared_policy', { for: 'select', to: authenticatedRole, using: sql`(select notes.shared from notes where notes.id = ${table.noteId})`, }); ``` The `shared_policy` enables any authenticated user to read notes marked as shared (`shared = true`), allowing others to view shared notes even if they are not the owner. This policy applies similarly to paragraphs, checking if the linked note is shared. Although RLS permits read access to shared notes for all authenticated users, the shared notes are not directly visible in other users' UI. Instead, sharing occurs via the "Share" button, which copies the note's URL to the clipboard. This URL includes the note's ID, enabling authenticated users to access the shared note and its paragraphs in read-only mode.
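To see how a recipient's client might load a note from a shared URL, here is a sketch using `postgrest-js`. The function and client wiring are illustrative (the sample app obtains its client via `usePostgrest()`); the point is that the `shared_policy`, not application code, authorizes the read for non-owners:

```typescript
import { PostgrestClient } from '@supabase/postgrest-js';

// Illustrative: fetch a note by the id embedded in a shared URL.
// For a non-owner, the row is returned only if `shared` is true,
// because the shared_policy RLS rule authorizes the read.
async function fetchSharedNote(postgrest: PostgrestClient, noteId: string) {
  const { data, error } = await postgrest
    .from('notes')
    .select('id, title, owner_id, shared')
    .eq('id', noteId)
    .single();
  if (error) throw error; // a non-shared, non-owned note surfaces as "no rows" here
  return data;
}
```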
### RLS policies table

To check out the RLS policies defined for the `notes` table in Postgres, run this query:

```sql
SELECT * FROM pg_policies WHERE tablename = 'notes';
```

Here is the output, showing columns `policyname, cmd, qual, with_check` only:

```text
            policyname            |  cmd   |                    qual                     |                 with_check
----------------------------------+--------+---------------------------------------------+---------------------------------------------
 crud-authenticated-policy-select | SELECT | (SELECT (auth.user_id() = notes.owner_id))  |
 crud-authenticated-policy-insert | INSERT |                                             | (SELECT (auth.user_id() = notes.owner_id))
 crud-authenticated-policy-update | UPDATE | (SELECT (auth.user_id() = notes.owner_id))  | (SELECT (auth.user_id() = notes.owner_id))
 crud-authenticated-policy-delete | DELETE | (SELECT (auth.user_id() = notes.owner_id))  |
 shared_policy                    | SELECT | (shared = true)                             |
(5 rows)
```

To get an understanding of `auth.user_id()` and the role it plays in these policies, see this [explanation](https://neon.com/docs/guides/neon-rls#how-neon-rls-gets-authuserid-from-the-jwt).

---

# Source: https://neon.com/llms/guides-row-level-security.txt

# Row-Level Security with Neon

> The "Row-Level Security with Neon" documentation explains how to implement row-level security policies in Neon databases, enabling fine-grained access control by restricting data visibility at the row level based on user roles and conditions.

## Source

- [Row-Level Security with Neon HTML](https://neon.com/docs/guides/row-level-security): The original HTML version of this documentation

What you will learn:

- How the Data API uses Row-Level Security

Related docs:

- [Data API](https://neon.com/docs/data-api/get-started)
- [Simplify RLS with Drizzle](https://neon.com/docs/guides/rls-drizzle)
- [Postgres RLS Tutorial](https://neon.com/postgresql/postgresql-administration/postgresql-row-level-security)

Row-Level Security (RLS) is a Postgres feature that controls access to individual rows in a table based on the current user. Here's a simple example that limits the `notes` a user can see by matching rows where their `user_id` matches the session's `auth.user_id()`:

```sql
-- Enable RLS on a table
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;

-- Create a policy that only allows users to access their own notes
CREATE POLICY "users_can_only_access_own_notes" ON notes
  FOR ALL
  USING (auth.user_id() = user_id);
```

When using the Data API for client-side querying, RLS policies are required to secure your data.

## Data API with RLS

The **Data API** turns your database tables on a given branch into a REST API, and it requires RLS policies on all tables to ensure your data is secure.

### How it works

- The Data API handles JWT validation and provides the `auth.user_id()` function.
- Your RLS policies use `auth.user_id()` to control access.
- All tables accessed via the Data API must have RLS enabled.

- [Get started](https://neon.com/docs/data-api/get-started): Learn how to enable and use the Data API with RLS policies
- [Building a note-taking app](https://neon.com/docs/data-api/demo): See a complete example of the Data API with RLS in action

## RLS with Drizzle ORM

Drizzle makes it simple to write RLS policies that work with the Data API. We highly recommend using its `crudPolicy` helper to simplify common RLS patterns.
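To give a feel for it, here is a minimal, hedged sketch. The table and column names are illustrative; it assumes the `crudPolicy`, `authUid`, and `authenticatedRole` helpers exported by `drizzle-orm/neon` and a recent Drizzle version that accepts policies in the table definition:

```typescript
import { pgTable, text, uuid } from 'drizzle-orm/pg-core';
import { crudPolicy, authenticatedRole, authUid } from 'drizzle-orm/neon';

// Owner-only access: the authenticated role can read or modify a row
// only when auth.user_id() matches owner_id.
export const notes = pgTable(
  'notes',
  {
    id: uuid('id').primaryKey().defaultRandom(),
    ownerId: text('owner_id').notNull(),
    title: text('title').notNull(),
  },
  (table) => [
    crudPolicy({
      role: authenticatedRole,
      read: authUid(table.ownerId),
      modify: authUid(table.ownerId),
    }),
  ]
);
```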
- [Simplify RLS with Drizzle](https://neon.com/docs/guides/rls-drizzle): Learn how to use Drizzle's crudPolicy function to simplify RLS policies

## Postgres RLS Tutorial

To learn the fundamentals of Row-Level Security in Postgres, including detailed concepts and examples, see the Postgres tutorial:

- [Postgres RLS Tutorial](https://neon.com/postgresql/postgresql-administration/postgresql-row-level-security): A complete guide to Postgres Row-Level Security concepts and implementation

---

# Source: https://neon.com/llms/guides-ruby-on-rails.txt

# Connect a Ruby on Rails application to Neon Postgres

> This document guides users on configuring a Ruby on Rails application to connect with a Neon Postgres database, detailing the necessary steps and code adjustments for successful integration.

## Source

- [Connect a Ruby on Rails application to Neon Postgres HTML](https://neon.com/docs/guides/ruby-on-rails): The original HTML version of this documentation

[Ruby on Rails](https://rubyonrails.org/), also known simply as Rails, is an open-source web application framework written in Ruby. It uses a model-view-controller architecture, making it a good choice for developing database-backed web applications. This guide shows how to connect a Ruby on Rails application to a Neon Postgres database. To connect to Neon from a Ruby on Rails application, follow the steps below.

**Note**: This guide was tested using Ruby v3.4.6 and Rails v8.0.3.

## Create a Neon Project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a Rails Project

Create a Rails project using the [Rails CLI](https://guides.rubyonrails.org/command_line.html), and specify PostgreSQL as the database type:

```shell
gem install rails
rails new neon-with-rails --database=postgresql
```

You now have a Rails project in a folder named `neon-with-rails`.

## Configure a PostgreSQL Database using Rails

Create a `.env` file in the root of your Rails project, and add the connection string for your Neon compute. Do not specify a database name after the forward slash in the connection string. Rails will choose the correct database depending on the environment.

```shell
DATABASE_URL=postgresql://[user]:[password]@[neon_hostname]/
```

**Note**: You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

**Important**: The role you specified in the `DATABASE_URL` must have **CREATEDB** privileges. Roles created in the Neon Console, CLI, or API, including the default role created with a Neon project, are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which has the `CREATEDB` privilege. Alternatively, you can create roles with SQL to grant specific privileges. See [Manage database access](https://neon.com/docs/manage/database-access).

Create the development database by issuing the following commands from the root of your project directory:

```shell
# Load the DATABASE_URL into your session
source .env

# Create the development database
bin/rails db:create
```

## Create a Rails Controller to Query the Database

Run the following command to create a controller and view.
The controller will query the database version and supply it to the view file to render a web page that displays the PostgreSQL version.

```shell
rails g controller home index
```

Replace the controller contents at `app/controllers/home_controller.rb` with:

```ruby
class HomeController < ApplicationController
  def index
    @version = ActiveRecord::Base.connection.execute("SELECT version();").first['version']
  end
end
```

Replace the contents of the view file at `app/views/home/index.html.erb` with:

```ruby
<% if @version %>
  <%= @version %>
<% end %>
```

Replace the contents of `config/routes.rb` with the following code to serve your home view as the root page of the application:

```ruby
Rails.application.routes.draw do
  get "up" => "rails/health#show", as: :rails_health_check

  # Defines the root path route ("/")
  root 'home#index'
end
```

## Run the application

Start the application using the Rails CLI from the root of the project:

```shell
bin/rails server -e development
```

Visit [localhost:3000/](http://localhost:3000/) in your web browser. Your Neon database's Postgres version will be displayed. For example:

```
PostgreSQL 15.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
```

## Schema migration with Ruby on Rails

For schema migration with Ruby on Rails, see our guide:

- [Ruby on Rails Migrations](https://neon.com/docs/guides/rails-migrations): Schema migration with Neon Postgres and Ruby on Rails

---

# Source: https://neon.com/llms/guides-rust.txt

# Connect a Rust application to Neon Postgres

> This document guides users on how to connect a Rust application to a Neon database, detailing the necessary steps and code examples for establishing a successful connection.

## Source

- [Connect a Rust application to Neon Postgres HTML](https://neon.com/docs/guides/rust): The original HTML version of this documentation

This guide describes how to create a Neon project and connect to it from a Rust application using two popular Postgres drivers: [rust-postgres](https://crates.io/crates/postgres), a synchronous driver, and [tokio-postgres](https://crates.io/crates/tokio-postgres), an asynchronous driver for use with the [Tokio](https://tokio.rs/) runtime.

## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://console.neon.tech/signup).
- The Rust toolchain. If you do not have it installed, install it from the [official Rust website](https://www.rust-lang.org/tools/install).

## Create a Neon project

If you do not have one already, create a Neon project.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the [Neon Console](https://console.neon.tech).
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

Your project is created with a ready-to-use database named `neondb`. In the following steps, you will connect to this database from your Rust application.

## Create a Rust project

For your Rust project, use `cargo` to create a new project and add the required library dependencies (called "crates").

1. Create a project directory and change into it.

   ```bash
   cargo new neon-rust-quickstart
   cd neon-rust-quickstart
   ```

   This command creates a new directory named `neon-rust-quickstart` containing a `src/main.rs` file for your code and a `Cargo.toml` file for your project's configuration and dependencies.

   > Open the directory in your preferred code editor (e.g., VS Code, RustRover, etc.) to edit the files.

2. Add the required crates using `cargo add`. Choose the set of commands for either a synchronous or asynchronous setup.

   Tab: postgres (sync)

   ```bash
   cargo add postgres postgres-openssl openssl dotenvy
   ```

   Tab: tokio-postgres (async)

   ```bash
   cargo add tokio --features "tokio/full" tokio-postgres postgres-openssl openssl dotenvy
   ```

   **Note** What are features?: The `--features` flag tells Cargo to enable optional functionality within a crate. Many crates are designed to be modular, and features allow you to include only the code you actually need.
   In this case, you are enabling the full Tokio runtime, which includes all components necessary for asynchronous programming. You can learn more about features in [The Cargo Book: Features](https://doc.rust-lang.org/cargo/reference/features.html).

   > Neon requires a secure SSL/TLS connection. In the commands above, the `postgres-openssl` crate provides the necessary OpenSSL bindings that both the synchronous `postgres` and asynchronous `tokio-postgres` drivers use to enable TLS.

3. Configure multiple executables.

   You will create a separate Rust script for each of the CRUD operations (Create, Read, Update, Delete): `create_table.rs`, `read_data.rs`, `update_data.rs`, and `delete_data.rs`. To make each script its own binary target, you need to tell Cargo that your project has multiple binaries. Open your `Cargo.toml` file and add the following `[[bin]]` sections to the end of it:

   ```toml
   [[bin]]
   name = "create_table"
   path = "src/create_table.rs"

   [[bin]]
   name = "read_data"
   path = "src/read_data.rs"

   [[bin]]
   name = "update_data"
   path = "src/update_data.rs"

   [[bin]]
   name = "delete_data"
   path = "src/delete_data.rs"
   ```

   You can now safely delete the default `src/main.rs` file. Since you have defined specific binary targets (like `create_table`) in `Cargo.toml`, Cargo no longer needs the default `main.rs` entry point.

   ```bash
   rm src/main.rs
   ```

## Store your Neon connection string

Create a file named `.env` in your project's root directory. This file will securely store your database connection string.

1. In the [Neon Console](https://console.neon.tech), select your project on the **Dashboard**.
2. Click **Connect** on your **Project Dashboard** to open the **Connect to your database** modal.
3. Copy the connection string, which includes your password.
4. Add the connection string to your `.env` file as shown below.

   ```text
   DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
   ```

   > Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your actual database credentials.

## Choosing the right method to execute SQL commands

Before diving into the code examples, it's important to understand how to interact with your Neon database using Rust. The `postgres` and `tokio-postgres` crates provide several methods for executing SQL commands. Choosing the right method depends on your use case:

- `client.execute`: Use this for a single DML/DDL statement (`INSERT`, `UPDATE`, `DELETE`) or a fire-and-forget query. It supports parameter placeholders (`$1`, `$2`, etc.) and returns the number of rows affected (`u64`).
- `client.batch_execute`: Ideal for running multiple SQL commands in one shot (schema migrations, DDL, seed data). Supply a semicolon-separated SQL string. This method does not support parameters and returns `()` on success.
- `client.query`: The go-to for any `SELECT` that returns rows. It accepts placeholders and returns a `Vec<Row>`, so you can iterate over rows and extract typed values.
### Quick comparison

| Method                 | Use case                                                          | Parameters | Returns               |
| ---------------------- | ----------------------------------------------------------------- | ---------- | --------------------- |
| `client.execute`       | Single DML/DDL or ad-hoc query                                    | Yes        | `u64` (rows affected) |
| `client.batch_execute` | Multiple statements in one SQL blob (DDL, migrations, seed data)  | No         | `()`                  |
| `client.query`         | Fetching rows from a `SELECT`                                     | Yes        | `Vec<Row>`            |

Now that you know how to connect to your Neon database and the available methods for executing SQL commands, let's look at some examples of how you can perform basic CRUD operations. You will use all three methods (`execute`, `batch_execute`, and `query`) in the examples to demonstrate their usage.

## Examples

This section provides example Rust scripts that demonstrate how to connect to your Neon database and perform basic operations such as [creating a table](https://neon.com/docs/guides/rust#create-a-table-and-insert-data), [reading data](https://neon.com/docs/guides/rust#read-data), [updating data](https://neon.com/docs/guides/rust#update-data), and [deleting data](https://neon.com/docs/guides/rust#delete-data).

### Create a table and insert data

In your project's `src` directory, create a file named `create_table.rs` and add the code for your preferred driver. This script connects to your Neon database, creates a table named `books`, and inserts some sample data into it.

Tab: postgres (sync)

```rust
use dotenvy::dotenv;
use postgres::Client;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load environment variables from .env file
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let mut client = Client::connect(&conn_string, connector)?;
    println!("Connection established");

    client.batch_execute("DROP TABLE IF EXISTS books;")?;
    println!("Finished dropping table (if it existed).");

    client.batch_execute(
        "CREATE TABLE books (
            id SERIAL PRIMARY KEY,
            title VARCHAR(255) NOT NULL,
            author VARCHAR(255),
            publication_year INT,
            in_stock BOOLEAN DEFAULT TRUE
        );",
    )?;
    println!("Finished creating table.");

    // Insert a single book record
    client.execute(
        "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4)",
        &[&"The Catcher in the Rye", &"J.D. Salinger", &1951, &true],
    )?;
    println!("Inserted a single book.");

    // Start a transaction
    let mut transaction = client.transaction()?;
    println!("Starting transaction to insert multiple books...");

    // Data to be inserted
    let books_to_insert = [
        ("The Hobbit", "J.R.R. Tolkien", 1937, true),
        ("1984", "George Orwell", 1949, true),
        ("Dune", "Frank Herbert", 1965, false),
    ];

    // Loop and insert within the transaction
    for book in &books_to_insert {
        transaction.execute(
            "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4)",
            &[&book.0, &book.1, &book.2, &book.3],
        )?;
    }

    // Commit the transaction
    transaction.commit()?;
    println!("Inserted 3 rows of data.");

    Ok(())
}
```

Tab: tokio-postgres (async)

```rust
use tokio_postgres;
use dotenvy::dotenv;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load environment variables from .env file
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let (mut client, connection) = tokio_postgres::connect(&conn_string, connector).await?;
    println!("Connection established");

    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    client.batch_execute("DROP TABLE IF EXISTS books;").await?;
    println!("Finished dropping table (if it existed).");

    client.batch_execute(
        "CREATE TABLE books (
            id SERIAL PRIMARY KEY,
            title VARCHAR(255) NOT NULL,
            author VARCHAR(255),
            publication_year INT,
            in_stock BOOLEAN DEFAULT TRUE
        );",
    ).await?;
    println!("Finished creating table.");

    // Insert a single book record
    client.execute(
        "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4)",
        &[&"The Catcher in the Rye", &"J.D. Salinger", &1951, &true],
    ).await?;
    println!("Inserted a single book.");

    // Start a transaction
    let transaction = client.transaction().await?;
    println!("Starting transaction to insert multiple books...");

    // Data to be inserted
    let books_to_insert = [
        ("The Hobbit", "J.R.R. Tolkien", 1937, true),
        ("1984", "George Orwell", 1949, true),
        ("Dune", "Frank Herbert", 1965, false),
    ];

    // Loop and insert within the transaction
    for book in &books_to_insert {
        transaction.execute(
            "INSERT INTO books (title, author, publication_year, in_stock) VALUES ($1, $2, $3, $4)",
            &[&book.0, &book.1, &book.2, &book.3],
        ).await?;
    }

    // Commit the transaction
    transaction.commit().await?;
    println!("Inserted 3 rows of data.");

    Ok(())
}
```

The above code does the following:

- Load the connection string from the `.env` file.
- Connect to the Neon database using a secure TLS connection.
- Drop the `books` table if it already exists to ensure a clean slate.
- Create a new table named `books` with columns for `id`, `title`, `author`, `publication_year`, and `in_stock`.
- Insert a single book record.
- Start a transaction to insert multiple book records in a single operation.

**Info** Why use a transaction for inserting multiple rows?: Unlike database drivers in some other languages that offer a single high-level method for bulk inserts (like [Python's](https://neon.com/docs/guides/python#create-a-table-and-insert-data) `executemany` in `psycopg2`), the idiomatic Rust approach is to loop through the data inside a transaction. This guarantees atomicity: all rows are inserted successfully, or none are inserted if an error occurs.

Run the script using the following command:

```bash
cargo run --bin create_table
```

When the code runs successfully, it produces the following output:

```text
Connection established
Finished dropping table (if it existed).
Finished creating table.
Inserted a single book.
Starting transaction to insert multiple books...
Inserted 3 rows of data.
```

### Read data

In your `src` directory, create a file named `read_data.rs`. This script connects to your database and retrieves all rows from the `books` table.

Tab: postgres (sync)

```rust
use dotenvy::dotenv;
use postgres::Client;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let mut client = Client::connect(&conn_string, connector)?;
    println!("Connection established");

    // Fetch all rows from the books table
    let rows = client.query("SELECT * FROM books ORDER BY publication_year;", &[])?;

    println!("\n--- Book Library ---");
    for row in rows {
        let id: i32 = row.get("id");
        let title: &str = row.get("title");
        let author: &str = row.get("author");
        let year: i32 = row.get("publication_year");
        let in_stock: bool = row.get("in_stock");
        println!(
            "ID: {}, Title: {}, Author: {}, Year: {}, In Stock: {}",
            id, title, author, year, in_stock
        );
    }
    println!("--------------------\n");

    Ok(())
}
```

Tab: tokio-postgres (async)

```rust
use tokio_postgres;
use dotenvy::dotenv;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let (client, connection) = tokio_postgres::connect(&conn_string, connector).await?;
    println!("Connection established");

    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    // Fetch all rows from the books table
    let rows = client.query("SELECT * FROM books ORDER BY publication_year;", &[]).await?;

    println!("\n--- Book Library ---");
    for row in rows {
        let id: i32 = row.get("id");
        let title: &str = row.get("title");
        let author: &str = row.get("author");
        let year: i32 = row.get("publication_year");
        let in_stock: bool = row.get("in_stock");
        println!(
            "ID: {}, Title: {}, Author: {}, Year: {}, In Stock: {}",
            id, title, author, year, in_stock
        );
    }
    println!("--------------------\n");

    Ok(())
}
```

The above code does the following:

- Load the connection string from the `.env` file.
- Connect to the Neon database using a secure TLS connection.
- Use the `client.query` method to fetch all rows from the `books` table, ordered by `publication_year`.
- Print each book's details in a formatted output.

Run the script using the following command:

```bash
cargo run --bin read_data
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: false
--------------------
```

### Update data

In your `src` directory, create a file named `update_data.rs`. This script updates the stock status of the book 'Dune' to `true`.
Tab: postgres (sync)

```rust
use dotenvy::dotenv;
use postgres::Client;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let mut client = Client::connect(&conn_string, connector)?;
    println!("Connection established");

    // Update a data row in the table
    let updated_rows = client.execute(
        "UPDATE books SET in_stock = $1 WHERE title = $2",
        &[&true, &"Dune"],
    )?;

    if updated_rows > 0 {
        println!("Updated stock status for 'Dune'.");
    } else {
        println!("'Dune' not found or stock status already up to date.");
    }

    Ok(())
}
```

Tab: tokio-postgres (async)

```rust
use tokio_postgres;
use dotenvy::dotenv;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let (client, connection) = tokio_postgres::connect(&conn_string, connector).await?;
    println!("Connection established");

    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    // Update a data row in the table
    let updated_rows = client.execute(
        "UPDATE books SET in_stock = $1 WHERE title = $2",
        &[&true, &"Dune"],
    ).await?;

    if updated_rows > 0 {
        println!("Updated stock status for 'Dune'.");
    } else {
        println!("'Dune' not found or stock status already up to date.");
    }

    Ok(())
}
```

The above code does the following:

- Load the connection string from the `.env` file.
- Connect to the Neon database using a secure TLS connection.
- Use the `client.execute` method to update the stock status of the book 'Dune' to `true`.

Run the script using the following command:

```bash
cargo run --bin update_data
```

After running this script, you can run `read_data` again to verify that the row was updated.

```bash
cargo run --bin read_data
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 3, Title: 1984, Author: George Orwell, Year: 1949, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true
--------------------
```

> You can see that the stock status for 'Dune' has been updated to `true`.

### Delete data

In your `src` directory, create a file named `delete_data.rs`. This script deletes the book '1984' from the `books` table.
Tab: postgres (sync)

```rust
use dotenvy::dotenv;
use postgres::Client;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let mut client = Client::connect(&conn_string, connector)?;
    println!("Connection established");

    // Delete a data row from the table
    let deleted_rows = client.execute(
        "DELETE FROM books WHERE title = $1",
        &[&"1984"],
    )?;

    if deleted_rows > 0 {
        println!("Deleted the book '1984' from the table.");
    } else {
        println!("'1984' not found in the table.");
    }

    Ok(())
}
```

Tab: tokio-postgres (async)

```rust
use tokio_postgres;
use dotenvy::dotenv;
use openssl::ssl::{SslConnector, SslMethod};
use postgres_openssl::MakeTlsConnector;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv()?;
    let conn_string = env::var("DATABASE_URL")?;

    let builder = SslConnector::builder(SslMethod::tls())?;
    let connector = MakeTlsConnector::new(builder.build());

    let (client, connection) = tokio_postgres::connect(&conn_string, connector).await?;
    println!("Connection established");

    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    // Delete a data row from the table
    let deleted_rows = client.execute(
        "DELETE FROM books WHERE title = $1",
        &[&"1984"],
    ).await?;

    if deleted_rows > 0 {
        println!("Deleted the book '1984' from the table.");
    } else {
        println!("'1984' not found in the table.");
    }

    Ok(())
}
```

The above code does the following:

- Load the connection string from the `.env` file.
- Connect to the Neon database using a secure TLS connection.
- Use the `client.execute` method to delete the book '1984' from the `books` table.

Run the script using the following command:

```bash
cargo run --bin delete_data
```

After running this script, run `read_data` again to verify that the row was deleted.

```bash
cargo run --bin read_data
```

When the code runs successfully, it produces the following output:

```text
Connection established

--- Book Library ---
ID: 2, Title: The Hobbit, Author: J.R.R. Tolkien, Year: 1937, In Stock: true
ID: 1, Title: The Catcher in the Rye, Author: J.D. Salinger, Year: 1951, In Stock: true
ID: 4, Title: Dune, Author: Frank Herbert, Year: 1965, In Stock: true
--------------------
```

> You can see that the book '1984' has been successfully deleted from the `books` table.

## Source code

You can find the source code for the applications described in this guide on GitHub.
- [Get started with Rust and Neon using postgres](https://github.com/neondatabase/examples/tree/main/with-rust-postgres): Get started with Rust and Neon using the synchronous postgres crate - [Get started with Rust and Neon using tokio-postgres](https://github.com/neondatabase/examples/tree/main/with-rust-tokio-postgres): Get started with Rust and Neon using the asynchronous tokio-postgres crate ## Resources - [The Rust Programming Language Book](https://doc.rust-lang.org/book/) - [rust-postgres crate documentation](https://docs.rs/postgres/latest/postgres/) - [tokio-postgres crate documentation](https://docs.rs/tokio-postgres/latest/tokio_postgres/) - [Tokio async runtime](https://tokio.rs/) --- # Source: https://neon.com/llms/guides-scale-to-zero-guide.txt # Configuring Scale to Zero for Neon computes > The document outlines the process for configuring the "Scale to Zero" feature in Neon, enabling users to automatically pause compute resources when not in use, optimizing resource management and cost efficiency. ## Source - [Configuring Scale to Zero for Neon computes HTML](https://neon.com/docs/guides/scale-to-zero-guide): The original HTML version of this documentation Neon's [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature controls whether a Neon compute transitions to an idle state due to inactivity. For example, if scale to zero is enabled, your compute will transition to an idle state after it's been inactive for 5 minutes. Neon's paid plans allow you to disable scale to zero to keep your compute active. On the Scale plan, you can configure the scale to zero threshold. **Important**: If you disable scale to zero entirely, your compute will remain active, and you will have to manually restart your compute to pick up the latest updates to Neon's compute images. Neon typically releases compute-related updates weekly. Not all releases contain critical updates, but a weekly compute restart is recommended to ensure that you do not miss anything important. For how to restart a compute, see [Restart a compute](https://neon.com/docs/manage/computes#restart-a-compute). This guide demonstrates how to configure the scale to zero setting for a new project, for an existing project, or for an individual compute. ### Scale to zero limits Paid plans permit disabling scale to zero. On the Scale plan, you can configure the scale to zero threshold. | Plan | Scale to zero after | Can be disabled? | | :-------- | :----------------------------------- | :--------------- | | Free plan | 5 minutes | | | Launch | 5 minutes | ✓ | | Scale | Configurable (1 minute to always on) | ✓ | ## Enable or disable scale to zero To enable or disable scale to zero: 1. In the Neon Console, select **Branches**. 1. Select a branch. 1. On the **Computes** tab, click **Edit**. 1. Enable or disable the scale to zero setting, and save your selection. > Disabling scale to zero is only supported on paid plans. ### Configuring the scale to zero time On the Scale plan, you can configure "Scale to zero after" time to increase or decrease the amount of time after which a compute scales to zero. For example, decreasing the time to 1 minute means that your compute will scale to zero faster (after the compute is inactive for 1 minute), or increasing the value to an hour means that your compute will only scale to zero after being inactive for an hour. 
Initial configuration of the scale to zero time is only supported via an [Update compute endpoint](https://api-docs.neon.tech/reference/updateprojectendpoint#/) or [Update project](https://api-docs.neon.tech/reference/updateproject#/) API call. Use the `Update compute endpoint` API to change the setting for an existing compute. The `Update project` API sets a default for all compute endpoints created in the future — it does not change the configuration of existing computes. Tab: Update compute endpoint ```bash # change the setting for an existing compute curl --request PATCH \ --url https://console.neon.tech/api/v2/projects/{project-id}/endpoints/{endpoint-id} \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data ' { "endpoint": { "suspend_timeout_seconds": 60 } } ' ``` Tab: Update project ```bash # Change the default setting for computes created in the future curl --request PATCH \ --url https://console.neon.tech/api/v2/projects/{project-id} \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data ' { "project": { "default_endpoint_settings": { "suspend_timeout_seconds": 60 } } } ' ``` **API parameters:** - The `suspend_timeout_seconds` setting is defined in seconds - The default setting is 300 seconds (5 minutes) - The minimum setting is 60 seconds - The maximum setting is 604800 seconds (1 week) - You must supply an [API key](https://neon.com/docs/manage/api-keys), your [project ID](https://neon.com/docs/reference/glossary#project-id), and the [endpoint ID](https://neon.com/docs/reference/glossary#endpoint-id) After configuring a non-default value via the Neon API, you'll be able to adjust the setting via the console. Setting a non-default value makes the time selector control visible on the **Edit compute** modal. ### Configure the scale to zero default Configuring the scale to zero setting in your project's settings sets the project's default, which is applied to all computes created from that point forward. The scale to zero settings for existing computes are unaffected. See [Change your project's default compute settings](https://neon.com/docs/manage/projects#change-your-projects-default-compute-settings) for more info about compute defaults. To configure the scale to zero default for an existing project: 1. Select a project in the Neon Console. 1. On the **Dashboard**, select **Settings**. 1. Navigate to the **Compute defaults** section. 1. Select **Modify defaults**. 1. Enable or disable the scale to zero setting, and save your selection. ## Monitor scale to zero You can monitor scale to zero on the **Branches** page in the Neon Console. A compute reports either an **Active** or **Idle** status. You can also view compute state transitions in the **Branches** widget on the Neon **Dashboard**. User actions that activate an idle compute include [connecting from a client such as psql](https://neon.com/docs/connect/query-with-psql-editor), running a query on your database from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), or accessing the compute via the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). **Info**: The Neon API includes a [Start endpoint](https://api-docs.neon.tech/reference/startprojectendpoint) method for the specific purpose of activating and suspending a compute. 
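The Start endpoint mentioned in the Info note can also be scripted. Below is a hedged TypeScript sketch; it assumes Node 18+ for the built-in `fetch`, and the helper name and example IDs are illustrative, while the `/start` route follows the Start endpoint reference above:

```typescript
// A sketch (not an official SDK call): wake an idle compute by invoking the
// Start endpoint. Project and endpoint IDs below are placeholders.
const NEON_API_KEY = process.env.NEON_API_KEY;

async function startEndpoint(projectId: string, endpointId: string) {
  if (!NEON_API_KEY) throw new Error('NEON_API_KEY is not set');
  const res = await fetch(
    `https://console.neon.tech/api/v2/projects/${projectId}/endpoints/${endpointId}/start`,
    {
      method: 'POST',
      headers: {
        accept: 'application/json',
        authorization: `Bearer ${NEON_API_KEY}`,
      },
    }
  );
  if (!res.ok) throw new Error(`Neon API error: ${res.status}`);
  return res.json(); // the response describes the endpoint and its state
}

// Example (placeholder IDs):
// startEndpoint('royal-band-06902338', 'ep-crimson-frost-a5i6p18z');
```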
You can try any of these methods and watch the status of your compute as it transitions from an **Idle** to an **Active** state. ## Session context considerations When a compute suspends and later restarts, the [session context](https://neon.com/docs/reference/compatibility#session-context) resets. This includes in-memory statistics, temporary tables, prepared statements, and autovacuum thresholds, among other session-specific data. If your workflow requires persistent session data, consider disabling scale to zero on a paid plan to keep your compute active continuously. On the Free plan, scale to zero is always enabled and automatically suspends your compute after 5 minutes of inactivity. --- # Source: https://neon.com/llms/guides-schema-diff-tutorial.txt # Schema diff tutorial > The "Schema diff tutorial" document guides Neon users through the process of comparing and identifying differences between database schemas using Neon's schema diff tool. ## Source - [Schema diff tutorial HTML](https://neon.com/docs/guides/schema-diff-tutorial): The original HTML version of this documentation In this guide we will create an initial schema on a new database called `people` on our `production` branch. We'll then create a development branch called `feature/address`, following one possible convention for naming feature branches. After making schema changes on `feature/address`, we'll use the **Schema Diff** tool on the **Branches** page to get a side-by-side, GitHub-style visual comparison between the `feature/address` development branch and `production`. ## Before you start To complete this tutorial, you'll need: - A Neon account. Sign up [here](https://neon.com/docs/get-started/signing-up). - To interact with your Neon database from the command line: - Install the [Neon CLI](https://neon.com/docs/reference/cli-install) - Download and install the [psql](https://www.postgresql.org/download/) client ## Create the Initial Schema First, create a new database called `people` on the `production` branch and add some sample data to it. Tab: Console 1. Create the database. In the **Neon Console**, go to **Databases** → **New Database**. Make sure your `production` branch is selected, then create the new database called `people`. 2. Add the schema. Go to the **SQL Editor**, enter the following SQL statement and click **Run** to apply. ```sql CREATE TABLE person ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, email TEXT UNIQUE NOT NULL ); ``` Tab: CLI 1. Create the database. Use the following CLI command to create the `people` database. ```bash neon databases create --name people ``` **Note**: If you have multiple projects, include `--project-id`. Or set the project context so you don't have to specify project id in every command. Example: ```bash neon set-context --project-id empty-glade-66712572 ``` You can find your project ID on the **Settings** page in the Neon Console. 1. Copy your connection string: ```bash neon connection-string --database-name people ``` 1. Connect to the `people` database with psql: ```bash psql 'postgresql://neondb_owner:*********@ep-crimson-frost-a5i6p18z.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require' ``` 1. Create the schema: ```sql CREATE TABLE person ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, email TEXT UNIQUE NOT NULL ); ``` Tab: API 1. 
Use the [Create database](https://api-docs.neon.tech/reference/createprojectbranchdatabase) API to create the `people` database, specifying the `project_id`, `branch_id`, database `name`, and database `owner_name` in the API call.

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/royal-band-06902338/branches/br-bitter-bird-a56n6lh4/databases \
     --header 'accept: application/json' \
     --header 'authorization: Bearer $NEON_API_KEY' \
     --header 'content-type: application/json' \
     --data '{
  "database": {
    "name": "people",
    "owner_name": "alex"
  }
}'
```

2. Retrieve your database connection string using the [Get connection URI](https://api-docs.neon.tech/reference/getconnectionuri) endpoint, specifying the required `project_id`, `branch_id`, `database_name`, and `role_name` parameters.

```bash
curl --request GET \
     --url 'https://console.neon.tech/api/v2/projects/royal-band-06902338/connection_uri?branch_id=br-bitter-bird-a56n6lh4&database_name=people&role_name=alex' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer $NEON_API_KEY'
```

The API call will return a connection string similar to this one:

```json
{
  "uri": "postgresql://alex:*********@ep-green-surf-a5yaumj3-pooler.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require"
}
```

3. Connect to the `people` database with `psql`:

```bash
psql 'postgresql://alex:*********@ep-green-surf-a5yaumj3-pooler.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require'
```

4. Create the schema:

```sql
CREATE TABLE person (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT UNIQUE NOT NULL
);
```

## Create a development branch

Create a new development branch off of `production`. This branch will be an exact, isolated copy of `production`. For the purposes of this tutorial, name the branch `feature/address`, which could work as a good convention for creating isolated branches for working on specific features.

Tab: Console

1. Create the development branch

   On the **Branches** page, click **Create Branch**, making sure of the following:

   - Select `production` as the parent branch.
   - Name the branch `feature/address`.

1. Verify the schema on your new branch

   From the **SQL Editor**, use the meta-command `\d person` to inspect the schema of the `person` table. Make sure that the `people` database on the branch `feature/address` is selected.

Tab: CLI

1. Create the branch

   If you're still in `psql`, exit using `\q`. Using the Neon CLI, create the development branch. Include `--project-id` if you have multiple projects.

   ```bash
   neon branches create --name feature/address --parent production
   ```

1. Verify the schema

   To verify that this branch includes the initial schema created on `production`, connect to `feature/address`, then view the `person` table.

1. Get the connection string for the `people` database on branch `feature/address` using the CLI.

   ```bash
   neon connection-string feature/address --database-name people
   ```

   This gives you the connection string, which you can then copy.

   ```bash
   postgresql://neondb_owner:*********@ep-hidden-rain-a5pe72oi.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require
   ```

1. Connect to `people` using psql.

   ```bash
   psql 'postgresql://neondb_owner:*********@ep-hidden-rain-a5pe72oi.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require'
   ```

1. View the schema for the `person` table we created earlier.
```bash
\d person
```

Which shows you the schema:

```text
                            Table "public.person"
 Column |  Type   | Collation | Nullable |              Default
--------+---------+-----------+----------+------------------------------------
 id     | integer |           | not null | nextval('person_id_seq'::regclass)
 name   | text    |           | not null |
 email  | text    |           | not null |
Indexes:
    "person_pkey" PRIMARY KEY, btree (id)
    "person_email_key" UNIQUE CONSTRAINT, btree (email)
```

You can do the same thing for your `production` branch and get identical results.

Tab: API

Using the [Create branch](https://api-docs.neon.tech/reference/createprojectbranch) API, create a development branch named `feature/address`. You'll need to specify the `project_id`, `parent_id`, branch `name`, and add a `read_write` compute (you need a compute to connect to the branch).

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/royal-band-06902338/branches \
     --header 'accept: application/json' \
     --header 'authorization: Bearer $NEON_API_KEY' \
     --header 'content-type: application/json' \
     --data '{
  "branch": {
    "name": "feature/address",
    "parent_id": "br-bitter-bird-a56n6lh4"
  },
  "endpoints": [
    {
      "type": "read_write"
    }
  ]
}'
```

## Update schema on a dev branch

Let's introduce some differences between the two branches. Add a new table to store addresses on the `feature/address` branch.

Tab: Console

In the **SQL Editor**, make sure you select `feature/address` as the branch and `people` as the database. Enter this SQL statement to create a new `address` table.

```sql
CREATE TABLE address (
    id SERIAL PRIMARY KEY,
    person_id INTEGER NOT NULL,
    street TEXT NOT NULL,
    city TEXT NOT NULL,
    state TEXT NOT NULL,
    zip_code TEXT NOT NULL,
    FOREIGN KEY (person_id) REFERENCES person(id)
);
```

Tab: CLI

1. Connect to your `feature/address` branch

   By adding `--psql` to the CLI command, you can start the `psql` connection without having to enter the connection string directly:

   ```bash
   neon connection-string feature/address --database-name people --psql
   ```

   Response:

   ```bash
   INFO: Connecting to the database using psql...
   psql (16.1, server 16.2)
   SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
   Type "help" for help.

   people=>
   ```

1. Add a new address table

   ```sql
   CREATE TABLE address (
       id SERIAL PRIMARY KEY,
       person_id INTEGER NOT NULL,
       street TEXT NOT NULL,
       city TEXT NOT NULL,
       state TEXT NOT NULL,
       zip_code TEXT NOT NULL,
       FOREIGN KEY (person_id) REFERENCES person(id)
   );
   ```

Tab: API

1. Retrieve the database connection string for the `feature/address` branch using the [Get connection URI](https://api-docs.neon.tech/reference/getconnectionuri) endpoint:

   ```bash
   curl --request GET \
        --url 'https://console.neon.tech/api/v2/projects/royal-band-06902338/connection_uri?branch_id=br-mute-dew-a5930esi&database_name=people&role_name=alex' \
        --header 'accept: application/json' \
        --header 'authorization: Bearer $NEON_API_KEY'
   ```

   The API call will return a connection string similar to this one:

   ```json
   {
     "uri": "postgresql://alex:*********@ep-hidden-sun-a5de9i5h-pooler.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require"
   }
   ```

1. Connect to the `people` database on the `feature/address` branch with `psql`:

   ```bash
   psql 'postgresql://alex:*********@ep-hidden-sun-a5de9i5h-pooler.us-east-2.aws.neon.tech/people?sslmode=require&channel_binding=require'
   ```

1. Add a new `address` table.
```sql
CREATE TABLE address (
    id SERIAL PRIMARY KEY,
    person_id INTEGER NOT NULL,
    street TEXT NOT NULL,
    city TEXT NOT NULL,
    state TEXT NOT NULL,
    zip_code TEXT NOT NULL,
    FOREIGN KEY (person_id) REFERENCES person(id)
);
```

## View the schema differences

Now that you have some differences between your branches, you can view the schema differences.

Tab: Console

1. Click on `feature/address` to open the detailed view, then click **Schema diff**.
1. Make sure you select `people` as the database and then click **Compare**.

You will see the schema differences between `feature/address` and its parent `production`, including the new `address` table that we added to the `feature/address` branch.

You can also launch Schema Diff from the **Restore** page, usually as part of verifying schemas before you restore a branch to its own or another branch's history. See [Instant restore](https://neon.com/docs/guides/branch-restore) for more info.

Tab: CLI

Compare the schema of `feature/address` to its parent branch using the `schema-diff` command.

```bash
neon branches schema-diff production feature/address --database people
```

The result shows a comparison between the `feature/address` branch and its parent branch for the database `people`. The output indicates that the `address` table and its related sequences and constraints have been added in the `feature/address` branch but are not present in its parent branch `production`.

```diff
--- Database: people (Branch: br-falling-dust-a5bakdqt)
+++ Database: people (Branch: br-morning-heart-a5ltt10i)
@@ -20,8 +20,46 @@
 SET default_table_access_method = heap;

 --
+-- Name: address; Type: TABLE; Schema: public; Owner: neondb_owner
+--
+
+CREATE TABLE public.address (
+    id integer NOT NULL,
+    person_id integer NOT NULL,
+    street text NOT NULL,
+    city text NOT NULL,
+    state text NOT NULL,
+    zip_code text NOT NULL
+);
+
+
+ALTER TABLE public.address OWNER TO neondb_owner;
+
+...
```

Tab: API

Compare the schema of the `feature/address` branch to its parent branch using the `compare-schema` API.

```bash
curl --request GET \
     --url 'https://console.neon.tech/api/v2/projects/royal-band-06902338/branches/br-mute-dew-a5930esi/compare_schema?base_branch_id=br-bitter-bird-a56n6lh4&db_name=people' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer $NEON_API_KEY' | jq -r '.diff'
```

| Parameter        | Description                                                                               | Required | Example                   |
| ---------------- | ----------------------------------------------------------------------------------------- | -------- | ------------------------- |
| `project_id`     | The ID of your Neon project.                                                               | Yes      | `royal-band-06902338`     |
| `branch_id`      | The ID of the target branch to compare.                                                    | Yes      | `br-mute-dew-a5930esi`    |
| `base_branch_id` | The ID of the base branch for comparison — the parent branch in this case.                 | Yes      | `br-bitter-bird-a56n6lh4` |
| `db_name`        | The name of the database in the target branch.                                             | Yes      | `people`                  |
| `Authorization`  | Bearer token for API access (your [Neon API key](https://neon.com/docs/manage/api-keys))   | Yes      | `$NEON_API_KEY`           |

**Note**: The optional `jq -r '.diff'` command extracts the diff field from the JSON response and outputs it as plain text to make it easier to read. This command would not be necessary when using the endpoint programmatically.

The result shows a comparison between the `feature/address` branch and its parent branch for the database `people`. The output indicates that the `address` table and its related sequences and constraints have been added to the `feature/address` branch but are not present in its parent branch.
```diff --- a/people +++ b/people @@ -21,6 +21,44 @@ SET default_table_access_method = heap; -- +-- Name: address; Type: TABLE; Schema: public; Owner: alex +-- + +CREATE TABLE public.address ( + id integer NOT NULL, + person_id integer NOT NULL, + street text NOT NULL, + city text NOT NULL, + state text NOT NULL, + zip_code text NOT NULL +); + + +ALTER TABLE public.address OWNER TO alex; + +-- +-- Name: address_id_seq; Type: SEQUENCE; Schema: public; Owner: alex +-- + +CREATE SEQUENCE public.address_id_seq + AS integer + START WITH 1 + INCREMENT BY 1 + NO MINVALUE + NO MAXVALUE + CACHE 1; + + +ALTER SEQUENCE public.address_id_seq OWNER TO alex; + +-- +-- Name: address_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: alex +-- + +ALTER SEQUENCE public.address_id_seq OWNED BY public.address.id; + + +-- -- Name: person; Type: TABLE; Schema: public; Owner: alex -- @@ -56,6 +94,13 @@ -- +-- Name: address id; Type: DEFAULT; Schema: public; Owner: alex +-- + +ALTER TABLE ONLY public.address ALTER COLUMN id SET DEFAULT nextval('public.address_id_seq'::regclass); + + +-- -- Name: person id; Type: DEFAULT; Schema: public; Owner: alex -- @@ -63,6 +108,14 @@ -- +-- Name: address address_pkey; Type: CONSTRAINT; Schema: public; Owner: alex +-- + +ALTER TABLE ONLY public.address + ADD CONSTRAINT address_pkey PRIMARY KEY (id); + + +-- -- Name: person person_email_key; Type: CONSTRAINT; Schema: public; Owner: alex -- @@ -79,6 +132,14 @@ -- +-- Name: address address_person_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: alex +-- + +ALTER TABLE ONLY public.address + ADD CONSTRAINT address_person_id_fkey FOREIGN KEY (person_id) REFERENCES public.person(id); + + +-- -- Name: DEFAULT PRIVILEGES FOR SEQUENCES; Type: DEFAULT ACL; Schema: public; Owner: cloud_admin -- ``` --- # Source: https://neon.com/llms/guides-schema-diff.txt # Schema diff > The "Schema diff" documentation for Neon explains how to compare and identify differences between database schemas, facilitating schema management and version control within Neon's platform. ## Source - [Schema diff HTML](https://neon.com/docs/guides/schema-diff): The original HTML version of this documentation Neon's Schema Diff tool lets you compare an SQL script of the schemas for two selected branches in a side-by-side view (or line-by-line on mobile devices). ## How Schema Diff works Schema Diff is available in the Neon Console for use in two ways: - Compare a branch's schema to its parent - Compare selected branches during an instant restore operation You can also use the `branches schema-diff` command in the Neon CLI or `compare-schema` endpoint in the Neon API to effect a variety of comparisons. ### Compare to parent In the detailed view for any child branch, you can check the schema differences between the selected branch and its parent. Use this view to verify the state of these schemas before you [Reset from parent](https://neon.com/docs/guides/reset-from-parent). ### Compare to another branch's history Built into the Time Travel assist editor, you can use Schema Diff to help when restoring branches, letting you compare states of your branch against its own or another branch's history before you complete a [branch restore](https://neon.com/docs/guides/branch-restore) operation. ### Comparisons using the CLI or API You can use the Neon CLI to compare a branch to any point in its own or any other branch's history. 
The `branches schema-diff` command offers full flexibility for any type of schema comparison: between a branch and its parent, a branch and its earlier state, or a branch to the head or prior state of another branch. The Neon API provides a `compare-schema` endpoint that lets you compare schemas between Neon branches programmatically, supporting CI/CD automation and AI agent use cases. ### Practical Applications - **Pre-Migration Reviews**: Before migrating schemas from a development branch into main, use Schema Diff to ensure only intended schema changes are applied. - **Audit Changes**: Historically compare schema changes to understand the evolution of your database structure. - **Consistency Checks**: Ensure environment consistency by comparing schemas across development, staging, and production branches. - **Automation**: Integrate schema-diff into CI/CD pipelines to automatically compare schemas during deployments. - **AI Agents**: Enable AI agents to retrieve schema differences programmatically to support agent-driven database migrations. ## How to Use Schema Diff You can launch the Schema Diff viewer from the **Branches** and **Restore** pages in the Neon Console. ### From the Branches page Open the detailed view for the branch whose schema you want to inspect. In the row of details for the parent branch, under the **COMPARE TO PARENT** block, click **Open schema diff**. ### From the Restore page Just like with [Time Travel Assist](https://neon.com/docs/guides/branch-restore#using-time-travel-assist), your first step is to choose the branch you want to restore, then choose where you want to restore from: **From history** (its own history) or **From another branch** (from another branch's history). Click the **Schema Diff** button, verify that your selections are correct, then click **Compare**. The two-pane view shows the schema for both your target and your selected branches. ### Using the Neon CLI You can use the Neon CLI to: - Compare the latest schemas of any two branches - Compare against a specific point in its own or another branch's history Use the `schema-diff` subcommand from the `branches` command: ```bash neon branches schema-diff [base-branch] [compare-source[@(timestamp|lsn)]] ``` The operation will compare a selected branch (`[compare-source]`) against the latest (head) of your base branch (`[base-branch]`). For example, if you want to compare recent changes you made to your development branch `development` against your production branch `production`, identify `production` as your base branch and `development` as your compare-source. ```bash neon branches schema-diff production development ``` You have a few options here: - Append a timestamp or LSN to compare to a specific point in `development` branch's history. - If you are regularly comparing development branches against `production`, include `production` in your `set-context` file. You can then leave out the [base-branch] from the command. - Use aliases to shorten the command. - Include `--database` to reduce the diff to a single database. If you don't specify a database, the diff will include all databases on the branch. 
Here is the same command using aliases, with `production` included in `set-context`, pointing to an LSN from `development` branch's history, and limiting the diff to the database `people`:

```bash
neon branch sd development@0/123456 --db people
```

To find out what other comparisons you can make, see [Neon CLI commands — branches](https://neon.com/docs/reference/cli-branches#schema-diff) for full documentation of the command.

### Using the Neon API

The [compare_schema](https://api-docs.neon.tech/reference/getprojectbranchschemacomparison) endpoint lets you compare schemas between Neon branches to track schema changes. The response highlights differences in a `diff` format, making it a useful tool for integrating schema checks into CI/CD workflows.

Another use case for schema diff via the Neon API is AI agent-driven workflows. The `compare_schema` endpoint allows AI agents to programmatically retrieve schema differences by comparing two branches.

To compare schemas between two branches, you can use a cURL command similar to the one below, which compares the schema of a target branch to the schema of a base branch. For example, the target branch could be a development branch where a schema change was applied, and the base branch could be the parent of the development branch. By comparing the two, you can inspect the changes that have been made on the development branch.

```bash
curl --request GET \
     --url 'https://console.neon.tech/api/v2/projects/wispy-butterfly-25042691/branches/br-rough-boat-a54bs9yb/compare_schema?base_branch_id=br-royal-star-a54kykl2&db_name=neondb' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer $NEON_API_KEY' | jq -r '.diff'
```

The `compare_schema` endpoint supports the following parameters:

| Parameter        | Description                                                                               | Required | Example                    |
| ---------------- | ----------------------------------------------------------------------------------------- | -------- | -------------------------- |
| `project_id`     | The ID of your Neon project.                                                               | Yes      | `wispy-butterfly-25042691` |
| `branch_id`      | The ID of the target branch to compare — the branch with the modified schema.              | Yes      | `br-rough-boat-a54bs9yb`   |
| `base_branch_id` | The ID of the base branch for comparison.                                                  | Yes      | `br-royal-star-a54kykl2`   |
| `db_name`        | The name of the database in the target branch.                                             | Yes      | `neondb`                   |
| `lsn`            | The LSN on the target branch for which the schema is retrieved.                            | No       | `0/1EC5378`                |
| `timestamp`      | The point in time on the target branch for which the schema is retrieved.                  | No       | `2022-11-30T20:09:48Z`     |
| `base_lsn`       | The LSN for the base branch schema.                                                        | No       | `0/2FC6321`                |
| `base_timestamp` | The point in time for the base branch schema.                                              | No       | `2022-11-30T20:09:48Z`     |
| `Authorization`  | Bearer token for API access (your [Neon API key](https://neon.com/docs/manage/api-keys))   | Yes      | `$NEON_API_KEY`            |

**Note**:

- The optional `jq -r '.diff'` command appended to the example above extracts the diff field from the JSON response and outputs it as plain text to make it easier to read. This command is not necessary when using the endpoint programmatically.
- `timestamp` or `lsn` / `base_timestamp` or `base_lsn` values can be used to compare schemas as they existed at a precise time or [LSN](https://neon.com/docs/reference/glossary#lsn).
- `timestamp` / `base_timestamp` values must be provided in [RFC 3339 format](https://tools.ietf.org/html/rfc3339#section-5.6) (Date and Time on the Internet: Timestamps).
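For programmatic use (a CI check, for instance), a minimal TypeScript sketch of the same call might look like this. The helper name and typing are assumptions; the endpoint path, query parameters, and `diff` response field follow the cURL example above, and Node 18+ is assumed for the built-in `fetch`:

```typescript
// Sketch: retrieve the schema diff between two branches via compare_schema.
const NEON_API_KEY = process.env.NEON_API_KEY;

async function compareSchema(
  projectId: string,
  branchId: string,
  baseBranchId: string,
  dbName: string
): Promise<string> {
  if (!NEON_API_KEY) throw new Error('NEON_API_KEY is not set');
  const url = new URL(
    `https://console.neon.tech/api/v2/projects/${projectId}/branches/${branchId}/compare_schema`
  );
  url.searchParams.set('base_branch_id', baseBranchId);
  url.searchParams.set('db_name', dbName);

  const res = await fetch(url, {
    headers: {
      accept: 'application/json',
      authorization: `Bearer ${NEON_API_KEY}`,
    },
  });
  if (!res.ok) throw new Error(`Neon API error: ${res.status}`);
  const { diff } = (await res.json()) as { diff: string };
  return diff; // a unified diff of the two branch schemas
}
```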
Here's an example of the `compare_schema` diff output for the `neondb` database after comparing target branch `br-rough-boat-a54bs9yb` with the base branch `br-royal-star-a54kykl2`. ```diff --- a/neondb +++ b/neondb @@ -27,7 +27,8 @@ CREATE TABLE public.playing_with_neon ( id integer NOT NULL, name text NOT NULL, - value real + value real, + created_at timestamp without time zone DEFAULT CURRENT_TIMESTAMP ); ``` **Output explanation:** - `-` (minus) identifies lines that were removed from the base branch schema. - `+` (plus) identifies lines that were added in the target branch schema. In the example above, the `created_at` column was added to the `public.playing_with_neon` table on the target branch. ## Schema Diff GitHub Action Neon supports a [Schema Diff GitHub Action](https://neon.com/docs/guides/branching-github-actions#schema-diff-action) that performs a database schema diff on specified Neon branches for each pull request and writes a comment to the pull request highlighting the schema differences. This action supports workflows where schema changes are made on a branch. When you create or update a pull request containing schema changes, the action automatically generates a comment within the pull request. By including the schema diff as part of the comment, reviewers can easily assess the changes directly within the pull request. To learn more, see the [Schema Diff GitHub Action](https://neon.com/docs/guides/branching-github-actions#schema-diff-action). ## Tutorial For a step-by-step guide showing you how to compare two development branches using Schema Diff, see [Schema diff tutorial](https://neon.com/docs/guides/schema-diff-tutorial). --- # Source: https://neon.com/llms/guides-sequelize.txt # Schema migration with Neon Postgres and Sequelize > This document outlines the process of performing schema migrations in Neon Postgres using Sequelize, detailing the steps for setting up and executing migrations within a Neon environment. ## Source - [Schema migration with Neon Postgres and Sequelize HTML](https://neon.com/docs/guides/sequelize): The original HTML version of this documentation [Sequelize](https://sequelize.org/) is a promise-based Node.js ORM that supports multiple relational databases. In this guide, we'll explore how to use `Sequelize` ORM with a Neon Postgres database in a JavaScript project. We'll create a Node.js application, configure `Sequelize`, and show how to set up and run migrations with `Sequelize`. ## Prerequisites To follow along with this guide, you will need: - A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples. - [Node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed on your local machine. We'll use Node.js to build and test the application locally. ## Setting up your Neon database ### Initialize a new project 1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section. 2. Select an existing project or click the **New Project** button to create a new one. ### Retrieve your Neon database connection string You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Keep your connection string handy for later use.
**Note**: Neon supports both direct and pooled database connection strings. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can be prone to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations. For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). ## Setting up the Node application ### Create a new Node project We'll create a simple catalog with API endpoints that query the database for authors and a list of their books. Run the following commands in your terminal to set up a new project using `Express.js`: ```bash mkdir neon-sequelize-guide && cd neon-sequelize-guide npm init -y && touch .env index.js npm install express dotenv ``` Add the `DATABASE_URL` environment variable to the `.env` file, which you'll use to connect to your Neon database. Use the connection string that you obtained from the Neon Console earlier: ```bash # .env DATABASE_URL=NEON_DATABASE_CONNECTION_STRING ``` To use the `Sequelize` ORM to run queries, we need to install the `sequelize` package and the `pg` driver to connect to Postgres from Node.js. We also need to install the `sequelize-cli` package to manage data models and run migrations. Run the following commands to install the required packages: ```bash npm install sequelize pg pg-hstore npm install sequelize-cli --save-dev ``` ### Configure Sequelize Run the following command to initialize the `sequelize` configuration: ```bash npx sequelize init ``` This command creates `config`, `migrations`, `models`, and `seeders` directories at the project root. The `config` directory contains the `config.json` file, which holds the database configuration. We want the database URL to be read from an environment variable, so we replace it with a `config.js` file. Create a `config.js` file in your `config/` directory and add the following code: ```javascript // config/config.js const dotenv = require('dotenv'); dotenv.config(); module.exports = { development: { url: process.env.DATABASE_URL, dialect: 'postgres', dialectOptions: { ssl: { require: true } }, }, }; ``` To make the `sequelize` CLI aware of the path to the new configuration file, we need to create a `.sequelizerc` file at the project root and add the following code: ```javascript // .sequelizerc const path = require('path'); module.exports = { config: path.resolve('config', 'config.js'), }; ``` ### Create models and set up migrations We'll create an `Author` and a `Book` model to represent the tables in our database. Run the following commands to create the models: ```bash npx sequelize model:generate --name Author --attributes name:string,bio:string npx sequelize model:generate --name Book --attributes title:string ``` Sequelize creates a new file for each model in the `models/` directory and a corresponding migration file in the `migrations/` directory. Sequelize automatically adds an `id` field as the primary key for each model, and `createdAt` and `updatedAt` fields to track the creation and update times of each record. We still need to define the relationships between the `Author` and `Book` models.
Update the `book.js` file with the following code: ```javascript // models/book.js 'use strict'; const { Model } = require('sequelize'); module.exports = (sequelize, DataTypes) => { class Book extends Model { static associate(models) { Book.belongsTo(models.Author, { foreignKey: 'authorId', as: 'author', onDelete: 'CASCADE', }); } } Book.init( { title: { type: DataTypes.STRING, allowNull: false }, authorId: { type: DataTypes.INTEGER, allowNull: false }, }, { sequelize, modelName: 'Book', } ); return Book; }; ``` Sequelize does not automatically regenerate the migration files when you update the models. So, we need to manually update the migration files to add the foreign key constraint. Update the migration file corresponding to the `Book` model with the following code: ```javascript 'use strict'; /** @type {import('sequelize-cli').Migration} */ module.exports = { async up(queryInterface, Sequelize) { await queryInterface.createTable('Books', { id: { allowNull: false, autoIncrement: true, primaryKey: true, type: Sequelize.INTEGER, }, title: { type: Sequelize.STRING, }, createdAt: { allowNull: false, type: Sequelize.DATE, }, updatedAt: { allowNull: false, type: Sequelize.DATE, }, authorId: { type: Sequelize.INTEGER, onDelete: 'CASCADE', references: { model: 'Authors', key: 'id', }, }, }); }, async down(queryInterface, Sequelize) { await queryInterface.dropTable('Books'); }, }; ``` Run the following command to apply the migrations and create the tables in the database: ```bash npx sequelize db:migrate ``` If `Sequelize` successfully connects to the database and runs the migrations, you should see a success message in the terminal. ### Add sample data to the database We'll add some sample data to the database using the `Sequelize` ORM. Create a new file named `seed.js` at the project root and add the following code: ```javascript // seed.js const { Sequelize, DataTypes } = require('sequelize'); const { config } = require('dotenv'); config(); if (!process.env.DATABASE_URL) { throw new Error('DATABASE_URL is not set'); } const sequelize = new Sequelize(process.env.DATABASE_URL, { dialectOptions: { ssl: { require: true, }, }, }); const Author = require('./models/author')(sequelize, DataTypes); const Book = require('./models/book')(sequelize, DataTypes); const seedDatabase = async () => { const author = await Author.create({ name: 'J.K. Rowling', bio: 'The creator of the Harry Potter series', }); await Book.create({ title: "Harry Potter and the Philosopher's Stone", authorId: author.id }); await Book.create({ title: 'Harry Potter and the Chamber of Secrets', authorId: author.id }); const author2 = await Author.create({ name: 'J.R.R. Tolkien', bio: 'The creator of Middle-earth and author of The Lord of the Rings.', }); await Book.create({ title: 'The Hobbit', authorId: author2.id }); await Book.create({ title: 'The Fellowship of the Ring', authorId: author2.id }); await Book.create({ title: 'The Two Towers', authorId: author2.id }); await Book.create({ title: 'The Return of the King', authorId: author2.id }); const author3 = await Author.create({ name: 'George R.R. Martin', bio: 'The author of the epic fantasy series A Song of Ice and Fire.', }); await Book.create({ title: 'A Game of Thrones', authorId: author3.id }); await Book.create({ title: 'A Clash of Kings', authorId: author3.id }); await sequelize.close(); }; seedDatabase(); ``` Run the following command to seed the database with the sample data: ```bash node seed.js ``` Sequelize will print logs to the terminal as it connects to the database and adds the sample data. ### Create API endpoints Now that the database is set up and populated with data, we can implement the API to query the authors and their books. We'll use [Express](https://expressjs.com/), which is a minimal web application framework for Node.js. Create an `index.js` file at the project root, and add the following code to set up your Express server: ```javascript // index.js const express = require('express'); const { Sequelize, DataTypes } = require('sequelize'); const { config } = require('dotenv'); config(); if (!process.env.DATABASE_URL) { throw new Error('DATABASE_URL is not set'); } const sequelize = new Sequelize(process.env.DATABASE_URL, { dialectOptions: { ssl: { require: true } }, }); // Set up the models const Author = require('./models/author')(sequelize, DataTypes); const Book = require('./models/book')(sequelize, DataTypes); // Create a new Express application const app = express(); const port = process.env.PORT || 3000; app.get('/', async (req, res) => { res.send('Hello World! This is a book catalog.'); }); app.get('/authors', async (req, res) => { try { const authors = await Author.findAll(); res.json(authors); } catch (error) { console.error('Error fetching authors:', error); res.status(500).send('Error fetching authors'); } }); app.get('/books/:author_id', async (req, res) => { const authorId = parseInt(req.params.author_id); try { const books = await Book.findAll({ where: { authorId: authorId, }, }); res.json(books); } catch (error) { console.error('Error fetching books for author:', error); res.status(500).send('Error fetching books for author'); } }); // Start the server app.listen(port, () => { console.log(`Server running on http://localhost:${port}`); }); ``` This code sets up a simple API with two endpoints: `/authors` and `/books/:author_id`. The `/authors` endpoint returns a list of all the authors, and the `/books/:author_id` endpoint returns a list of books written by the author with the given `author_id`. Run the application using the following command: ```bash node index.js ``` This will start the server at `http://localhost:3000`. Navigate to `http://localhost:3000/authors` and `http://localhost:3000/books/1` in your browser to check that the API works as expected. ## Conclusion In this guide, we set up a new JavaScript project using `Express.js` and the `Sequelize` ORM, and connected it to a Neon Postgres database. We created a schema for the database, generated and ran migrations, and implemented API endpoints to query the database. ## Source code You can find the source code for the application described in this guide on GitHub.
- [Migrations with Neon and Sequelize](https://github.com/neondatabase/guide-neon-sequelize): Run Neon database migrations using Sequelize ## Resources For more information on the tools used in this guide, refer to the following resources: - [Sequelize](https://sequelize.org/) - [Express.js](https://expressjs.com/) --- # Source: https://neon.com/llms/guides-sequin.txt # Stream changes from your Neon database to anywhere > The document outlines how to stream changes from a Neon database to external destinations using Sequin, detailing setup instructions and configuration options specific to Neon's environment. ## Source - [Stream changes from your Neon database to anywhere HTML](https://neon.com/docs/guides/sequin): The original HTML version of this documentation Neon's logical replication feature makes it possible to detect every change in your database. It can be used to power read replicas and backups, and it can also add streaming characteristics to Neon. [Sequin](https://github.com/sequinstream/sequin) uses Neon's logical replication to send records and changes from your database to your applications and services in real time. It's designed to never miss an `insert`, `update`, or `delete` and to provide exactly-once processing of all changes. Changes are sent as messages via HTTP push (webhooks) or pull (SQS-like, with Sequin SDKs). Out of the box, you can start triggering side-effects when a new record is created, fan out work to cloud functions, or activate workflows in services like trigger.dev. In this guide, we'll show you how to connect your Neon database to Sequin to start sending changes anywhere you need. ## Prerequisites - A [Sequin account](https://console.sequinstream.com/register) - A [Neon account](https://console.neon.tech/) - Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin ## Enable logical replication in Neon Sequin uses the Write-Ahead Log (WAL) to capture changes from your Postgres database. In this step, we'll enable logical replication for your Neon Postgres project. **Important**: Enabling logical replication modifies the Postgres `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the Neon **Dashboard**, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to enable logical replication. You can verify that logical replication is enabled by running the following query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor): ```sql SHOW wal_level; wal_level ----------- logical ``` ## Connect your Neon database to Sequin After enabling logical replication on Neon, you'll now connect your Neon database to Sequin. Follow these steps: 1. In Neon, copy your database connection string. You can find it by clicking the **Connect** button on your **Project Dashboard**. It will look similar to this: ```sql postgresql://neondb_owner:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require ``` 2. In the Sequin Console, click on the **Connect Database** button, and then auto-complete your database credentials by clicking the **Autofill with URL** button and pasting in your database connection string. 3. Use the SQL Editor in your Neon project to create a replication slot by executing the following SQL query: ```sql SELECT pg_create_logical_replication_slot('sequin_slot', 'pgoutput'); ``` This creates a replication slot named `sequin_slot`. 4. Create a publication to indicate which tables will publish changes to the replication slot. Run the following SQL command: ```sql CREATE PUBLICATION sequin_pub FOR TABLE table1, table2, table3; ``` **Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`. 5. Back in the Sequin Console, enter the name of the replication slot (`sequin_slot`) and publication (`sequin_pub`) you just created. Then, name your database (e.g. `neondb`) and click **Create Database**. With these steps completed, your Neon database is now connected to Sequin via a replication slot and publication. Sequin is now detecting changes to your tables. ## Create a consumer Set up a consumer in Sequin to stream changes from your database. 1. In the Sequin Console, navigate to the **Consumers** page and click **Create Consumer**. 2. Select the Neon database you just created and then select the specific table you want to process changes for. 3. Define any filters for the changes you want to capture. For example, you might want to only process orders with a value greater than a certain amount, or accounts with a certain status. 4. Choose whether you want your consumer to process [rows or changes](https://sequinstream.com/docs/core-concepts#rows-and-changes): - **Rows**: Captures the latest state of records when a row is inserted or updated. - **Changes**: Captures every `insert`, `update`, and `delete`, including `OLD` values for updates and deletes. 5. Select your preferred method for [receiving changes](https://sequinstream.com/docs/core-concepts#consumption): - **HTTP Push** (Webhooks): Sequin sends changes to your specified endpoint. - **HTTP Pull** (similar to SQS): Your application pulls changes from Sequin. 6. Enter the final details for your consumer: - Give your consumer a name (e.g., `neon-changes-consumer`). - If using HTTP Push, provide the endpoint URL where Sequin should send the changes (a minimal receiver sketch follows at the end of this guide). You can also provide encrypted headers. - Optionally, set a timeout and add an endpoint path. 7. Click **Create Consumer** to finalize the setup. Your consumer is now created and will start processing changes from your Neon database according to your specified configuration. ## Where to next? You're now using Sequin with Neon to capture and stream changes from your database. From here, you can tailor your implementation for your use case: - Use Sequin to trigger workflows in tools like Inngest or trigger.dev, activate side-effects in your app, set up audit logs, or generate denormalized views. - Tailor your consumer's [filtering](https://sequinstream.com/docs/core-concepts#filtering) and settings to meet your requirements. - Try a [pull consumer](https://sequinstream.com/docs/core-concepts#pull-consumers) with [our SDKs](https://sequinstream.com/docs/sdks) to completely manage how you retrieve changes at scale.
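If you chose HTTP Push, the endpoint you provide just needs to accept Sequin's POST requests and acknowledge them with a 2xx response. Here is a minimal, illustrative Express receiver — the route path is arbitrary, and the payload shape is an assumption, so consult Sequin's documentation for the exact message format your consumer sends:

```javascript
// Minimal sketch of an HTTP Push (webhook) receiver for Sequin messages.
// The body shape is an assumption for illustration; check Sequin's docs
// for the exact format delivered by your consumer.
const express = require('express');

const app = express();
app.use(express.json());

app.post('/sequin/changes', (req, res) => {
  // Inspect or process whatever Sequin delivered for this change
  console.log('Received change message:', JSON.stringify(req.body));
  // A 2xx response acknowledges the message; webhook systems like
  // Sequin generally retry deliveries that are not acknowledged.
  res.sendStatus(200);
});

app.listen(3000, () => console.log('Webhook receiver listening on :3000'));
```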
--- # Source: https://neon.com/llms/guides-solid-start.txt # Connect a SolidStart application to Neon > The document guides users on integrating a SolidStart application with Neon by detailing the steps to establish a connection, configure environment variables, and manage database interactions within the SolidStart framework. ## Source - [Connect a SolidStart application to Neon HTML](https://neon.com/docs/guides/solid-start): The original HTML version of this documentation SolidStart is an open-source meta-framework designed to integrate the components that make up a web application. This guide explains how to connect SolidStart with Neon using a secure server-side request. To create a Neon project and access it from a SolidStart application: ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details, including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Create a SolidStart project and add dependencies 1. Create a SolidStart project if you do not have one. For instructions, see [Quick Start](https://docs.solidjs.com/solid-start/getting-started) in the SolidStart documentation. 2. Add project dependencies using one of the following commands: Tab: node-postgres ```shell npm install pg ``` Tab: postgres.js ```shell npm install postgres ``` Tab: Neon serverless driver ```shell npm install @neondatabase/serverless ``` ## Store your Neon credentials Add a `.env` file to your project directory and add your Neon connection string to it. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). ```shell DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require" ``` ## Configure the Postgres client There are multiple ways to make server-side requests with SolidStart. See below for the different implementations.
### Server-Side Data Loading To [load data on the server](https://docs.solidjs.com/solid-start/building-your-application/data-loading#data-loading-always-on-the-server) in SolidStart, add the following code snippet to connect to your Neon database: Tab: node-postgres ```typescript import pg from 'pg'; import { createAsync } from "@solidjs/router"; const getVersion = async () => { "use server"; const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL, }); const client = await pool.connect(); const response = await client.query('SELECT version()'); return response.rows[0].version; } export const route = { load: () => getVersion(), }; export default function Page() { const version = createAsync(() => getVersion()); return <>{version()}</>; } ``` Tab: postgres.js ```typescript import postgres from 'postgres'; import { createAsync } from "@solidjs/router"; const getVersion = async () => { "use server"; const sql = postgres(import.meta.env.DATABASE_URL, { ssl: 'require' }); const response = await sql`SELECT version()`; return response[0].version; } export const route = { load: () => getVersion(), }; export default function Page() { const version = createAsync(() => getVersion()); return <>{version()}</>; } ``` Tab: Neon serverless driver ```typescript import { neon } from "@neondatabase/serverless"; import { createAsync } from "@solidjs/router"; const getVersion = async () => { "use server"; const sql = neon(`${process.env.DATABASE_URL}`); const response = await sql`SELECT version()`; const { version } = response[0]; return version; } export const route = { load: () => getVersion(), }; export default function Page() { const version = createAsync(() => getVersion()); return <>{version()}</>; } ``` ### Server Endpoints (API Routes) In the server endpoints (API routes) of your SolidStart application, use the following code snippet to connect to your Neon database: Tab: node-postgres ```javascript // File: routes/api/test.ts import { Pool } from 'pg'; const pool = new Pool({ connectionString: import.meta.env.DATABASE_URL, ssl: true, }); export async function GET() { const client = await pool.connect(); let data = {}; try { const { rows } = await client.query('SELECT version()'); data = rows[0]; } finally { client.release(); } return new Response(JSON.stringify(data), { headers: { 'Content-Type': 'application/json' } }); } ``` Tab: postgres.js ```javascript // File: routes/api/test.ts import postgres from 'postgres'; export async function GET() { const sql = postgres(import.meta.env.DATABASE_URL, { ssl: 'require' }); const response = await sql`SELECT version()`; return new Response(JSON.stringify(response[0]), { headers: { 'Content-Type': 'application/json' }, }); } ``` Tab: Neon serverless driver ```javascript // File: routes/api/test.ts import { neon } from '@neondatabase/serverless'; export async function GET() { const sql = neon(import.meta.env.DATABASE_URL); const response = await sql`SELECT version()`; return new Response(JSON.stringify(response[0]), { headers: { 'Content-Type': 'application/json' }, }); } ``` ## Run the app When you run `npm run dev` you can expect to see the following on [localhost:3000](http://localhost:3000): ```shell PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit ``` ## Source code You can find the source code for the application described in this guide on GitHub.
- [Get started with SolidStart and Neon](https://github.com/neondatabase/examples/tree/main/with-solid-start) --- # Source: https://neon.com/llms/guides-sqlalchemy-migrations.txt # Schema migration with Neon Postgres and SQLAlchemy > The document guides users on performing schema migrations using Neon Postgres and SQLAlchemy, detailing the steps to set up and execute migrations within a Neon database environment. ## Source - [Schema migration with Neon Postgres and SQLAlchemy HTML](https://neon.com/docs/guides/sqlalchemy-migrations): The original HTML version of this documentation [SQLAlchemy](https://www.sqlalchemy.org/) is a popular SQL toolkit and Object-Relational Mapping (ORM) library for Python. SQLAlchemy provides a powerful way to interact with databases and manage database schema changes using [Alembic](https://alembic.sqlalchemy.org/), a lightweight database migration tool. This guide demonstrates how to use SQLAlchemy/Alembic to manage schema migrations for a Neon Postgres database. We create a simple API using the [FastAPI](https://fastapi.tiangolo.com/) web framework and define database models using SQLAlchemy. We then generate and run migrations to manage schema changes over time. ## Prerequisites To follow along with this guide, you will need: - A Neon account. If you do not have one, sign up at [Neon](https://neon.tech). Your Neon project comes with a ready-to-use Postgres database named `neondb`. We'll use this database in the following examples. - [Python](https://www.python.org/) installed on your local machine. We recommend using a newer version of Python, 3.8 or higher. ## Setting up your Neon database ### Initialize a new project 1. Log in to the Neon Console and navigate to the [Projects](https://console.neon.tech/app/projects) section. 2. Select a project or click the **New Project** button to create a new one. ### Retrieve your Neon database connection string You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` Keep your connection string handy for later use. **Note**: Neon supports both direct and pooled database connection strings. You can find a connection string for your database by clicking the **Connect** button on your **Project Dashboard**. A pooled connection string connects your application to the database via a PgBouncer connection pool, allowing for a higher number of concurrent connections. However, using a pooled connection string for migrations can be prone to errors. For this reason, we recommend using a direct (non-pooled) connection when performing migrations. For more information about direct and pooled connections, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). ## Setting up the Web application ### Set up the Python environment To manage our project dependencies, we create a new Python virtual environment. Run the following commands in your terminal to set it up. 
```bash python -m venv myenv ``` Activate the virtual environment by running the following command: ```bash # On macOS and Linux source myenv/bin/activate # On Windows myenv\Scripts\activate ``` With the virtual environment activated, we can create a new directory for our FastAPI project and install the required packages: ```bash mkdir guide-neon-sqlalchemy && cd guide-neon-sqlalchemy pip install sqlalchemy alembic "psycopg2-binary" pip install fastapi uvicorn python-dotenv pip freeze > requirements.txt ``` We installed SQLAlchemy, Alembic, and the `psycopg2-binary` package to connect to the Neon Postgres database. We then installed the `FastAPI` package to create the API endpoints and `uvicorn` as the web server. We then saved the installed packages to a `requirements.txt` file so the project can be easily recreated in another environment. ### Set up the database configuration Create a `.env` file in the project root directory and add the `DATABASE_URL` environment variable to it. Use the connection string that you obtained from the Neon Console earlier: ```bash # .env DATABASE_URL=NEON_POSTGRES_CONNECTION_STRING ``` We create an `app` directory at the project root to store the database models and configuration files. ```bash mkdir app touch app/__init__.py ``` Next, create a new file named `database.py` in the `app` subdirectory and add the following code: ```python # app/database.py import os import dotenv from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker dotenv.load_dotenv() SQLALCHEMY_DATABASE_URL = os.getenv("DATABASE_URL") engine = create_engine(SQLALCHEMY_DATABASE_URL) SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) Base = declarative_base() ``` This code sets up the database connection using SQLAlchemy. It reads the `DATABASE_URL` environment variable, creates a database engine, and defines a `SessionLocal` class for database sessions. The `Base` class is used as a base class for defining database models. ## Defining data models and running migrations ### Specify the data model Create a new file named `models.py` in the `app` subdirectory and define the database models for your application: ```python # app/models.py from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey from sqlalchemy.orm import relationship from sqlalchemy.sql import func from .database import Base class Author(Base): __tablename__ = "authors" id = Column(Integer, primary_key=True, index=True) name = Column(String(100), nullable=False) bio = Column(Text) created_at = Column(DateTime(timezone=True), server_default=func.now()) books = relationship("Book", back_populates="author") class Book(Base): __tablename__ = "books" id = Column(Integer, primary_key=True, index=True) title = Column(String(200), nullable=False) author_id = Column(Integer, ForeignKey("authors.id"), nullable=False) created_at = Column(DateTime(timezone=True), server_default=func.now()) author = relationship("Author", back_populates="books") ``` This code defines two models: `Author` and `Book`. The `Author` model represents an author with fields for `name`, `bio`, and a `created_at` timestamp. The `Book` model represents a book with fields for `title`, `author` (as a foreign key to the `Author` model), and a `created_at` timestamp. The `relationship` function is used to define the one-to-many relationship between `Author` and `Book`.
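Before wiring these models into migrations and the API, here is a short, illustrative session showing what the `books` relationship gives you. It assumes the tables already exist and contain data, which the migration and seeding steps below take care of:

```python
# explore.py (illustrative; run after migrating and seeding the database)
from app.database import SessionLocal
from app.models import Author

db = SessionLocal()
try:
    author = db.query(Author).filter_by(name="J.R.R. Tolkien").first()
    if author:
        # The `books` relationship lazy-loads the related Book rows
        for book in author.books:
            print(book.title)
finally:
    db.close()
```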
### Initialize Alembic To initialize Alembic for managing database migrations, run the following command in your terminal: ```bash alembic init alembic ``` This command creates a new directory named `alembic` with the necessary files for managing migrations. Open the `env.py` file in the `alembic` directory and update the `target_metadata` variable to include the models defined in the `models.py` file: ```python # alembic/env.py from app.models import Base target_metadata = Base.metadata ``` We update the `alembic/env.py` file again to load the database URL from the `.env` file at the project root and set it as the `sqlalchemy.url` configuration option. ```python # alembic/env.py import dotenv import os dotenv.load_dotenv() config.set_main_option('sqlalchemy.url', os.getenv('DATABASE_URL', "")) ``` ### Generate the initial migration To generate the initial migration based on the defined models, run the following command: ```bash alembic revision --autogenerate -m "init-setup" ``` This command detects the `Author` and `Book` models and generates a new migration file in the `alembic/versions` directory. ### Apply the migration To apply the migration and create the corresponding tables in the Neon Postgres database, run the following command: ```bash alembic upgrade head ``` This command executes the migration file and creates the necessary tables in the database. ### Seed the database To seed the database with some initial data, create a new file named `seed.py` in the project root and add the following code: ```python # seed.py from app.database import SessionLocal from app.models import Author, Book def seed_data(): db = SessionLocal() # Create authors authors = [ Author( name="J.R.R. Tolkien", bio="The creator of Middle-earth and author of The Lord of the Rings." ), Author( name="George R.R. Martin", bio="The author of the epic fantasy series A Song of Ice and Fire." ), Author( name="J.K. Rowling", bio="The creator of the Harry Potter series." ), ] db.add_all(authors) db.commit() # Create books books = [ Book(title="The Fellowship of the Ring", author=authors[0]), Book(title="The Two Towers", author=authors[0]), Book(title="The Return of the King", author=authors[0]), Book(title="A Game of Thrones", author=authors[1]), Book(title="A Clash of Kings", author=authors[1]), Book(title="Harry Potter and the Philosopher's Stone", author=authors[2]), Book(title="Harry Potter and the Chamber of Secrets", author=authors[2]), ] db.add_all(books) db.commit() print("Data seeded successfully.") if __name__ == "__main__": seed_data() ``` Now, run the `seed.py` script to seed the database with the initial data: ```bash python seed.py ``` ## Implement the web application ### Create API endpoints Create a file named `main.py` in the project root directory and define the FastAPI application with endpoints for interacting with authors and books: ```python # main.py from fastapi import FastAPI, Depends from sqlalchemy.orm import Session import uvicorn from app.models import Author, Book, Base from app.database import SessionLocal, engine Base.metadata.create_all(bind=engine) app = FastAPI() def get_db(): db = SessionLocal() try: yield db finally: db.close() @app.get("/authors/") def read_authors(db: Session = Depends(get_db)): authors = db.query(Author).all() return authors @app.get("/books/{author_id}") def read_books(author_id: int, db: Session = Depends(get_db)): books = db.query(Book).filter(Book.author_id == author_id).all() return books if __name__ == "__main__": uvicorn.run(app, host="127.0.0.1", port=8000) ``` This code defines endpoints for retrieving authors and books. It uses SQLAlchemy's `Session` to interact with the database and returns the query results directly from each endpoint. ### Run the FastAPI server To start the FastAPI server using `uvicorn` and test the application, run the following command: ```bash python main.py ``` Now, you can navigate to `http://localhost:8000/authors` in your browser to view the list of authors. To view the books by a specific author, navigate to `http://localhost:8000/books/{author_id}` where `{author_id}` is the ID of the author. ## Applying schema changes Let's demonstrate how to handle schema changes by adding a new field `country` to the `Author` model, to store the author's country of origin. ### Update the data model Open the `app/models.py` file and add a new field to the `Author` model: ```python # app/models.py class Author(Base): __tablename__ = "authors" id = Column(Integer, primary_key=True, index=True) name = Column(String(100), nullable=False) bio = Column(Text) country = Column(String(100)) created_at = Column(DateTime(timezone=True), server_default=func.now()) books = relationship("Book", back_populates="author") ``` ### Generate and run the migration To generate a new migration file for the schema change, run the following command: ```bash alembic revision --autogenerate -m "add-country-to-author" ``` This command detects the updated `Author` model and generates a new migration file to add the new field to the corresponding table in the database. Now, to apply the migration, run the following command: ```bash alembic upgrade head ``` ### Test the schema change Restart the FastAPI development server. ```bash python main.py ``` Navigate to `http://localhost:8000/authors` in your browser to view the list of authors. You should see the new `country` field included in each author's record, reflecting the schema change.
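For reference, the migration that Alembic autogenerates for this change typically looks something like the following. The revision identifiers here are placeholders, and your generated file will differ in detail:

```python
# alembic/versions/xxxx_add_country_to_author.py (illustrative)
import sqlalchemy as sa
from alembic import op

revision = "xxxx"       # placeholder; Alembic generates real revision IDs
down_revision = "yyyy"  # placeholder


def upgrade() -> None:
    op.add_column("authors", sa.Column("country", sa.String(length=100), nullable=True))


def downgrade() -> None:
    op.drop_column("authors", "country")
```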
## Conclusion In this guide, we demonstrated how to set up a FastAPI project with Neon Postgres, define database models using SQLAlchemy, generate migrations using Alembic, and run them. SQLAlchemy makes it easy to interact with the database, and Alembic makes it easy to manage schema evolution over time. ## Source code You can find the source code for the application described in this guide on GitHub. - [Migrations with Neon and SQLAlchemy](https://github.com/neondatabase/guide-neon-sqlalchemy): Run migrations in a Neon-SQLAlchemy project ## Resources For more information on the tools and concepts used in this guide, refer to the following resources: - [FastAPI Documentation](https://fastapi.tiangolo.com/) - [SQLAlchemy Documentation](https://docs.sqlalchemy.org/) - [Alembic Documentation](https://alembic.sqlalchemy.org/) - [Neon Postgres](https://neon.com/docs/introduction) --- # Source: https://neon.com/llms/guides-sqlalchemy.txt # Connect an SQLAlchemy application to Neon > This document guides users on connecting an SQLAlchemy application to Neon by detailing the necessary configuration steps and code examples for establishing a database connection. ## Source - [Connect an SQLAlchemy application to Neon HTML](https://neon.com/docs/guides/sqlalchemy): The original HTML version of this documentation SQLAlchemy is a Python SQL toolkit and Object Relational Mapper (ORM) that provides application developers with the full power and flexibility of SQL. This guide describes how to create a Neon project and connect to it from SQLAlchemy. **Prerequisites:** To complete the steps in this topic, ensure that you have an SQLAlchemy installation with a Postgres driver. The following instructions use `psycopg2`, the default driver for Postgres in SQLAlchemy. For SQLAlchemy installation instructions, refer to the [SQLAlchemy Installation Guide](https://docs.sqlalchemy.org/en/14/intro.html#installation). `psycopg2` installation instructions are provided below. To connect to Neon from SQLAlchemy: ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details, including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Install psycopg2 Psycopg2 is a popular Python library for running raw Postgres queries. For most operating systems, the quickest installation method is using the pip package manager. For example: ```shell pip install psycopg2-binary ``` For additional information about installing `psycopg2`, refer to the [psycopg2 installation documentation](https://www.psycopg.org/docs/install.html). ## Create the "hello neon" program ```python import psycopg2 # Optional: tell psycopg2 to cancel the query on Ctrl-C import psycopg2.extras; psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select) # You can set the password to None if it is specified in a ~/.pgpass file USERNAME = "alex" PASSWORD = "AbC123dEf" HOST = "ep-cool-darkness-123456.us-east-2.aws.neon.tech" PORT = "5432" PROJECT = "dbname" conn_str = f"dbname={PROJECT} user={USERNAME} password={PASSWORD} host={HOST} port={PORT} sslmode=require channel_binding=require" conn = psycopg2.connect(conn_str) with conn.cursor() as cur: cur.execute("SELECT 'hello neon';") print(cur.fetchall()) ``` You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**.
For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). **Note**: This example was tested with Python 3 and psycopg2 version 2.9.3. ## Create an SQLAlchemy engine for your Neon project SQLAlchemy uses an engine abstraction to manage database connections and exposes a `create_engine` function as the primary entry point for engine initialization. The following example creates an SQLAlchemy engine that points to your Neon branch: ```python from sqlalchemy import create_engine USERNAME = "alex" PASSWORD = "AbC123dEf" HOST = "ep-cool-darkness-123456.us-east-2.aws.neon.tech" DATABASE = "dbname" conn_str = f'postgresql://{USERNAME}:{PASSWORD}@{HOST}/{DATABASE}?sslmode=require&channel_binding=require' engine = create_engine(conn_str) ``` You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). For additional information about connecting from SQLAlchemy, refer to the following topics in the SQLAlchemy documentation: - [Establishing Connectivity - the Engine](https://docs.sqlalchemy.org/en/14/tutorial/engine.html) - [Connecting to PostgreSQL with SQLAlchemy](https://docs.sqlalchemy.org/en/14/core/engines.html#postgresql) ## SQLAlchemy connection errors - SQLAlchemy versions prior to 2.0.33 may reuse idle connections, leading to connection errors. If this occurs, you could encounter an `SSL connection has been closed unexpectedly` error. To resolve this, upgrade to SQLAlchemy 2.0.33 or later. For more details, see the [SQLAlchemy 2.0.33 changelog](https://docs.sqlalchemy.org/en/20/changelog/changelog_20.html#change-2.0.33-postgresql). - If you encounter an `SSL SYSCALL error: EOF detected` error when connecting to the database, this typically happens because the application is trying to reuse a connection after the Neon compute has been suspended due to inactivity. To resolve this issue, try one of the following options: - Set the SQLAlchemy `pool_recycle` parameter to a value less than or equal to the scale to zero setting configured for your compute. - Set the SQLAlchemy `pool_pre_ping` parameter to `True`. This ensures that your engine checks that a connection is alive before executing a query. For more details on the `pool_recycle` and `pool_pre_ping` parameters, refer to [SQLAlchemy: Connection Pool Configuration](https://docs.sqlalchemy.org/en/20/core/pooling.html#connection-pool-configuration) and [Dealing with Disconnects](https://docs.sqlalchemy.org/en/20/core/pooling.html#dealing-with-disconnects). For information on configuring Neon's scale to zero setting, see [Configuring Scale to Zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide). ## Schema migration with SQLAlchemy For schema migration with SQLAlchemy, see our guide: - [SQLAlchemy Migrations](https://neon.com/docs/guides/sqlalchemy-migrations): Schema migration with Neon Postgres and SQLAlchemy --- # Source: https://neon.com/llms/guides-stepzen.txt # Use StepZen with Neon > The document outlines the integration process of StepZen with Neon, detailing the steps to connect a Neon database to a StepZen GraphQL API for streamlined data querying and management.
## Source - [Use StepZen with Neon HTML](https://neon.com/docs/guides/stepzen): The original HTML version of this documentation _This guide was contributed by Roy Derks from StepZen_ GraphQL has been around for years and is becoming increasingly popular among web developers. It is a query language for APIs and a runtime for fulfilling queries with your existing data. GraphQL allows clients to access data flexibly and efficiently. However, building a GraphQL API often requires writing a lot of code and familiarizing yourself with a new framework. This guide shows how you can generate a GraphQL API for your Neon database in minutes using [StepZen](https://stepzen.com/). Why use Neon and StepZen together? Neon is serverless Postgres. Neon separates storage and compute to offer modern developer features such as scale-to-zero and database branching. With Neon, you can be up and running with a Postgres database in just a few clicks, and you can easily create and manage your database in the Neon Console and connect to it using [psql](https://neon.com/docs/connect/query-with-psql-editor) or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). What if you want to let clients consume your data through an API in a way that is both flexible and efficient? That's where StepZen comes in. StepZen is a GraphQL API platform that lets you build a GraphQL API for your Neon database in minutes. Just like Neon, it's serverless and offers a generous free plan. ## Set up Neon Before generating a GraphQL API, you must set up a Neon database, which you can do in a few steps: 1. Sign in to Neon, or [sign up](https://neon.com/docs/get-started/signing-up) if you do not yet have an account. 2. Select a Neon project. If you do not have one, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). 3. [Create a database](https://neon.com/docs/manage/databases#create-a-database) or use the ready-to-use `dbname` database. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Using the connection string, you can seed the database with the data from the `init.sql` file, which you can find [here](https://github.com/stepzen-dev/examples/blob/main/with-neon/init.sql). Running the `init.sql` file creates the `address`, `customer`, `product`, and `order` tables and populates them with the data. It also creates tables that connect the `customer` table with the `address` table, and the `order` table with the `product` table. You can seed the database directly from the terminal by running the following `psql` command: ```bash psql postgresql://[user]:[password]@[neon_hostname]/[dbname] < init.sql ``` The command takes a Neon connection string as the first argument and a file as the second argument. In the terminal, you can see that the tables are created and populated with the data. You can also view the tables and data from the **Tables** page in the Neon Console. Next, you will connect StepZen to the Neon database and use it to generate a GraphQL schema for the database. ## Connect StepZen to Neon To generate a GraphQL schema for the data in your Neon database, you need to connect StepZen to Neon. This can be done manually or by using the StepZen CLI. The StepZen CLI can be installed with `npm` (or Yarn), and it must be installed globally: ```bash npm install -g stepzen ``` After you install the CLI, create a StepZen account.
You can do this by navigating to [https://stepzen.com/](https://stepzen.com) and clicking the **Start for Free** button. To link your StepZen account to the CLI, log in using the following command: ```bash stepzen login ``` **Note**: You can also use StepZen without creating an account. The difference is that you will have a public account, which means that your schema will be public, and everyone with the link can query data from your database. For more information, refer to the [StepZen documentation](https://stepzen.com/docs/quick-start/install-and-setup). Next, create a local directory for your StepZen workspace and navigate to the directory. For example: ```bash mkdir stepzen cd stepzen ``` Specify your data source with the `stepzen import` CLI command. Answer the setup questions as shown below. ```bash stepzen import postgresql ? What would you like your endpoint to be called? api/with-neon ? What is your host? YOUR_NEON_HOST:5432 (e.g., `ep-cool-darkness-123456.us-east-2.aws.neon.tech:5432`) ? What is your database name? YOUR_NEON_DATABASE (e.g., `dbname`) ? What is the username? YOUR_NEON_USERNAME (e.g., `alex`) ? What is the password? [hidden] YOUR_NEON_PASSWORD ? Automatically link types based on foreign key relationships using @materializer (https://stepzen.com/docs/features/linking-types) Yes ? What is your database schema (leave blank to use defaults)? Starting... done Successfully imported schema postgresql from StepZen ``` The CLI has now created a GraphQL schema based on the tables and data in your Neon database. You can find the schema in the `stepzen` folder at the root of your project. The schema is generated in the `postgresql/index.graphql` file. **Note**: The **Automatically link types based on foreign key relationships using @materializer** step is essential, as it automatically links the tables based on foreign key relationships, which allows you to query data from the `customer` table and get related data from the `address` table. The `config.yaml` file stores connection details for the Neon database. The StepZen CLI uses this file to connect to the Neon database. But you need to make two changes to the file: ```bash configurationset: - configuration: name: postgresql_config uri: YOUR_NEON_DSN?user=YOUR_NEON_USERNAME&password=YOUR_NEON_PASSWORD&options=project=YOUR_NEON_PROJECT_ID&sslmode=require&channel_binding=require ``` As shown above, you need to append `&options=project=YOUR_NEON_PROJECT_ID` to the `uri` connection string. This is needed to establish a secure connection to the Neon database. The `project` option is the ID of the project in Neon. You can find the project ID in the Neon Console under **Settings** or in the URL of your project. The next section explores the GraphQL API to see how the connection between the Neon Postgres database and StepZen works. ## Explore the GraphQL API The GraphQL schema that StepZen generates still needs to be deployed to the cloud before you are able to explore the GraphQL API. With StepZen, you have multiple options to deploy your schema. You can deploy it to the StepZen cloud or run it locally using Docker. This guide uses the StepZen cloud, which is the fastest way to get started. To deploy the schema to the StepZen cloud, run the following command: ```bash stepzen start ``` After the schema is deployed, you can explore the GraphQL API in the [StepZen dashboard](https://dashboard.stepzen.com/explorer).
From the dashboard, you can view the GraphQL schema, try out queries and mutations, and generate code snippets for your favorite programming language. The CLI also outputs the URL of your GraphQL API endpoint. You can use this endpoint to query your API from other tools or applications. It's time to start querying the GraphQL API. Start by querying the `customer` table. You can do this by writing the following query on the left-hand side of the dashboard: ```graphql { getCustomerList { name email } } ``` The GraphQL API will retrieve the `name` and `email` fields from the `customer` table. The result looks like this: ```json { "data": { "getCustomerList": [ { "name": "Lucas Bill", "email": "lucas.bill@example.com" }, { // ... } ] } } ``` In GraphQL, the result has the same shape as the query (or other operation) you used to retrieve it. The GraphQL API will only retrieve the fields from the database that are present in the query. The query sent to the Neon database has the following shape: ```sql SELECT name, email FROM public.customer ``` The following section dives deeper into the GraphQL API, showing how GraphQL API queries are translated to SQL. ## From GraphQL query to SQL You have explored the GraphQL API, learning how to query data from the Neon database. But how does this work? How is a GraphQL query translated to an SQL query that runs on your Neon database? In the previous example, StepZen only requests the fields in the query, improving the GraphQL API's performance. Requesting all fields from the database makes no sense if only a few are requested. Below, you can see a snippet of the `getCustomerList` query in the `postgresql/index.graphql` file: ```graphql type Query { getCustomerList: [Customer] @dbquery( type: "postgresql" schema: "public" table: "customer" configuration: "postgresql_config" ) } ``` The `getCustomerList` query defined in the GraphQL schema returns an array of the type `Customer`. - The `@dbquery` directive identifies the query as a database query - `type` defines the type of database - `schema` defines the schema - `table` defines the table in the database - `configuration` defines the name of the connection configuration used to connect to the database Earlier, the CLI created connections based on foreign key relationships. For example, the `order` table has a foreign key relationship with the `customer` table. This means that you can query data from the `order` table, and get the related data from the `customer` table. You can query the customer linked to an order like this: ```graphql { getOrderList { id shippingcost customer { name email } } } ``` In addition to the `id` and `shippingcost` fields, the `name` and `email` fields are requested from the `customer` table. So how does the query get the `customer` field? The `getOrderList` query is defined in the GraphQL schema, and returns a list of the type `Order` with a field called `customerid`. This relationship is defined as a foreign key in the database and the GraphQL schema has a field called `customer`, which is linked to the `customerid` field. ```graphql type Order { carrier: String createdat: Date! customer: Customer @materializer(query: "getCustomer", arguments: [{ name: "id", field: "customerid" }]) customerid: Int! id: Int! lineitemList: [Lineitem] @materializer(query: "getLineitemUsingOrderid") shippingcost: Float trackingid: String } ``` The `@materializer` directive links the `customer` field to the `customerid` field. 
The `query` argument is the name of the query that retrieves the data, which in this case is `getCustomer`. The `arguments` argument is an array of objects that define the arguments passed to the query. In this case, the `id` argument is passed to the `getCustomer` query, and the value of the `id` argument is the value of the `customerid` field. When you retrieve a list of orders from the database, you can include the `customer` field for each order. StepZen then executes the `getCustomer` query with the `id` argument set to the value of the `customerid` field. ```graphql type Query { getCustomer(id: Int!): Customer @dbquery( type: "postgresql" schema: "public" table: "customer" configuration: "postgresql_config" ) } ``` This GraphQL query is translated to the following SQL query, which is run on the Neon Postgres database. ```sql SELECT name, email FROM public.customer WHERE id = $1 ``` And together with the previous query, it is translated to the following SQL query for the Neon Postgres database: ```sql SELECT id, shippingcost, customerid FROM public.order SELECT name, email FROM public.customer WHERE id = $1 ``` StepZen reuses SQL queries or merges queries when possible to retrieve data from the Neon database more efficiently. For example, if you request the `customer` field for multiple orders, StepZen only executes the `getCustomer` query once for every recurring value of `customerid`. **Note**: In addition to having StepZen generate the query that is sent to the Neon database, you can also define a raw query in the GraphQL schema. Defining a raw query is useful when you want to query data from multiple tables or when you want to use a more complex query. You can find an example in the `getOrderUsingCustomerid` query in the `postgresql/index.graphql` file. ## Conclusion In this guide, you have learned how to generate a GraphQL API from a Neon database. You have used StepZen, which offers GraphQL-as-a-Service and a CLI to generate GraphQL APIs from data sources such as databases and REST APIs. Using StepZen, you can quickly generate a GraphQL API from a Neon database and use it to query data from the database. You also looked at how StepZen translates queries to the GraphQL API into SQL queries that run on your Neon database. You can find the complete code example [here](https://github.com/stepzen-dev/examples). --- # Source: https://neon.com/llms/guides-sveltekit.txt # Connect a Sveltekit application to Neon > The document guides users on integrating a SvelteKit application with Neon by detailing the steps to configure the database connection, manage environment variables, and execute queries within the SvelteKit framework. ## Source - [Connect a Sveltekit application to Neon HTML](https://neon.com/docs/guides/sveltekit): The original HTML version of this documentation Sveltekit is a modern JavaScript framework that compiles your code to tiny, framework-less vanilla JS. This guide explains how to connect Sveltekit with Neon using a secure server-side request. To create a Neon project and access it from a Sveltekit application: ## Create a Neon project If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings. 1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console. 2. Click **New Project**. 3. Specify your project settings and click **Create Project**. ## Create a Sveltekit project and add dependencies 1. 
1. Create a SvelteKit project using the following commands:

```shell
npx sv create my-app --template minimal --no-add-ons --types ts
cd my-app
```

2. Add project dependencies using one of the following commands:

Tab: node-postgres

```shell
npm install pg dotenv
```

Tab: postgres.js

```shell
npm install postgres dotenv
```

Tab: Neon serverless driver

```shell
npm install @neondatabase/serverless dotenv
```

## Store your Neon credentials

Add a `.env` file to your project directory and add your Neon connection string to it. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

```shell
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

## Configure the Postgres client

There are two parts to connecting a SvelteKit application to Neon. The first is `db.server.ts`, which contains the database configuration. The second is the server-side route where the connection to the database will be used.

### db.server

Create a `db.server.ts` file at the root of your `/src` directory and add the following code snippet to connect to your Neon database:

Tab: node-postgres

```typescript
import 'dotenv/config';
import pg from 'pg';

const connectionString: string = process.env.DATABASE_URL as string;

const pool = new pg.Pool({
  connectionString,
  ssl: true,
});

export { pool };
```

Tab: postgres.js

```typescript
import 'dotenv/config';
import postgres from 'postgres';

const connectionString: string = process.env.DATABASE_URL as string;

const sql = postgres(connectionString, { ssl: 'require' });

export { sql };
```

Tab: Neon serverless driver

```typescript
import 'dotenv/config';
import { neon } from '@neondatabase/serverless';

const connectionString: string = process.env.DATABASE_URL as string;

const sql = neon(connectionString);

export { sql };
```

### route

Create a `+page.server.ts` file in your route directory and import the database configuration:

Tab: node-postgres

```typescript
import { pool } from '../db.server';

export async function load() {
  const client = await pool.connect();
  try {
    const { rows } = await client.query('SELECT version()');
    const { version } = rows[0];
    return {
      version,
    };
  } finally {
    client.release();
  }
}
```

Tab: postgres.js

```typescript
import { sql } from '../db.server';

export async function load() {
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  return {
    version,
  };
}
```

Tab: Neon serverless driver

```typescript
import { sql } from '../db.server';

export async function load() {
  const response = await sql`SELECT version()`;
  const { version } = response[0];
  return {
    version,
  };
}
```

### Page Component

Create a `+page.svelte` file to display the data:

```svelte

<script>
  export let data;
</script>

<h1>Database Version</h1>
<p>{data.version}</p>
```

## Run the app

When you run `npm run dev` you can expect to see the following on [localhost:5173](http://localhost:5173):

```shell
Database Version
PostgreSQL 17.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
```

## Source code

You can find the source code for the application described in this guide on GitHub.

- [Get started with SvelteKit and Neon](https://github.com/neondatabase/examples/tree/main/with-sveltekit)

---

# Source: https://neon.com/llms/guides-symfony.txt

# Connect from Symfony with Doctrine to Neon

> This document guides users on configuring Symfony with Doctrine to connect to a Neon database, detailing the necessary steps and configurations for seamless integration.

## Source

- [Connect from Symfony with Doctrine to Neon HTML](https://neon.com/docs/guides/symfony): The original HTML version of this documentation

Symfony is a free and open-source PHP web application framework. Symfony uses the Doctrine library for database access. Connecting to Neon from Symfony with Doctrine is the same as connecting to a standalone Postgres installation from Symfony with Doctrine. Only the connection details differ. To connect to Neon from Symfony with Doctrine:

## Create a Neon project

If you do not have one already, create a Neon project. Save your connection details including your password. They are required when defining connection settings.

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Configure the connection

In your `.env` file, set the `DATABASE_URL` to the Neon project connection string that you copied in the previous step.

```shell
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?charset=utf8&sslmode=require&channel_binding=require"
```

You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

---

# Source: https://neon.com/llms/guides-tables.txt

# Managing your data and schemas in the Neon Console

> The document outlines how to manage data and schemas within the Neon Console, detailing steps for creating, modifying, and deleting tables, as well as managing indexes and constraints specific to Neon's database environment.

## Source

- [Managing your data and schemas in the Neon Console HTML](https://neon.com/docs/guides/tables): The original HTML version of this documentation

The **Tables** page in the Neon Console offers a dynamic, visual interface for managing data and schemas. Fully interactive, this view lets you add, update, and delete records, filter data, modify columns, drop or truncate tables, export data in both .json and .csv formats, and manage schemas, tables, views, and enums.

**Note**: The **Tables** page is powered by a Drizzle Studio integration. For tracking updates, see [Tables page enhancements and updates](https://neon.com/docs/guides/tables#tables-page-enhancements-and-updates).

## Edit records

Edit individual entries directly within the table interface. Click on a cell to modify its contents. You don't have to press `Enter` (though you can). Just move your cursor to the next cell you want to modify. Click `Save x changes` when you're done.

## Add records

Add new records to your tables using the **Add record** button. A couple of things to note:

- You need to hit `Enter` for your input to register.
When editing existing fields, you don't have to do this. But for new fields, if you tab to the next cell, you'll lose your input.
- You can leave `DEFAULT` fields untouched and the cell will inherit the right value based on your schema definition. For example, defaults for boolean fields are automatically applied when you click `Save changes`.

## Toggle columns

You can simplify your view by hiding (or showing) individual columns in the table. You're not modifying content here; deselect a checked column to hide it, and re-select the column to show it again. Your selections are saved as a persistent filter.

## Add filters

Filters let you store simplified views of your data that you can come back to later. You can use dropdown-filtering to select columns, conditions, and input text for the filter. Each new filter is added as a **View** under your list of Tables.

## Delete records

Use the checkboxes to mark any unwanted records for deletion, or use the select-all checkbox for bulk deletion. Click `Delete x records` to complete the process.

## Export data

You can also use the checkboxes to mark records for export. Select the records you want to include in your export, then choose `Export selected...` from the export dropdown. Or just choose `Export all...` to download the entire contents of the table. You can export to either JSON or CSV.

## Manage schemas

In addition to managing data, you can manage your database schema directly from the **Tables** page. Schema management options include:

- Creating, altering, and dropping schemas
- Creating and altering tables
- Creating and altering views
- Creating enums
- Refreshing the database schema

## Create Postgres roles

You can create Postgres roles from the **Tables** page. Define a role name, select from a list of commonly granted privileges, set a password, and click **Review and Create**.

> Neon role and privilege limitations apply. See [Manage roles](https://neon.com/docs/manage/roles).

## Add privileges

For more advanced privilege assignments, click the **Add privilege** link when creating a role to build your `GRANT` statements.

> Neon role and privilege limitations apply. See [Manage roles](https://neon.com/docs/manage/roles).

## Define RLS policies

Create Postgres RLS policies using the templates provided. Templates like "based on user_id" restrict each user to their own rows. When using the Data API, access is matched to the `auth.user_id()` function.

### Database studio view

The **Database studio** view makes it easy to explore your database objects—including schemas, tables, views, roles, and policies—all in one place. To open the view, select **Database studio** from the **Tables** page: Use the top navbar to navigate:

## Tables page updates

The **Tables** page in the Neon Console is powered by a Drizzle Studio integration. You can check the Drizzle Studio integration version in your browser by inspecting the Tables page. For example, in Chrome, right-click, select **Inspect**, and go to the **Console** tab to view the current `Tables version`. You can cross-reference this version with the [Neon Drizzle Studio Integration Changelog](https://github.com/neondatabase/neon-drizzle-studio-changelog/blob/main/CHANGELOG.md) to track updates.

## Reporting errors

If you see an error message on the **Tables** page, this could be due to a DNS resolution issue. Please refer to [DNS resolution issues](https://neon.com/docs/connect/connection-errors#dns-resolution-issues) for workarounds.
If it's not a DNS resolution issue, other troubleshooting steps you can try include:

- **Refreshing the page** — This can resolve temporary glitches.
- **Clearing browser cache** — Cached files might cause issues, so clearing the cache could help.
- **Disabling browser extensions** — Extensions may interfere with the page's functionality.
- **Using a different browser or device** — Check if the issue occurs on another browser or device.
- **Trying incognito mode** — Using an incognito window can help bypass issues related to cookies or extensions.

If the issue persists, please follow these steps to report the error:

1. [Open a support ticket](https://console.neon.tech/app/projects?modal=support) and provide a detailed description of what you were doing when the error occurred. Please include any screen captures or files that will help us reproduce the issue. We'll work with our partners at Drizzle to investigate and resolve the issue.
2. If you're on the Free plan, you can report the issue on [Discord](https://discord.gg/92vNTzKDGp).

---

# Source: https://neon.com/llms/guides-time-travel-assist.txt

# Time Travel

> The "Time Travel" documentation for Neon details how users can access historical data states by utilizing the time travel feature, enabling precise data retrieval from specific past points.

## Source

- [Time Travel HTML](https://neon.com/docs/guides/time-travel-assist): The original HTML version of this documentation

To help review your data's history, Time Travel lets you connect to any selected point in time within your restore window and then run queries against that connection. This capability is part of Neon's instant restore feature, which maintains a history of changes through Write-Ahead Log (WAL) records.

You can use Time Travel from two places in the Neon Console, and from the Neon CLI:

- **SQL Editor** — Time Travel is built into the SQL Editor, letting you switch between queries of your current data and previous iterations of your data in the same view.
- **Restore** — Time Travel Assist is also built into the instant restore flow, where it can help you make sure you've targeted the correct restore point before you restore a branch.
- **Neon CLI** — Use the Neon CLI to quickly establish point-in-time connections for automated scripts or command-line-based data analysis.

## How Time Travel works

Time Travel leverages Neon's instant branching capability to create a temporary branch and compute at the selected point in time, which are automatically removed once you are done querying against this point-in-time connection. The computes are ephemeral: they are not listed on the **Branches** page or in a CLI or API list branches request. However, you can see the history of operations related to the creation and deletion of branches and ephemeral computes on the **Operations** page:

- start_compute
- create_branch
- delete_timeline
- suspend_compute

### How long do ephemeral endpoints remain active

The ephemeral endpoints are created with a 0.50 CU compute size, which provides 0.50 vCPU and 2 GB of RAM. An ephemeral compute remains active for as long as you keep running queries against it. After 30 seconds of inactivity, the timeline is deleted and the endpoint is removed.

### Restore window

You are only able to run Time Travel queries that fall within your restore window; you cannot select a time outside it. To change your restore window, see [Configure restore window](https://neon.com/docs/manage/projects#configure-restore-window).
### Data integrity

Time Travel only allows non-destructive read-only queries. You cannot alter historical data in any way. If you try to run any query that could alter historical data, you will get an error message.

### Time Travel with the SQL Editor

Time Travel in the SQL Editor offers a non-destructive way to explore your database's historical data through read-only queries. By toggling Time Travel in the editor, you switch from querying your current data to querying against a selected point within your restore window. You can use this feature to help with scenarios like:

- Investigating anomalies
- Assessing the impact of new features
- Troubleshooting
- Compliance auditing

Here's an example of a completed Time Travel query.

### Time Travel Assist with instant restore

Time Travel Assist is also available from the **Restore** page, as part of the [Instant restore](https://neon.com/docs/guides/branch-restore) feature. Before completing a restore operation, it's a good idea to use Time Travel Assist to verify that you've targeted the correct restore point. An SQL editor is built into the **Restore** page for this purpose. When you make your branch and timestamp selection to restore a branch, this selection can also be used as the point-in-time connection to query against. Here is an example of a completed query:

## How to use Time Travel

Here is how to use Time Travel from both the **SQL Editor** and from the **Restore** page:

Tab: SQL Editor

1. In the Neon Console, open the **SQL Editor**.
1. Use the **Time Travel** (🕣) icon to enable querying against an earlier point in time.
1. Use the Date & Time selector to choose a point within your restore window.
1. Write your read-only query in the editor, then click **Run**. You don't have to include time parameters in the query; the query is automatically targeted to your selected timestamp.

Tab: Instant restore

1. In the Neon Console, go to **Restore**.
1. Select the branch you want to query against, then select a timestamp, the same as you would to [Restore a branch](https://neon.com/docs/guides/time-travel-assist#restore-a-branch-to-an-earlier-state). This makes the selection for Time Travel Assist. Notice the updated fields above the SQL Editor show the **branch** and **timestamp** you just selected.
1. Check that you have the right database selected to run your query against. Use the database selector under the SQL Editor to switch to a different database for querying against.
1. Write your read-only query in the editor, then click **Query at timestamp** to run the query. You don't have to include time parameters in the query; the query is automatically targeted to your selected timestamp. If your query is successful, you will see a table of results under the editor.

Tab: CLI

Using the Neon CLI, you can establish a connection to a specific point in your branch's history. To get the connection string, use the following command:

```bash
neon connection-string <branch>@<timestamp|LSN>
```

In the `branch` field, specify the name of the branch you want to connect to. Omit the `branch` field to connect to your default branch. Replace the `timestamp|LSN` field with the specific timestamp (in RFC 3339 format) or Log Sequence Number for the point in time you want to access.
Example:

```bash
neon connection-string main@2024-04-21T00:00:00Z
postgresql://alex:AbC123dEf@br-broad-mouse-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require&options=neon_timestamp%3A2024-04-21T00%3A00%3A00Z
```

### Connect directly with psql

Append `--psql` to the command for a one-step psql connection. For example, to connect to `main` at its state on Jan 1st, 2024:

```bash
neon connection-string main@2024-01-01T00:00:00Z --psql
```

Here is the same command using aliases:

```bash
neon cs main@2024-01-01T00:00:00Z --psql
```

### Query at specific LSNs

For more granular control, you can also establish the connection using a specific LSN. Example:

```bash
neon cs main@0/234235
```

This retrieves the connection string for querying the 'main' branch at a specific Log Sequence Number, providing access to the exact state of the database at that point in the transaction log.

### Include project ID for multiple projects

If you are working with multiple Neon projects, specify the project ID to target the correct project:

```bash
neon connection-string <branch>@<timestamp|LSN> --project-id <project-id>
```

Example:

```bash
neon cs main@2024-01-01T00:00:00Z --project-id noisy-pond-12345678
```

Alternatively, you can set a durable project context that remains active until you remove or change the context:

```bash
neon set-context --project-id <project-id>
```

Read more about getting connection strings from the CLI in [Neon CLI commands — connection-string](https://neon.com/docs/reference/cli-connection-string), and more about setting contexts in [CLI - set-context](https://neon.com/docs/reference/cli-set-context).

## Billing considerations

The ephemeral endpoints used to run your Time Travel queries do contribute to your consumption usage totals for the billing period, like any other active endpoint that consumes resources. A couple of details to note:

- The endpoints are short-lived. They are suspended 30 seconds after you stop querying.
- Ephemeral endpoints are created with a 0.50 CU compute size, which provides 0.50 vCPU and 2 GB of RAM. This is Neon's second smallest compute size. For more about compute sizes in Neon, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).

---

# Source: https://neon.com/llms/guides-time-travel-tutorial.txt

# Time Travel tutorial

> The "Time Travel tutorial" document guides Neon users through the process of using time travel features to query historical data states within their databases.

## Source

- [Time Travel tutorial HTML](https://neon.com/docs/guides/time-travel-tutorial): The original HTML version of this documentation

This guide demonstrates how you could use Time Travel to address a common development scenario: debugging issues following a CI/CD deployment to production. In this scenario, your team has recently introduced a streamlined checkout process, managed by a `new_checkout_process` feature flag. Soon after this flag was enabled, customer support started receiving complaints related to the new feature. As a developer, you're tasked with investigating the issues to confirm whether they are directly linked to the feature's activation.

## Before You Start

To follow this tutorial, you'll need:

- A Neon account. [Sign up here](https://neon.com/docs/get-started/signing-up).
- A [restore window](https://neon.com/docs/manage/projects#configure-restore-window) that covers the timeframe of interest, allowing for effective use of Time Travel.
## Preparing Your Database

To simulate this scenario, create a `feature_flags` table used for controlling new feature availability.

1. **Create `project_db` Database:** In the **Neon Console**, create a new database named `project_db`.
2. **Initialize `feature_flags` Table:** Execute the following in the **SQL Editor**, with `project_db` selected as the database:

```sql
CREATE TABLE feature_flags (
    feature_name TEXT PRIMARY KEY,
    enabled BOOLEAN NOT NULL
);
```

3. **Insert Sample Data:** Populate the table with an initial feature flag:

```sql
INSERT INTO feature_flags (feature_name, enabled) VALUES ('new_checkout_process', FALSE);
```

This setup reflects a typical development stage: the feature is integrated and deployment-ready but remains inactive, awaiting activation.

## Simulating Feature Flag Activation

Now, we'll simulate the process of enabling this feature flag to release the feature.

### Enable the Feature Flag

Execute the following SQL command in the **SQL Editor** to simulate activating the feature by changing the feature flag's status to `TRUE`.

```sql
UPDATE feature_flags SET enabled = TRUE WHERE feature_name = 'new_checkout_process';
```

This action mirrors enabling a new feature in your production environment, typically managed as part of your CI/CD pipeline.

## Determine exactly when the feature was enabled

Since user complaints started coming in right after the feature was enabled, our first debug step is to confirm the exact moment the `new_checkout_process` feature flag was activated. Assume we've checked the deployment logs or CI/CD pipeline history and found the activation timestamp to be `2025-08-06 at 10:52 PM IST`. For this tutorial, locate the timestamp of the `UPDATE` operation in the **History** tab of the **SQL Editor**:

**Note**: Timestamps in the Neon Console are shown in your local timezone. The time in this screenshot converts from `2025-08-06 at 10:52:00 PM IST` to `2025-08-06 at 5:22:00 PM UTC`.

## Verifying Feature Flag Pre-Activation Status

Let's confirm that the feature was indeed disabled just before the feature flag's activation.

1. Enable the Time Travel toggle in the **SQL Editor**.
1. Select a time just before the identified activation timestamp. For our purposes, we'll select `2025-08-06 at 10:51:00 PM IST`, which is one minute before our activation time.

```sql
SELECT * FROM feature_flags WHERE feature_name = 'new_checkout_process';
```

We'll see the feature flag shows as `f` for false, as expected.

## Analyzing Post-Activation State

With the pre-activation state confirmed, now check the feature flag's status immediately after activation.

### Adjust Time Selector to Post-Activation

Move to a time just after the feature's activation. For example, one minute after the identified activation timestamp, so `2025-08-06 at 10:53:00 PM IST`. Re-execute the query.

```sql
SELECT * FROM feature_flags WHERE feature_name = 'new_checkout_process';
```

Now, we see the `new_checkout_process` feature flag is `t` for true, confirming that enabling the feature caused the reported issues. With this confirmation we can move on to our follow-up actions: fix the problem, turn off the feature flag, update stakeholders, or engage in a feedback loop with users to refine the feature based on real-world usage.
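If you'd rather script this kind of check, the same before/after comparison can be run through the Neon CLI's point-in-time connections described in the Time Travel guide above (a sketch based on this tutorial's setup; the branch name `main`, the `--database-name` option, and the IST-to-UTC conversions are assumptions to adapt to your project):

```bash
# 10:51 PM IST on 2025-08-06 is 17:21 UTC; connect one minute before activation
neon cs main@2025-08-06T17:21:00Z --database-name project_db --psql

# ...then connect one minute after activation (10:53 PM IST = 17:23 UTC)
neon cs main@2025-08-06T17:23:00Z --database-name project_db --psql
```

Running the same `SELECT * FROM feature_flags WHERE feature_name = 'new_checkout_process';` query in each psql session should show `f` before activation and `t` after, matching the results above.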
---

# Source: https://neon.com/llms/guides-trigger-serverless-functions.txt

# Trigger serverless functions

> The document outlines how to configure and use triggers in Neon to execute serverless functions, detailing the steps for integrating serverless workflows with database events.

## Source

- [Trigger serverless functions HTML](https://neon.com/docs/guides/trigger-serverless-functions): The original HTML version of this documentation

Combining your serverless Neon database with [Inngest](https://www.inngest.com/?utm_source=neon&utm_medium=trigger-serverless-functions-guide) enables you to **trigger serverless functions** running on Vercel, AWS, and Cloudflare Workers **based on database changes.** By enabling your serverless functions to react to database changes, you open the door to many use cases. From onboarding to ETL and AI workflows, the possibilities are endless. This guide describes setting up a Neon database, configuring the Inngest integration, and connecting your Serverless functions to your Neon database with Inngest. It covers:

- Creating a Neon project and enabling [Logical Replication](https://neon.com/docs/guides/logical-replication-guide).
- Configuring the Inngest integration on your Neon database.
- Configuring your Vercel, AWS, or Cloudflare functions to react to your Neon database changes using Inngest.

## Prerequisites

- A Neon account. If you do not have one, see [Sign up](https://neon.com/docs/get-started/signing-up) for instructions.
- An Inngest account. You can create a free Inngest account by [signing up](https://app.inngest.com/sign-up?utm_source=neon&utm_medium=trigger-serverless-functions-guide).

## Create a Neon project

If you do not have one already, create a Neon project:

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.

## Create a table in Neon

To create a table, navigate to the **SQL Editor** in the [Neon Console](https://console.neon.tech/): In the SQL Editor, run the following queries to create a `users` table and insert some data:

```sql
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

INSERT INTO users (name, email)
VALUES
  ('Alice', 'alice@example.com'),
  ('Bob', 'bob@example.com'),
  ('Charlie', 'charlie@example.com'),
  ('Dave', 'dave@example.com'),
  ('Eve', 'eve@example.com');
```

## Enabling Logical Replication on your database

The Inngest Integration relies on Neon's Logical Replication feature to get notified upon database changes. Navigate to your Neon Project using the Neon Console and open the **Settings** > **Logical Replication** page. From here, follow the instructions to enable Logical Replication:

## Configuring the Inngest integration

Your Neon database is now ready to work with Inngest. To configure the Inngest Neon Integration, navigate to the Inngest Platform, open the [Integrations page](https://app.inngest.com/settings/integrations?utm_source=neon&utm_medium=trigger-serverless-functions-guide), and follow the instructions of the [Neon Integration installation wizard](https://app.inngest.com/settings/integrations/neon/connect?utm_source=neon&utm_medium=trigger-serverless-functions-guide): The Inngest Integration requires Postgres admin credentials to complete its setup. _These credentials are not stored and are only used during the installation process_.
You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**.

## Triggering Serverless functions from database changes

Any changes to your Neon database are now dispatched to your Inngest account. To enable your Serverless functions to react to database changes, we will:

- Install the Inngest client to your Serverless project
- Expose a serverless endpoint enabling Inngest to discover your Serverless functions
- Configure your Serverless application environment variables
- Connect a Serverless function to any change performed to the `users` table.

### 1. Configuring the Inngest client

First, install the Inngest client:

```bash
npm i inngest
```

Then, create an `inngest/client.ts` (_or `inngest/client.js`_) file as follows:

```typescript
// inngest/client.ts
import { Inngest } from 'inngest';

export const inngest = new Inngest({ id: 'neon-inngest-project' });
```

### 2. Listen for new `users` rows

Any change performed on our Neon database will trigger an [Inngest Event](https://www.inngest.com/docs/features/events-triggers?utm_source=neon&utm_medium=trigger-serverless-functions-guide) as follows:

```json
{
  "name": "db/users.inserted",
  "data": {
    "new": {
      "id": { "data": 2, "encoding": "i" },
      "name": { "data": "Charly", "encoding": "t" },
      "email": { "data": "charly@inngest.com", "encoding": "t" }
    },
    "table": "users",
    "txn_commit_time": "2024-09-24T14:41:19.75149Z",
    "txn_id": 36530520
  },
  "ts": 1727146545006
}
```

Inngest enables you to create [Inngest Functions](https://www.inngest.com/docs/features/inngest-functions?utm_source=neon&utm_medium=trigger-serverless-functions-guide) that react to Inngest events (here, database changes). Let's create an Inngest Function listening for `"db/users.inserted"` events:

```typescript
// inngest/functions/new-user.ts
import { inngest } from '../client';

export const newUser = inngest.createFunction(
  { id: "new-user" },
  { event: "db/users.inserted" },
  async ({ event, step }) => {
    const user = event.data.new;

    await step.run("send-welcome-email", async () => {
      // Send welcome email (sendEmail is your own helper, not part of Inngest)
      await sendEmail({
        template: "welcome",
        to: user.email,
      });
    });

    await step.sleep("wait-before-tips", "3d");

    await step.run("send-new-user-tips-email", async () => {
      // Follow up with some helpful tips
      await sendEmail({
        template: "new-user-tips",
        to: user.email,
      });
    });
  }
);
```
### 3. Exposing your Serverless Functions to Inngest

To allow Inngest to run your Inngest Functions, add the following Serverless Function, which serves as a router:

Tab: Vercel

```typescript
// src/app/api/inngest/route.ts
import { serve } from 'inngest/next';
import { inngest } from '@lib/inngest/client';
import { newUser } from '@lib/inngest/functions/new-user'; // Your own functions

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [newUser],
});
```

Tab: AWS Lambda

```typescript
import { serve } from 'inngest/lambda';
import { inngest } from './client';
import { newUser } from './functions/new-user'; // Your own function

export const handler = serve({
  client: inngest,
  functions: [newUser],
});
```

Tab: Cloudflare Workers

```js
// /functions/api/inngest.js
import { serve } from 'inngest/cloudflare';
import { inngest } from './client';
import { newUser } from './functions/new-user';

export default {
  fetch: serve({
    client: inngest,
    functions: [newUser],
  }),
};
```

**Note**: You can find more information about serving Inngest Functions in [Inngest's documentation](https://www.inngest.com/docs/reference/serve?utm_source=neon&utm_medium=trigger-serverless-functions-guide#serve-client-functions-options).

### 4. Configuring your Serverless application

We can now configure your Serverless application to sync with the Inngest Platform:

- **Vercel:** Configure the [Inngest Vercel Integration](https://www.inngest.com/docs/deploy/vercel?utm_source=neon&utm_medium=trigger-serverless-functions-guide).
- **AWS Lambda:** Configure [Lambda function URLs](https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html) and [sync your serve Lambda with Inngest](https://www.inngest.com/docs/apps/cloud?utm_source=neon&utm_medium=trigger-serverless-functions-guide#sync-a-new-app-in-inngest-cloud).
- **Cloudflare Workers:** [Add the proper environment variables](https://www.inngest.com/docs/deploy/cloudflare?utm_source=neon&utm_medium=trigger-serverless-functions-guide) to your Cloudflare Pages project and [sync with Inngest](https://www.inngest.com/docs/apps/cloud?utm_source=neon&utm_medium=trigger-serverless-functions-guide#sync-a-new-app-in-inngest-cloud).

### 5. Testing our Serverless function

We are now all set! Go to the **Tables** page in the Neon Console and add a new record to the `users` table: You should see a new run of the `new-user` function appear on the [Inngest Platform](https://app.inngest.com/?utm_source=neon&utm_medium=trigger-serverless-functions-guide):

## Going further

Your Serverless functions can now react to your Neon database changes. In addition to being good for system design, Inngest has some special features that work great with database triggers:

- **[Fan-out](https://www.inngest.com/docs/guides/fan-out-jobs?utm_source=neon&utm_medium=trigger-serverless-functions-guide)**: Lets **one database event start multiple functions** at the same time. For example, when a new user is added, it could send a welcome email and set up a free trial, all at once.
- **[Batching](https://www.inngest.com/docs/guides/batching?utm_source=neon&utm_medium=trigger-serverless-functions-guide)**: **Groups many database changes together** to handle them more efficiently. It's useful when you need to update lots of things at once, like when working with online stores.
- **[Flow control](https://www.inngest.com/docs/guides/flow-control?utm_source=neon&utm_medium=trigger-serverless-functions-guide)**: Helps manage how often functions run.
It can slow things down to **avoid overloading systems, or wait a bit to avoid doing unnecessary work**. This is helpful when working with other services that have limits on how often you can use them.

---

# Source: https://neon.com/llms/guides-typeorm.txt

# Connect from TypeORM to Neon

> The document outlines the steps required to establish a connection between TypeORM and Neon, detailing configuration settings and code examples necessary for integrating the two platforms effectively.

## Source

- [Connect from TypeORM to Neon HTML](https://neon.com/docs/guides/typeorm): The original HTML version of this documentation

TypeORM is an open-source ORM that lets you manage and interact with your database. This guide covers the following topics:

- [Connect to Neon from TypeORM](https://neon.com/docs/guides/typeorm#connect-to-neon-from-typeorm)
- [Use connection pooling with TypeORM](https://neon.com/docs/guides/typeorm#use-connection-pooling-with-typeorm)
- [Connection timeouts](https://neon.com/docs/guides/typeorm#connection-timeouts)

## Connect to Neon from TypeORM

To establish a basic connection from TypeORM to Neon, perform the following steps:

1. Retrieve your Neon connection string. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Select a branch, a user, and the database you want to connect to. A connection string is constructed for you. The connection string includes the user name, password, hostname, and database name.

2. Update TypeORM's DataSource initialization in your application to the following:

```typescript {4,5,6}
import { DataSource } from 'typeorm';

export const AppDataSource = new DataSource({
  type: 'postgres',
  url: process.env.DATABASE_URL,
  ssl: true,
  entities: [
    /*list of entities*/
  ],
});
```

3. Add a `DATABASE_URL` variable to your `.env` file and set it to the Neon connection string that you copied in the previous step. We also recommend adding `?sslmode=require&channel_binding=require` to the end of the connection string to ensure a [secure connection](https://neon.com/docs/connect/connect-securely). Your setting will appear similar to the following:

```text
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require"
```

**Tip**: TypeORM leverages a [node-postgres](https://node-postgres.com) Pool instance to connect to your Postgres database. Installing [pg-native](https://npmjs.com/package/pg-native) and setting the `NODE_PG_FORCE_NATIVE` environment variable to `true` [switches the `pg` driver to `pg-native`](https://github.com/brianc/node-postgres/blob/master/packages/pg/lib/index.js#L31-L34), which, according to some users, produces noticeably faster response times.

## Use connection pooling with TypeORM

Serverless functions can require a large number of database connections as demand increases. If you use serverless functions in your application, we recommend that you use a pooled Neon connection string, as shown:

```ini
# Pooled Neon connection string
DATABASE_URL="postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"
```

A pooled Neon connection string adds `-pooler` to the endpoint ID, which tells Neon to use a pooled connection. You can add `-pooler` to your connection string manually or copy a pooled connection string from the **Connect to your database** modal, which you can access by clicking **Connect** on your **Project Dashboard**.
Enable the **Connection pooling** toggle to add the `-pooler` suffix.

## Connection timeouts

A connection timeout that occurs when connecting from TypeORM to Neon causes an error similar to the following:

```text
Error: P1001: Can't reach database server at `ep-white-thunder-826300.us-east-2.aws.neon.tech`:`5432`
Please make sure your database server is running at `ep-white-thunder-826300.us-east-2.aws.neon.tech`:`5432`.
```

This error most likely means that the TypeORM query timed out before the Neon compute was activated. A Neon compute has two main states: _Active_ and _Idle_. Active means that the compute is currently running. If there is no query activity for 5 minutes, Neon places a compute into an idle state by default. When you connect to an idle compute from TypeORM, Neon automatically activates it. Activation typically happens within a few seconds but added latency can result in a connection timeout. To address this issue, you can adjust your Neon connection string by adding a `connect_timeout` parameter. This parameter defines the maximum number of seconds to wait for a new connection to be opened. The default value is 5 seconds. A higher setting may provide the time required to avoid connection timeouts. For example:

```text
DATABASE_URL="postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require&channel_binding=require&connect_timeout=10"
```

**Note**: A `connect_timeout` setting of 0 means no timeout.

---

# Source: https://neon.com/llms/guides-uploadcare.txt

# Media storage with Uploadcare

> The document outlines how to integrate Uploadcare with Neon for efficient media storage, detailing configuration steps and API usage to manage and store media files within Neon's infrastructure.

## Source

- [Media storage with Uploadcare HTML](https://neon.com/docs/guides/uploadcare): The original HTML version of this documentation

[Uploadcare](https://uploadcare.com/) provides a cloud platform designed to simplify file uploading, processing, storage, and delivery via a fast CDN. It offers tools that manage and optimize media like images, videos, and documents for your applications. This guide demonstrates how to integrate Uploadcare with Neon by storing file metadata in your Neon database while using Uploadcare for file uploads and storage.

## Setup steps

## Create a Neon project

1. Navigate to [pg.new](https://pg.new) to create a new Neon project.
2. Copy the connection string by clicking the **Connect** button on your **Project Dashboard**. For more information, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## Create an Uploadcare account and project

1. Sign up for an account at [Uploadcare.com](https://uploadcare.com/).
2. Create a new project within your Uploadcare dashboard.
3. Navigate to your project's **API Keys** section.
4. Note your **Public Key** and **Secret Key**. They are needed to interact with the Uploadcare API and widgets.

## Create a table in Neon for file metadata

We need to create a table in Neon to store metadata about the files uploaded to Uploadcare. This table will include fields for the file's unique identifier, URL, upload timestamp, and any other relevant metadata you want to track.

1. You can run the create table statement using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) that is connected to your Neon database.
Here is an example SQL statement to create a simple table for file metadata which includes a file ID, URL, user ID, and upload timestamp:

```sql
CREATE TABLE IF NOT EXISTS uploadcare_files (
    id SERIAL PRIMARY KEY,
    file_id TEXT NOT NULL UNIQUE,
    file_url TEXT NOT NULL,
    user_id TEXT NOT NULL,
    upload_timestamp TIMESTAMPTZ DEFAULT NOW()
);
```

2. Run the SQL statement. You can add other relevant columns (file size, content type, etc.) depending on your application needs.

**Note** Securing metadata with RLS: If you use [Neon's Row Level Security (RLS)](https://neon.com/blog/introducing-neon-authorize), remember to apply appropriate access policies to the `uploadcare_files` table. This controls who can view or modify the object references stored in Neon based on your RLS rules. Note that these policies apply _only_ to the metadata stored in Neon. Access to the actual files is managed by Uploadcare's access controls and settings.

## Upload files to Uploadcare and store metadata in Neon

You can integrate file uploads using any of Uploadcare's [many options](https://uploadcare.com/docs/integrations/), which include UI widgets and SDKs tailored for specific languages and frameworks. For the examples in this guide, we will use the Uploadcare API directly. Feel free to choose the integration method that best fits your project; the fundamental approach of storing metadata in Neon remains the same.

Tab: JavaScript

For this example, we'll build a simple Node.js server using [Hono](https://hono.dev/) to handle file uploads. It will use the [`@uploadcare/upload-client`](https://www.npmjs.com/package/@uploadcare/upload-client) package to upload files to Uploadcare and [`@neondatabase/serverless`](https://www.npmjs.com/package/@neondatabase/serverless) package to save metadata into your Neon database. First, install the necessary dependencies:

```bash
npm install @uploadcare/upload-client @neondatabase/serverless @hono/node-server hono
```

Create a `.env` file in your project root and add your Uploadcare and Neon connection details which you obtained in the previous steps:

```env
UPLOADCARE_PUBLIC_KEY=your_uploadcare_public_key
DATABASE_URL=your_neon_database_connection_string
```

The following code snippet demonstrates this workflow:

```javascript
import { serve } from '@hono/node-server';
import { Hono } from 'hono';
import { uploadFile } from '@uploadcare/upload-client';
import { neon } from '@neondatabase/serverless';
import 'dotenv/config';

const sql = neon(process.env.DATABASE_URL);
const app = new Hono();

// Replace this with your actual user authentication logic, by validating JWTs/Headers, etc.
const authMiddleware = async (c, next) => {
  c.set('userId', 'user_123'); // Example: Get user ID after validation
  await next();
};

app.post('/upload', authMiddleware, async (c) => {
  try {
    // 1. Get User ID and File Data
    const userId = c.get('userId');
    const formData = await c.req.formData();
    const file = formData.get('file');
    const fileName = formData.get('fileName') || file.name;
    const buffer = Buffer.from(await file.arrayBuffer());

    // 2. Upload to Uploadcare
    const result = await uploadFile(buffer, {
      publicKey: process.env.UPLOADCARE_PUBLIC_KEY,
      fileName: fileName,
      contentType: file.type,
    });
    // 3. Save Metadata to Neon
    // Uses file_id (Uploadcare UUID), file_url (CDN URL), and user_id
    await sql`
      INSERT INTO uploadcare_files (file_id, file_url, user_id)
      VALUES (${result.uuid}, ${result.cdnUrl}, ${userId})
    `;

    console.log(`Uploaded ${result.uuid} for user ${userId} to ${result.cdnUrl}`);
    return c.json({ success: true, fileUrl: result.cdnUrl });
  } catch (error) {
    console.error('Upload Error:', error);
    return c.json({ success: false, error: 'Upload failed' }, 500);
  }
});

const port = 3000;
serve({ fetch: app.fetch, port }, (info) => {
  console.log(`Server running at http://localhost:${info.port}`);
});
```

**Explanation**

1. **Setup:** It initializes the Neon database client and the Hono web framework. It relies on environment variables (`DATABASE_URL`, `UPLOADCARE_PUBLIC_KEY`) being set, via a `.env` file.
2. **Authentication:** A placeholder `authMiddleware` is included. **Crucially**, this needs to be replaced with real authentication logic. It currently just sets a static `userId` for demonstration.
3. **Upload Endpoint (`/upload`):**
   - It expects a `POST` request with `multipart/form-data`.
   - It retrieves the user ID set by the middleware.
   - It extracts the `file` data and `fileName` from the form data.
   - It uploads the file content directly to Uploadcare.
   - Upon successful upload, Uploadcare returns details including a unique `uuid` and a `cdnUrl`.
   - It executes an `INSERT` statement using the Neon serverless driver to save the `uuid`, `cdnUrl`, and the `userId` into a `uploadcare_files` table in your database.
   - It sends a JSON response back to the client containing the `fileUrl` from Uploadcare.

Tab: Python

For this example, we'll build a simple [Flask](https://flask.palletsprojects.com/en/stable/) server to handle file uploads. It will use the [`pyuploadcare`](https://pypi.org/project/pyuploadcare/) package to upload files to Uploadcare and [`psycopg2`](https://pypi.org/project/psycopg2/) to save metadata into your Neon database. First, install the necessary dependencies:

```bash
pip install Flask pyuploadcare psycopg2-binary python-dotenv
```

Create a `.env` file in your project root and add your Uploadcare and Neon connection details which you obtained in the previous steps:

```env
UPLOADCARE_PUBLIC_KEY=your_uploadcare_public_key
UPLOADCARE_SECRET_KEY=your_uploadcare_secret_key
DATABASE_URL=your_neon_database_connection_string
```

The following code snippet demonstrates this workflow:

```python
import os

import psycopg2
from dotenv import load_dotenv
from flask import Flask, jsonify, request
from pyuploadcare import Uploadcare

load_dotenv()


# Use a global PostgreSQL connection instead of creating a new one for each request in production
def get_database():
    return psycopg2.connect(os.getenv("DATABASE_URL"))


# Replace this with your actual user authentication logic, by validating JWTs/Headers, etc.
def get_authenticated_user_id(request):
    return "user_123"  # Example: Get user ID after validation


uploadcare = Uploadcare(
    public_key=os.getenv("UPLOADCARE_PUBLIC_KEY"),
    secret_key=os.getenv("UPLOADCARE_SECRET_KEY"),
)

app = Flask(__name__)


@app.route("/upload", methods=["POST"])
def upload_file():
    try:
        # 1. Get User ID and File from the request
        user_id = get_authenticated_user_id(request)
        file = request.files["file"]
        if file:
            # 2. Upload the file to Uploadcare
            response = uploadcare.upload(file)
            file_url = response.cdn_url
            # 3. Save Metadata to Neon
            # Uses file_id (Uploadcare UUID), file_url (CDN URL), and user_id
            conn = get_database()
            cursor = conn.cursor()
            cursor.execute(
                "INSERT INTO uploadcare_files (file_id, file_url, user_id) VALUES (%s, %s, %s)",
                (response.uuid, file_url, user_id),
            )
            conn.commit()
            cursor.close()
            conn.close()
            return jsonify({"success": True, "fileUrl": response.cdn_url})
        else:
            return jsonify({"success": False, "error": "No file provided"})
    except Exception as e:
        print(f"Upload Error: {e}")
        return jsonify({"success": False, "error": "File upload failed"}), 500


if __name__ == "__main__":
    app.run(port=3000, debug=True)
```

**Explanation**

1. **Setup:** Initializes the Flask web framework, Uploadcare client, and the PostgreSQL client (`psycopg2`) using environment variables.
2. **Authentication:** A placeholder `get_authenticated_user_id` function is included. **Replace this with real authentication logic.**
3. **Upload Endpoint (`/upload`):**
   - It expects a `POST` request with `multipart/form-data`.
   - It retrieves the user ID set by the authentication function.
   - It extracts the `file` data from the form data.
   - It uploads the file content directly to Uploadcare.
   - Upon successful upload, Uploadcare returns details including a unique `uuid` and a `cdnUrl`.
   - It executes an `INSERT` statement using `psycopg2` to save the `uuid`, `cdnUrl`, and the `userId` into a `uploadcare_files` table in your database.
   - It sends a JSON response back to the client containing the `fileUrl` from Uploadcare.
4. In production, you should use a global PostgreSQL connection instead of creating a new one for each request. This is important for performance and resource management.

## Testing the upload endpoint

Once your server (Node.js or Python example) is running, you can test the `/upload` endpoint to ensure files are correctly sent to Uploadcare and their metadata is stored in Neon. You'll need to send a `POST` request with `multipart/form-data` containing a field named `file`. Open your terminal and run a command similar to this, replacing `/path/to/your/image.jpg` with the actual path to a file you want to upload:

```bash
curl -X POST http://localhost:3000/upload \
  -F "file=@/path/to/your/image.jpg" \
  -F "fileName=my-test-image.jpg"
```

- `-X POST`: Specifies the HTTP method.
- `http://localhost:3000/upload`: The URL of your running server's endpoint.
- `-F "file=@/path/to/your/image.jpg"`: Specifies a form field named `file`. The `@` symbol tells cURL to read the content from the specified file path.
- `-F "fileName=my-test-image.jpg"`: Sends an additional form field `fileName`.

**Expected outcome:**

- You should receive a JSON response similar to:

```json
{
  "success": true,
  "fileUrl": "https://ucarecdn.com/xxxxxx-xxxxxx-xxxxx/"
}
```

You can now integrate calls to this `/upload` endpoint from various parts of your application (e.g., web clients, mobile apps, backend services) to handle file uploads.

## Accessing file metadata and files

Storing metadata in Neon allows your application to easily retrieve references to the files uploaded to Uploadcare. Query the `uploadcare_files` table from your application's backend when needed.
**Example SQL query:** Retrieve files for user 'user_123':

```sql
SELECT
    id,               -- Your database primary key
    file_id,          -- Uploadcare UUID
    file_url,         -- Uploadcare CDN URL
    user_id,          -- The user associated with the file
    upload_timestamp
FROM
    uploadcare_files
WHERE
    user_id = 'user_123'; -- Use actual authenticated user ID
```

**Using the data:**

- The query returns rows containing the file metadata stored in Neon.
- The crucial piece of information is the `file_url`. This is the direct link (CDN URL) to the file stored on Uploadcare.
- You can use this `file_url` in your application (e.g., in frontend `<img>` tags, API responses, download links) wherever you need to display or provide access to the file.

This pattern separates file storage and delivery (handled by Uploadcare) from structured metadata management (handled by Neon).

## Resources

- [Uploadcare documentation](https://uploadcare.com/docs/)
- [Uploadcare access control with signed URLs](https://uploadcare.com/docs/security/secure-delivery/)
- [Neon RLS](https://neon.com/docs/guides/neon-rls)

---

# Source: https://neon.com/llms/guides-vercel-connection-methods.txt

# Connecting to Neon from Vercel

> The document outlines methods for connecting a Neon database to a Vercel application, detailing configuration steps and connection options to ensure seamless integration between the two platforms.

## Source

- [Connecting to Neon from Vercel HTML](https://neon.com/docs/guides/vercel-connection-methods): The original HTML version of this documentation

What you will learn:

- [What connection pooling is and why it mattered for serverless](https://neon.com/docs/guides/vercel-connection-methods#the-core-change-connection-pooling)
- [The difference between "Classic Serverless" and Vercel's "Fluid compute"](https://neon.com/docs/guides/vercel-connection-methods#the-two-scenarios-classic-serverless-versus-fluid-compute)
- [How TCP, HTTP, and WebSocket connections compare on latency](https://neon.com/docs/guides/vercel-connection-methods#comparison-of-connection-methods)
- [Our recommendation for connecting from Vercel](https://neon.com/docs/guides/vercel-connection-methods#our-recommendation)

Related topics:

- [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration)
- [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration)
- [Benchmarking latency](https://neon.com/docs/guides/benchmarking-latency)

---

## Connecting to Neon from Vercel: Understanding Fluid compute

Vercel's **Fluid** compute model fundamentally changes the performance trade-offs for connecting to your Neon database.

**The short answer:** With Vercel Fluid, we recommend you use a **standard Postgres TCP connection** (e.g., with the [node-postgres package](https://node-postgres.com/)) and a connection pool. This is the new fastest and most robust method.

This guide explains why this is a change, the difference between connection methods, and what you should use.

---

## The core change: Connection pooling

The most important concept to understand is **connection pooling**.

- **Database connection:** Establishing a connection to a Postgres database is an expensive, multi-step process (called a "handshake") that takes time.
- **Connection pooling:** A connection pool is a "cache" of active database connections. When your function needs to talk to the database, it quickly grabs a connection from the pool, uses it, and then returns it (also quickly).
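As a concrete sketch, here is what a module-scope pool looks like with the node-postgres package (the `users` table, helper name, and pool size are illustrative, not from this guide):

```typescript
// db.ts: create the pool once at module scope so warm invocations reuse it
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // cap concurrent connections to stay within your database's limits
});

export async function getUser(id: number) {
  // pool.query() checks a connection out, runs the query, and returns it to the pool
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return rows[0];
}
```

Because the pool lives at module scope, a reused (warm) instance pays the connection handshake once and skips it on subsequent calls.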
The key problem with "classic" serverless was that you could not safely maintain a connection pool. Functions would be suspended while holding idle connections, leading to "leaks" that could exhaust your database's connection limit.

---

## The two scenarios: Classic serverless versus Fluid compute

How you connect depends entirely on your compute environment.

### Classic serverless (the "old way")

In a traditional serverless environment, each request spins up a new, isolated function instance. That instance runs its code and then shuts down.

- **The problem:** Because connection pools were not safe (as noted above), you had to establish a _new_ database connection on _every single request_.
- **The latency hit:** A standard TCP connection (the default for Postgres) takes the most "roundtrips" (~8) to establish. This adds significant latency to _every API call_.
- **The solution (HTTP/WebSocket):** To solve this, Neon provides the [@neondatabase/serverless](https://neon.com/docs/serverless/serverless-driver) driver, which connects over HTTP or WebSockets. These protocols have _fewer setup roundtrips_ (~3-4), making them much faster _for the first query_.

### Vercel Fluid compute (the "new way")

Vercel's Fluid model allows function runs to _reuse warm compute instances_ and share resources.

- **The opportunity:** This reuse is the key. It makes **connection pooling possible and safe** in a serverless environment.
- **How Fluid makes pooling safe:** Vercel Fluid solves the "leaked connection" problem. It keeps a function alive _just long enough_ to safely close idle connections _before_ the function is suspended, making pooling reliable.
- **The new "fastest" method:** You can now establish a TCP connection _once_ and place it in a pool. Subsequent function calls reuse that "warm" connection, skipping the ~8 roundtrip setup cost entirely.
- **The result:** Once the connection is established, a direct Postgres TCP connection is the lowest-latency and most performant way to query your database.

---

## Comparison of connection methods

This table breaks down the trade-offs, which are all about setup cost versus query speed.

| Connection Method  | Protocol      | Setup Cost (Roundtrips) | Best For...                                                                                            |
| :----------------- | :------------ | :---------------------- | :----------------------------------------------------------------------------------------------------- |
| **Postgres (TCP)** | `postgres://` | High (~8)               | **Fluid compute / Long-running servers.** (Render, Railway). Once established, it's the fastest.       |
| **HTTP**           | `http://`     | Lowest (~3)             | **Classic Serverless.** Fastest for a _single query_ where you can't pool connections.                 |
| **WebSocket**      | `ws://`       | Low (~4)                | **Classic Serverless.** A good alternative to HTTP, especially in environments that don't support it.  |

_Note: Roundtrip counts are estimates and vary based on authentication and configuration._

---

## Our recommendation

### If you are using Vercel's Fluid compute:

We recommend using a standard Postgres TCP driver (like [node-postgres](https://node-postgres.com/)) and implementing a connection pool. This will give you the best performance by paying the connection cost once and reusing the connection for subsequent queries. See Vercel's [Connection pooling with Vercel Functions](https://vercel.com/guides/connection-pooling-with-functions) guide for implementation details.

**Note** A note on benchmarking: Before migrating, we recommend you benchmark both connection methods on your own app.
While TCP with pooling is the new default, some applications with a very high number of cold starts might, in edge cases, still see an advantage from the low initial connection time of the HTTP driver.

### If you are on a "classic" serverless platform (without connection pooling):

Continue using the [@neondatabase/serverless](https://neon.com/docs/serverless/serverless-driver) driver. Its HTTP-based connection is optimized for low-latency "first queries," which is the most important metric in that environment.

You can see a live latency comparison of these three methods here: [Function latency comparison](https://function-database-latency-sigma.vercel.app)

---

# Source: https://neon.com/llms/guides-vercel-managed-integration.txt

# Connecting with the Vercel-Managed Integration

> This document details the process for integrating Neon with Vercel using the Vercel-managed integration, enabling users to seamlessly connect their Neon databases to Vercel applications.

## Source

- [Connecting with the Vercel-Managed Integration HTML](https://neon.com/docs/guides/vercel-managed-integration): The original HTML version of this documentation

What you will learn:

- [What the Vercel-Managed Integration is](https://neon.com/docs/guides/vercel-managed-integration#about-this-integration)
- [How to install it from the Vercel Marketplace](https://neon.com/docs/guides/vercel-managed-integration#installation-walkthrough)
- [How (and why) to enable automated Preview Branching](https://neon.com/docs/guides/vercel-managed-integration#enable-automated-preview-branching-recommended)
- [Where to manage billing and configuration](https://neon.com/docs/guides/vercel-managed-integration#managing--billing)

Related topics:

- [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration)
- [Manual Connections](https://neon.com/docs/guides/vercel-manual)

---

## About this integration

**Vercel-Managed Integration** (also known as _Neon Postgres Native Integration_) lets you add a Neon Postgres database to your Vercel project **with billing handled entirely inside Vercel**. Installing it:

- Creates a Neon account + project for you (if you don't already have one)
- For existing Neon users, adds a new organization named `Vercel: <team name>` to your account
- Injects the required database environment variables (`DATABASE_URL`, etc.) into your Vercel project
- Optionally creates a dedicated database branch for every Preview Deployment so you can test schema changes safely

**Note** Who should use this path?: Choose the Vercel-Managed Integration if you **do not already have a Neon account** *or* you prefer to consolidate payment for Neon inside your Vercel invoice.

---

## Installation walkthrough

## Open Neon integration

Open the [Neon integration on the Vercel Marketplace](https://vercel.com/marketplace/neon) and click **Install**.

## Add the integration in Vercel

This opens the **Install Neon** modal where you can choose between two options. Select **Create New Neon Account**, then click **Continue**.

## Complete Vercel's configuration

Accept the terms, pick a region & plan, then name your database. (Remember: a "Database" in Vercel is a **Project** in Neon.)

## View storage settings

After creation you'll land on Vercel's **Storage** tab that includes status, plan, connection string, billing plan, and more.

## Optionally open the project in the Neon Console

From the **Storage** tab, click **Open in Neon** to jump straight to your new Neon project dashboard in the Neon Console.
---

# Source: https://neon.com/llms/guides-vercel-managed-integration.txt

# Connecting with the Vercel-Managed Integration

> This document details the process for integrating Neon with Vercel using the Vercel-managed integration, enabling users to seamlessly connect their Neon databases to Vercel applications.

## Source

- [Connecting with the Vercel-Managed Integration HTML](https://neon.com/docs/guides/vercel-managed-integration): The original HTML version of this documentation

What you will learn:

- [What the Vercel-Managed Integration is](https://neon.com/docs/guides/vercel-managed-integration#about-this-integration)
- [How to install it from the Vercel Marketplace](https://neon.com/docs/guides/vercel-managed-integration#installation-walkthrough)
- [How (and why) to enable automated Preview Branching](https://neon.com/docs/guides/vercel-managed-integration#enable-automated-preview-branching-recommended)
- [Where to manage billing and configuration](https://neon.com/docs/guides/vercel-managed-integration#managing--billing)

Related topics:

- [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration)
- [Manual Connections](https://neon.com/docs/guides/vercel-manual)

---

## About this integration

**Vercel-Managed Integration** (also known as _Neon Postgres Native Integration_) lets you add a Neon Postgres database to your Vercel project **with billing handled entirely inside Vercel**. Installing it:

- Creates a Neon account + project for you (if you don't already have one)
- For existing Neon users, adds a new organization named `Vercel: ` to your account
- Injects the required database environment variables (`DATABASE_URL`, etc.) into your Vercel project
- Optionally creates a dedicated database branch for every Preview Deployment so you can test schema changes safely

**Note** Who should use this path?: Choose the Vercel-Managed Integration if you **do not already have a Neon account** *or* you prefer to consolidate payment for Neon inside your Vercel invoice.

---

## Installation walkthrough

## Open Neon integration

Open the [Neon integration on the Vercel Marketplace](https://vercel.com/marketplace/neon) and click **Install**.

## Add the integration in Vercel

This opens the **Install Neon** modal where you can choose between two options. Select **Create New Neon Account**, then click **Continue**.

## Complete Vercel's configuration

Accept the terms, pick a region & plan, then name your database. (Remember: a "Database" in Vercel is a **Project** in Neon.)

## View storage settings

After creation you'll land on Vercel's **Storage** tab that includes status, plan, connection string, billing plan, and more.

## Optionally open the project in the Neon Console

From the **Storage** tab, click **Open in Neon** to jump straight to your new Neon project dashboard in the Neon Console. You'll notice it lives in an organization named `Vercel: `.

---

## Connecting the database to a Vercel project

1. In **Storage → `` → Connect Project** choose the Vercel project and the environments that should receive database variables (Development, Preview, Production).
2. (Optional) Under **Advanced Options → Deployments Configuration** enable **Preview** to turn on _Preview Branching_ (see next section).
3. Click **Connect**.

**Tip** Environment variable prefix: You can add a prefix if you have multiple databases in the same project, e.g. `PRIMARY_`.

---

## Enable automated preview branching (recommended)

Preview branching creates an isolated Neon branch (copy-on-write) for every Vercel Preview Deployment so database schema changes can be tested safely.

To enable:

1. While connecting the project (step above) toggle **Required → Preview**.
2. Make sure **Resource must be active before deployment** is also on. This allows Vercel to wait for the branch to be ready.

Once enabled, the flow looks like this:

1. Developer pushes to feature branch → Vercel kicks off Preview Deployment.
2. Vercel sends a webhook to Neon → Neon creates branch `preview/`.
3. Environment variables for the branch connection are injected via webhook at deployment time, overriding preview environment variables for this deployment only (cannot be accessed or viewed in your Vercel project's environment variable settings).
4. (Optional) Run migrations in the build step so the schema matches the code.

To apply schema changes automatically, add migration commands to your Vercel build configuration:

1. Go to **Vercel Dashboard → Settings → Build and Deployment Settings**
2. Enable **Override** and add your build commands, including migrations, for example:

```bash
npx prisma migrate deploy && npm run build
```

This ensures schema changes in your commits are applied to each preview deployment's database branch.

### Test the setup

To verify preview branching works:

1. Create a local branch: `git checkout -b test-feature`
2. Make any change and commit: `git commit -a -m "Test change"`
3. Push: `git push`
4. Check Vercel deployments and Neon Console branches to confirm the preview branch was created

---

## Automatic branch cleanup

Preview branches are automatically deleted when their corresponding Vercel deployments are deleted. This keeps your Neon project organized and reduces storage usage.

**How it works:**

- Each Git branch can have multiple Vercel deployments, all using the same Neon branch.
- When the last deployment for a Git branch is deleted (manually or via Vercel's deployment retention policy), Neon automatically deletes the corresponding database branch.
- Cleanup happens when deployments are deleted, which you can configure using [Vercel's retention policy settings](https://vercel.com/docs/deployment-retention). By default, Pre-Production Deployments (preview environments) are retained for 180 days:

**Note**: This deployment-based cleanup differs from the [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration), which deletes branches when Git branches are deleted.
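If you want to audit which preview branches still exist (for example, to confirm cleanup ran), you can list branches programmatically via the Neon API. A minimal sketch; `NEON_API_KEY` and `NEON_PROJECT_ID` are assumed environment variables, and the `preview/` prefix follows the naming convention described above:

```typescript
// Hedged sketch: list branches in a Neon project and flag remaining preview branches.
const res = await fetch(
  `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches`,
  { headers: { Authorization: `Bearer ${process.env.NEON_API_KEY}` } }
);
const { branches } = await res.json();
for (const branch of branches) {
  if (branch.name.startsWith('preview/')) {
    console.log(`preview branch still present: ${branch.name}`);
  }
}
```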
---

## Managing & billing

Because your database is managed by Vercel, you can only perform these actions **in the Vercel dashboard**:

- Change plan, billing tier, or scale settings (compute size, autoscaling, scale-to-zero)
- View or modify database configuration via **Storage → Settings → Change Configuration**
- Monitor usage via **Storage → Usage** (also available in Neon Console)
- Create additional databases (each becomes a new Neon project)
- Rename or delete a database (deleting removes the underlying Neon project permanently)
- Manage members / collaborators (handled through Vercel "Members", not the Neon Console; see the [FAQ](https://neon.com/docs/guides/vercel-managed-integration#frequently-asked-questions-faq) for details)
- Delete the Neon organization (only happens automatically if you uninstall the integration)
- Update connection-string environment variables (prefix changes, etc.)

Everything else (querying data, branching, monitoring usage) works exactly the same in the Neon Console.

### Team member synchronization

Team membership changes in Vercel automatically sync to your Neon organization:

- **Role changes**: When a team member's role changes in Vercel, their Neon role updates based on Vercel's JWT token mapping (see the [FAQ](https://neon.com/docs/guides/vercel-managed-integration#why-do-vercel-team-members-with-member-role-have-the-admin-role-in-neon) for details). Most Vercel roles (Owner, Admin, Member) map to 'Admin' in Neon, while read-only roles (Viewer, Billing) map to 'Member' in Neon.
- **Removals**: When a user is removed from your Vercel team, they're automatically removed from the associated Neon organization.

This ensures both platforms stay aligned for security and access control.

### Project transfers between teams

When you transfer a Vercel project to another team, the linked Neon project automatically moves to the new team's Neon organization:

- The linked Neon project moves from the old organization to the new one.
- Environment variables and settings transfer with it.
- If the destination's plan doesn't support the project's requirements (autoscaling limits, point-in-time restore window, etc.), you'll be prompted to upgrade.

This eliminates the need to manually reconfigure integrations when reorganizing projects.

---

## Common operations

### Add another database (project)

1. Go to **Integrations → Neon Postgres → Manage → More Products → Install**
2. Select region, scale settings, and plan
3. Specify a **Database Name** and click **Create**

### Change compute / scale settings

**Storage → Settings → Change Configuration** lets you resize compute, adjust scale-to-zero, or switch Neon plan tiers. Changes apply to _all_ databases in the installation.

**Important**: Changing your plan affects **all databases** in this integration, not just the current one.

### Delete the database

Deleting from Vercel permanently removes the Neon project and all data. This cannot be undone. To delete:

1. Vercel Dashboard → Storage → Settings
2. Select your database
3. Find the Delete Database section and confirm

### Disconnect a project from database

To disconnect a Vercel project without deleting the database:

1. Go to **Storage → `` → Projects**
2. Select your project and choose **Remove Project Connection**

This removes database environment variables from your Vercel project but keeps the database intact. Previously created preview branches remain but new ones won't be created.
### Manage branches created by the integration Preview branches are automatically deleted when their deployments expire, but you can also manually delete branches via: - [Neon Console](https://neon.com/docs/manage/branches#delete-a-branch) - Individual or bulk deletion - [Neon CLI](https://neon.com/docs/reference/cli-branches#delete) - Command line management - [Neon API](https://neon.com/docs/manage/branches#delete-a-branch-with-the-api) - Programmatic cleanup **Note** Unused branches are archived: Branches you don't delete are eventually archived, consuming archive storage space. See [Branch archiving](https://neon.com/docs/guides/branch-archiving). --- ## Environment variables set by the integration | Variable | Purpose | | :---------------------------------------------------------------- | :------------------------------------------------------------------ | | `DATABASE_URL` | Pooled connection string (PgBouncer) | | `DATABASE_URL_UNPOOLED` | Direct connection string | | `PGHOST`, `PGHOST_UNPOOLED`, `PGUSER`, `PGDATABASE`, `PGPASSWORD` | Raw pieces to build custom strings | | `POSTGRES_*` (legacy) | Provided for backwards compatibility with Vercel Postgres templates | | `NEXT_PUBLIC_STACK_PROJECT_ID`, `STACK_SECRET_SERVER_KEY`, etc. | Neon Auth variables for drop-in authentication | > **Neon Auth variables** automatically sync user profiles to your database in the `neon_auth.users_sync` table, enabling authentication without additional setup. Learn more in the [Neon Auth guide](https://neon.com/docs/guides/neon-auth). --- ## Limitations - You cannot use this integration with the **Neon-Managed integration** in the same Vercel project - **Neon CLI access**: Requires API key authentication (the `neon auth` command won't work since the account is Vercel-managed) - Cannot install if you currently use Vercel Postgres (deprecated) - contact Vercel about transitioning - **Preview deployment environment variables**: Branch-specific connection variables cannot be accessed or viewed in your Vercel project's environment variable settings (they're injected at deployment time only and not stored to avoid manual cleanup when branches are deleted) ## Frequently Asked Questions (FAQ) ### Why can't I see Vercel team members in the Neon Console? Users added to your Vercel team aren't automatically visible in the Neon organization. Team members only appear in Neon when they: 1. Click the **Open in Neon** button from the Vercel integration page 2. Complete the authentication flow ### Why do Vercel team members with 'Member' role have the 'Admin' role in Neon? This occurs due to how Vercel's JWT tokens map roles to the integration. According to [Vercel's documentation](https://vercel.com/docs/integrations/create-integration/marketplace-api#user-authentication), the JWT token's `user_role` claim doesn't directly map Vercel team roles: - **ADMIN role in JWT**: Granted to users capable of installing integrations (includes Vercel Owner, Admin, and Member roles) → maps to Admin in Neon. - **USER role in JWT**: Only granted to users with read-only Vercel roles (includes Billing and Viewer roles) → maps to Member in Neon. As a result, most active Vercel team members receive Admin access in the Neon organization. This is expected behavior and ensures team members can fully manage database resources. 
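One practical note on the environment variable table above: a common pattern is to serve application traffic over the pooled `DATABASE_URL` while pointing migration tooling at `DATABASE_URL_UNPOOLED`, since some schema and migration tools require a direct (non-PgBouncer) connection. A hedged sketch with node-postgres; the helper name is illustrative:

```typescript
import { Pool, Client } from 'pg';

// App queries go through the pooled (PgBouncer) connection string.
export const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Migrations use the direct connection string set by the integration.
export async function runMigration(statements: string[]) {
  const client = new Client({ connectionString: process.env.DATABASE_URL_UNPOOLED });
  await client.connect();
  try {
    for (const statement of statements) {
      await client.query(statement);
    }
  } finally {
    await client.end();
  }
}
```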
---

# Source: https://neon.com/llms/guides-vercel-manual.txt

# Connect Vercel and Neon manually

> The document "Connect Vercel and Neon manually" outlines the step-by-step process for manually integrating a Neon database with a Vercel application, detailing configuration settings and connection parameters specific to Neon's environment.

## Source

- [Connect Vercel and Neon manually HTML](https://neon.com/docs/guides/vercel-manual): The original HTML version of this documentation

What you will learn:

- [When to use manual connections over integrations](https://neon.com/docs/guides/vercel-manual#when-to-choose-this-path)
- [How to connect using environment variables](https://neon.com/docs/guides/vercel-manual#connection-steps)
- [Advanced CI/CD automation options](https://neon.com/docs/guides/vercel-manual#cicd-based-preview-branching-github-actions)

Related topics:

- [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration)
- [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration)
- [Automate branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions)

---

## When to choose this path

Choose manual connection if you prefer not to install a Marketplace integration. This approach is ideal when you:

- Deploy via a custom pipeline (self-hosted CI, monorepo, etc.)
- Need non-Vercel hosting (e.g. Cloudflare Workers + Vercel Functions hybrid)
- Want full control over branch naming, seeding, migration, or teardown

If you simply want Neon and Vercel with minimal setup, stick to the managed integrations. They're simpler and include UI support.

---

## Prerequisites

- Neon project with database (get a connection string via **Connect** in the Console)
- Deployed Vercel project

---

## Connection steps

1. Copy the connection string from the [Neon Console](https://console.neon.tech). Click **Connect** on your Project Dashboard, select the branch, role, and database you want, then copy the _Connection string_. For example:

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

In this example, `alex` is the role, `AbC123dEf` is the password, `ep-cool-darkness-123456.us-east-2.aws.neon.tech` is the hostname, and `dbname` is the database.

2. In the Vercel dashboard, open your project and navigate to **Settings → Environment Variables**.

3. Add either:

```text
Key: DATABASE_URL
Value: 
```

_or_ the granular `PG*` variables:

```text
PGUSER=alex
PGHOST=ep-cool-darkness-123456.us-east-2.aws.neon.tech
PGDATABASE=dbname
PGPASSWORD=AbC123dEf
PGPORT=5432
```

**Note**: Neon uses the default Postgres port, `5432`.

4. Select which environments need database access (Production, Preview, Development) and click **Save**.

5. Redeploy your application (or wait for your next deployment) for the variables to take effect.

That's it. Your Vercel app now connects to Neon just like any other Postgres database.

---

## CI/CD-based Preview Branching (GitHub Actions)

Looking for a full CI/CD recipe? See **[Automate branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions)**.

---

# Source: https://neon.com/llms/guides-vercel-overview.txt

# Integrating Neon with Vercel

> The document outlines the process for integrating Neon with Vercel, detailing steps for connecting a Neon database to a Vercel application to facilitate seamless deployment and management of serverless applications.
## Source - [Integrating Neon with Vercel HTML](https://neon.com/docs/guides/vercel-overview): The original HTML version of this documentation ## Overview This page helps you quickly choose the best Neon–Vercel integration for your project. Whether you're starting fresh or have existing infrastructure, we'll guide you to the right solution. **Tip** Quick decision guide: Choose the **Vercel-Managed Integration** if you're new to Neon and want unified billing through Vercel. Choose the **Neon-Managed Integration** if you already have a Neon account or prefer to manage billing directly with Neon. --- ## Compare the options at a glance | Feature / Attribute | [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration) | [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration) | [Manual Connection](https://neon.com/docs/guides/vercel-manual) | | :---------------------- | :---------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------- | :---------------------------------------------- | | **Ideal for** | New users, teams wanting a single Vercel bill | Existing Neon users, direct Neon billing | Integration not required or custom | | **Neon account** | Created automatically via Vercel | Pre-existing Neon account | Pre-existing Neon account | | **Billing** | Paid **through Vercel** | Paid **through Neon** | Paid **through Neon** | | **Setup method** | Vercel Marketplace → Native Integrations → "Neon Postgres" | Vercel Marketplace → Connectable Accounts → "Neon" | Manual env-vars | | **Preview Branching** | ✅ | ✅ | ✖️ | | **Branch cleanup** | Automatic (deployment-based) | Automatic (Git-branch-based) | N/A | | **Implementation type** | [Native Integration](https://vercel.com/docs/integrations/install-an-integration/product-integration) | [Connectable Account](https://vercel.com/docs/integrations/install-an-integration/add-a-connectable-account) | N/A | --- ## Choose your integration path **Important** Do you need custom CI/CD control?: **If you want to build preview branching into your own CI/CD pipelines (e.g., via GitHub Actions)**, use a **[manual connection](https://neon.com/docs/guides/vercel-manual)** instead of the automated integrations below. For automated integrations, follow this simple flow: ## Do you have an existing Neon account? **Do you already have a Neon account or project you want to keep using?** - **✅ Yes** → Use **[Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration)** - **❌ No** → Continue below ## Choose your billing preference **Where would you like to manage billing for Neon?** - **Through my Vercel account** → Use **[Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration)** - **Directly with Neon** → Use **[Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration)** --- ## Integration options overview - [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration): Create and manage Neon databases directly from your Vercel dashboard. Supports preview branches. - [Neon-Managed Integration](https://neon.com/docs/guides/neon-managed-vercel-integration): Link an existing Neon project to Vercel and keep billing in Neon. Supports preview branches. 
- [Manual Connection](https://neon.com/docs/guides/vercel-manual): Connect your Vercel project to a Neon database manually.

---

## Next steps

## Get Started Checklist

- [ ] Choose your integration type: Select Vercel-Managed, Neon-Managed, or Manual based on the decision flow above
- [ ] Follow the setup guide: Click through to your chosen integration's detailed documentation
- [ ] Configure preview branching: Set up database branching for your development workflow
- [ ] Test your connection: Verify your database connection works in both production and preview environments

---

# Source: https://neon.com/llms/guides-vercel-postgres-transition-guide.txt

# Vercel Postgres Transition Guide

> The Vercel Postgres Transition Guide offers detailed instructions for Neon users on migrating their database from Vercel Postgres to Neon, covering configuration adjustments and connection string updates specific to Neon's platform.

## Source

- [Vercel Postgres Transition Guide HTML](https://neon.com/docs/guides/vercel-postgres-transition-guide): The original HTML version of this documentation

What you will learn:

- [What changed in your setup](https://neon.com/docs/guides/vercel-postgres-transition-guide#what-changed-for-you)
- [How billing and plans are affected](https://neon.com/docs/guides/vercel-postgres-transition-guide#billing-and-plans)
- [What new features you can access](https://neon.com/docs/guides/vercel-postgres-transition-guide#new-features-available)
- [Technical compatibility information](https://neon.com/docs/guides/vercel-postgres-transition-guide#compatibility-notes)

Related topics:

- [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration)
- [Migrate from Vercel SDK to Neon](https://neon.com/docs/guides/vercel-sdk-migration)

---

## About the transition

Vercel transitioned all Vercel Postgres stores to Neon's native integration (Q4 2024 - Q1 2025). Instead of managing Postgres directly, Vercel now offers database integrations through the [Vercel Marketplace](https://vercel.com/marketplace), giving users more storage options and features.

**Note** Terminology change: What Vercel calls a "Database" is called a "Project" in Neon. Everything else works the same.
--- ## What changed for you ### Access and management - **Same login**: Access your databases from both Vercel's **Storage** tab and the Neon Console - **New management options**: Click **Open in Neon** to access advanced database features - **Unified billing**: Everything remains billed through Vercel (no separate Neon billing) ### Automatic plan transitions - **Hobby Plan users** → Neon Free plan (better limits, more features) - **Pro Plan users** → Maintained existing limits with option to upgrade to Neon plans --- ## Billing and plans ### Plan comparison | Plan Transition | Compute Hours | Storage | Databases | Key Changes | | :--------------- | :------------ | :-------------- | :-------- | :-------------------------- | | **Hobby → Free** | 60 → 190 | 256 MB → 512 MB | 1 → 10 | Significant improvements | | **Pro → Legacy** | 100 (same) | 256 MB (same) | 1 (same) | No change until you upgrade | ### Cost comparison (Pro Plan) | Resource | Vercel Pro | Neon Launch ($19/mo) | | :----------------------- | :--------- | :--------------------- | | **Included compute** | 100 hours | 300 hours | | **Included storage** | 256 MB | 10 GB | | **Extra compute** | $0.10/hour | $0.16/hour | | **Extra storage** | $0.12/GB | $1.75/GB (after 10 GB) | | **Data transfer** | $0.10/GB | Free | | **Additional databases** | $1.00 each | Free (up to 100) | **Tip** Upgrade to unlock features: Pro Plan users can stay on legacy limits or upgrade to a Neon plan to access branching, instant restore, and higher limits. [See how to upgrade](https://neon.com/docs/guides/vercel-managed-integration#changing-your-plan). ### Enterprise customers Neon is working with the Vercel team to transition Enterprise customers. If you want to speak to us about an Enterprise-level Neon plan, you can [get in touch with our sales team](https://neon.com/contact-sales). 
---

## New features available

### Immediate access (all users)

- **Neon Console** - Dedicated database management interface
- **CLI support** - Full [Neon CLI](https://neon.com/docs/reference/neon-cli) (Vercel CLI didn't support Postgres)
- **Terraform support** - [Neon Terraform provider](https://neon.com/docs/reference/terraform)
- **Multiple Postgres roles** - No longer limited to a single role
- **Larger computes** - Up to 2 vCPUs on the Free plan (vs the 0.25 vCPU limit), more on paid plans
- **Multiple Postgres versions** - Vercel Postgres was limited to Postgres 15; Neon supports Postgres 14, 15, 16, and 17
- **[Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api)** - Programmatic project and database management
- **[Organization accounts](https://neon.com/docs/manage/organizations)** - Team and project management
- **[Monitoring](https://neon.com/docs/introduction/monitoring-page)** - Database monitoring from the Neon Console

### Advanced features (Neon plan required)

- **[Database branching](https://neon.com/docs/guides/branching-intro)** - Branch your database like Git
- **[Instant restore](https://neon.com/docs/guides/branch-restore)** - Point-in-time recovery (was disabled in Vercel Postgres)
- **[Autoscaling](https://neon.com/docs/introduction/autoscaling)** - Automatic performance scaling
- **[Scale to zero](https://neon.com/docs/introduction/scale-to-zero)** - Cost-saving idle scaling
- **[Read replicas](https://neon.com/docs/introduction/read-replicas)** - Offload read queries
- **[Time Travel](https://neon.com/docs/guides/time-travel-assist)** - Query historical data
- **[Protected branches](https://neon.com/docs/guides/protected-branches)** - Protect production data
- **[Schema Diff](https://neon.com/docs/guides/schema-diff)** - Compare schema changes between branches
- **[Logical Replication](https://neon.com/docs/guides/logical-replication-guide)** - Replicate data to and from Neon
- **[IP Allow](https://neon.com/docs/introduction/ip-allow)** - Limit access to trusted IP addresses
- **[Neon GitHub Integration](https://neon.com/docs/guides/neon-github-integration)** - Connect projects to GitHub repos

---

## Compatibility notes

### SDKs and drivers

**Current Vercel SDK** (`@vercel/postgres`):

- ✅ **Still works** - No immediate action required
- ⚠️ **Will be deprecated** - No longer actively maintained by Vercel

**Migration options**:

1. **Maintenance mode**: Switch to `@neondatabase/vercel-postgres-compat` (drop-in replacement; see the sketch at the end of this section)
2. **New projects**: Use `@neondatabase/serverless` (actively developed)
3. **Existing apps**: Follow our [migration guide](https://neon.com/guides/vercel-sdk-migration)

### ORMs and tools

All existing integrations continue to work:

- Drizzle, Prisma, Kysely
- All Postgres-compatible tools
- Existing environment variables

### Templates and environment variables

- **Existing templates**: [Environment variables](https://neon.com/docs/guides/vercel-managed-integration#environment-variables-set-by-the-integration) used by Vercel Postgres templates continue to work
- **New templates**: Find updated [Neon templates](https://vercel.com/templates?database=neon) and [Postgres templates](https://vercel.com/templates?database=neon&database=postgres) on Vercel

### Regional support

All Vercel Postgres regions are supported in Neon - no changes needed.
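To make the maintenance-mode option above concrete, here is a hedged sketch of the swap. The compat package is described as a drop-in replacement, so only the import should need to change; the table and query below are illustrative, and the connection comes from the legacy `POSTGRES_*` variables the integration provides:

```typescript
// Before: import { sql } from '@vercel/postgres';
// After: the drop-in compat package exposes the same tagged-template API.
import { sql } from '@neondatabase/vercel-postgres-compat';

export async function latestUsers() {
  // Illustrative query; connection details come from the env vars set by the integration.
  const { rows } = await sql`SELECT id, email FROM users ORDER BY id DESC LIMIT 10`;
  return rows;
}
```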
--- ## Next steps ## Recommended actions - [ ] [Explore the Neon Console](https://neon.com/docs/guides/vercel-postgres-transition-guide#new-features-available) Click "Open in Neon" from your Vercel Storage tab to see advanced features - [ ] [Consider upgrading your plan](https://neon.com/docs/guides/vercel-postgres-transition-guide#billing-and-plans) Unlock branching, instant restore, and higher limits with Neon plans - [ ] [Plan SDK migration](https://neon.com/docs/guides/vercel-postgres-transition-guide#compatibility-notes) Review migration options for the Vercel SDK to avoid future compatibility issues - [ ] [Test new features](https://neon.com/docs/guides/vercel-postgres-transition-guide#new-features-available) Try database branching for development environments --- ## Questions or issues? - **General questions**: Visit our [Discord #vercel-postgres-transition](https://discord.com/channels/1176467419317940276/1306544611157868544) channel - **Enterprise customers**: [Contact our sales team](https://neon.com/contact-sales) for transition support - **Technical support**: Use the standard Neon support channels --- # Source: https://neon.com/llms/guides-vue.txt # Connect a Vue.js application to Neon > The document details the steps to connect a Vue.js application to a Neon database, including setting up the database, configuring environment variables, and integrating the database connection within the Vue.js app. ## Source - [Connect a Vue.js application to Neon HTML](https://neon.com/docs/guides/vue): The original HTML version of this documentation Vue.js is a progressive JavaScript framework for building user interfaces. Neon Postgres should be accessed from the server side in Vue.js applications. You can achieve this using Vue.js meta-frameworks like Nuxt.js or Quasar Framework. ## Vue Meta-Frameworks Find detailed instructions for connecting to Neon from various Vue.js meta-frameworks. - [Nuxt.js](https://neon.com/docs/guides/nuxt): Connect a Nuxt.js application to Neon --- # Source: https://neon.com/llms/guides-wundergraph.txt # Use WunderGraph with Neon > The document outlines the process of integrating WunderGraph with Neon, detailing steps for setting up a Neon database and configuring WunderGraph to interact with it effectively. ## Source - [Use WunderGraph with Neon HTML](https://neon.com/docs/guides/wundergraph): The original HTML version of this documentation _This guide was contributed by the team at WunderGraph_ WunderGraph is an open-source Backend for Frontend (BFF) framework designed to optimize developer workflows through API composition. Developers can use this framework to compose multiple APIs into a single unified interface and generate typesafe API clients that include authentication and file uploads. This guide shows how you can pair WunderGraph with your Neon database to accelerate application development. With WunderGraph, you can easily introspect your data sources and combine them within your virtual graph. WunderGraph treats APIs as dependencies. You can easily turn your Neon database into a GraphQL API or expose it via JSON-RPC or REST. With an easy-to-deploy Postgres database like Neon, you can now have a 100% serverless stack and build your own stateful serverless apps on the edge. This guide demonstrates setting up a full-stack app with Neon and WunderGraph, securely exposing Neon to your Next.js frontend in under 15 minutes. While WunderGraph and Neon are compatible with a variety of frontend clients, this demo focuses on using Next.js. 
**Info**: This guide is also available in video format: [Neon with WunderGraph video guide](https://neon.com/docs/guides/wundergraph#neon-with-wundergraph-video-guide).

## Prerequisites

- A [WunderGraph Cloud](https://cloud.wundergraph.com/) account
- A Neon project. See [Create a Neon project](https://neon.com/docs/manage/projects#create-a-project).

## Installation

Sign in to [WunderGraph Cloud](https://cloud.wundergraph.com/) and follow these steps:

1. Click **New Project**.
2. Choose the `NEXT.js` template and give your repository a name.
3. Select the region closest to you.
4. Click **Deploy**.

The deployment will take a few moments.

### Add sample data to Neon

While the project is deploying, add some sample data to your Neon database.

1. Navigate to the [Neon Console](https://console.neon.tech/) and select **SQL Editor** from the sidebar.
2. Run the following SQL statements to add the sample data.

```sql
create table if not exists Users (
  id serial primary key not null,
  email text not null,
  name text not null,
  unique (email)
);

create table if not exists Messages (
  id serial primary key not null,
  user_id int not null references Users(id),
  message text not null
);

insert into Users (email, name) VALUES ('Jens@wundergraph.com','Jens@WunderGraph');
insert into Messages (user_id, message) VALUES ((select id from Users where email = 'Jens@wundergraph.com'),'Hey, welcome to the WunderGraph!');
insert into Messages (user_id, message) VALUES ((select id from Users where email = 'Jens@wundergraph.com'),'This is WunderGraph!');
insert into Messages (user_id, message) VALUES ((select id from Users where email = 'Jens@wundergraph.com'),'WunderGraph!');

alter table Users add column updatedAt timestamptz not null default now();
alter table Users add column lastLogin timestamptz not null default now();
```

### Connect Neon and WunderGraph

1. Now that your database has some data, navigate back to WunderGraph Cloud.
2. Select the project you just created and navigate to the **Settings** page.
3. Select the **Integrations** tab and click **Connect Neon**.
4. You are directed to Neon to authorize WunderGraph. Review the permissions and click **Authorize** to continue. You are directed back to WunderGraph Cloud. If you are a part of multiple organizations, you are asked to select the organization to connect with Neon.
5. Select the Neon project and WunderGraph project that you want to connect, and click **Connect Projects**.

Your Neon and WunderGraph projects are now connected.

**Important**: WunderGraph creates a role named `wundergraph-$project_id` in the Neon project that you selected during the integration process. Please do not delete or change the password for this role. WunderGraph configures an environment variable called `NEON_DATABASE_URL`. Please use this variable wherever you need to provide a database URL.

## Set up the WunderGraph project locally

The following steps describe how to set up your WunderGraph project locally and configure access to Neon.

1. In WunderGraph Cloud, select your project and click **View Git repository** to view your WunderGraph project repository.
2. Clone the repository and open it in your IDE. For example:

```bash
git clone https://github.com//wundergraph.git
cd wundergraph
code .
```

3. After the project is cloned, run the following commands in your project directory:

```bash
npm install && npm run dev
```

These commands install the required dependencies and start your project locally.
4. Inside the `.wundergraph` directory, open the `wundergraph.config.ts` file and add Neon as a datasource, as shown below, or simply replace the existing code with this code:

```typescript
import {
  configureWunderGraphApplication,
  introspect,
  authProviders,
  templates,
  EnvironmentVariable,
} from '@wundergraph/sdk';
import operations from './wundergraph.operations';
import server from './wundergraph.server';

const spaceX = introspect.graphql({
  apiNamespace: 'spacex',
  url: 'https://spacex-api.fly.dev/graphql/',
});

// Add your neon datasource
const neon = introspect.postgresql({
  apiNamespace: 'neon',
  // Your database URL can be found in the Neon Console
  databaseURL: new EnvironmentVariable('NEON_DATABASE_URL'),
});

configureWunderGraphApplication({
  // Add neon inside your APIs array
  apis: [spaceX, neon],
  server,
  operations,
  codeGenerators: [
    {
      templates: [...templates.typescript.all],
    },
  ],
});
```

5. Write an operation that turns your Neon database into an API that exposes data that you can pass through to the frontend. To do so, navigate to the `operations` folder inside your `.wundergraph` directory and create a new file called `Users.graphql`.

**Info**: With WunderGraph you can write operations in either GraphQL or TypeScript.

Inside your `Users.graphql` file, add the following code:

```graphql
{
  neon_findFirstusers {
    id
    name
    email
  }
}
```

This operation queries your Neon database using GraphQL and exposes the data via JSON-RPC. In the next section, you will add the operation to the frontend.

## Configure the frontend

This section describes how to configure the frontend application.

1. In your local project, navigate to the `pages` directory and open the `index.tsx` file.
2. In the `index.tsx` file, make the following three changes or replace the existing code with the code shown below:

   - Retrieve the data from the `Users` endpoint using the `useQuery` hook.
   - On line 62, update the copy to read: "This is the result of your **Users** Query".
   - On line 66, pass the `users` variable through to the frontend.

```typescript
import { NextPage } from 'next';
import { useQuery, withWunderGraph } from '../components/generated/nextjs';

const Home: NextPage = () => {
  const dragons = useQuery({
    operationName: 'Dragons',
  });

  // We want to write this hook to get the data from our Users operation
  const users = useQuery({
    operationName: 'Users',
  });

  const refresh = () => {
    dragons.mutate();
  };

  return (

    <div>
      <h1>WunderGraph & Next.js</h1>
      <p>
        Use{' '}
        <a href="https://wundergraph.com" target="_blank" rel="noreferrer">
          WunderGraph
        </a>{' '}
        to make your data-source accessible through JSON-RPC to your Next.js app.
      </p>
      <p>
        This is the result of your{' '}
        <code>Users</code>{' '}
        operation.
      </p>
      {/* update dragons to users */}
      <pre>{JSON.stringify(users, null, 2)}</pre>
      <p>
        Visit{' '}
        <a href="https://github.com/wundergraph/wundergraph" target="_blank" rel="noreferrer">
          GitHub
        </a>{' '}
        to learn more about WunderGraph.
      </p>
    </div>

  );
};

export default withWunderGraph(Home);
```

## Run the application

1. Run `npm run dev`.
2. Navigate to http://localhost:3000 when the application is finished building. If your application runs successfully, you should see the result of your `Users` operation.
3. To take the setup one step further, commit the changes to your GitHub repository and merge them into your `main` branch.
4. After you merge the changes, navigate to WunderGraph Cloud and view the **Deployments** tab. You should see that a deployment was triggered. Give the deployment a few seconds to finish.
5. When the deployment is ready, navigate to the **Operations** tab. You should see the new endpoint that you created and added to your application. Click it to see your data in real time.

## Key takeaways

This guide provided a brief demonstration showcasing the capabilities of Neon and WunderGraph, which enable you to turn your Neon database into an API exposed via JSON-RPC and rapidly deploy fully serverless apps on the edge in a matter of minutes. The power of Neon with WunderGraph lies in simplifying the development process, allowing you to focus on creating valuable and efficient applications.

In under 15 minutes, you were able to:

1. Create a WunderGraph Cloud account
2. Create a Next.js project hosted in a region near you
3. Set up a Neon database with sample data
4. Connect your WunderGraph application with your Neon database
5. Add Neon to your WunderGraph project using a code-first approach
6. Write a GraphQL operation to query your Neon database
7. Update the frontend to display the results of your GraphQL operation securely using JSON-RPC
8. Commit your changes and trigger a deployment without a CI/CD pipeline or DevOps team
9. View your new operations in real time with real-time metrics

If you had trouble with any of the steps outlined above, refer to the video guide below.

## Neon with WunderGraph video guide

---

# Source: https://neon.com/llms/import-import-data-assistant.txt

# Import Data Assistant

> The "Import Data Assistant" documentation guides Neon users through the process of importing data into their databases, detailing the steps and requirements necessary for successful data migration.

## Source

- [Import Data Assistant HTML](https://neon.com/docs/import/import-data-assistant): The original HTML version of this documentation

When you're ready to move your data to Neon, our Import Data Assistant can help you automatically copy your existing database to Neon. You only need to provide a connection string to get started.

**Note** Beta: **Import Data Assistant** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

**Tip** Migrate between Neon projects: You can also use the **Import Data Assistant** to migrate data between Neon projects. This is useful if you want to upgrade to a newer Postgres version (for example, from Postgres 16 to 17), or move your database to a different region. Just create a new project with the desired Postgres version or region, then use the database connection string from your existing Neon project to import the data into the new one.

## Ways to import

The Import Data Assistant always creates a **new branch** for your imported data. There are two ways to launch the import:
1. **From the Projects page:** Start from the project list to create a new project and import your data into a new branch as part of the flow.
2. **From within a project:** Use the Getting Started widget on a project dashboard to import your data into a new branch of the existing project.

Both options use the same automated import process — just provide your database connection string and we'll handle the rest.

## Before you start

You'll need:

- A **Neon account**. Sign up at [Neon](https://neon.tech) if you don't have one.
- A **connection string** to your current database in this format:

```
postgresql://username:password@host:port/database?sslmode=require&channel_binding=require
```

- **Admin privileges** on your source database. We recommend using a superuser or a user with the necessary `CREATE`, `SELECT`, `INSERT`, and `REPLICATION` privileges.
- A database **smaller than 10 GB** in size for automated import
- We recommend migrating to a Neon project created in the same region as your current database. This helps ensure a faster import. There is a 1-hour time limit on import operations.

## Check Compatibility

Enter your database connection string and we'll verify:

- Database size is within the current 10 GB limit
- Postgres version compatibility (Postgres 14 to 17)
- Extension compatibility
- Region availability

## Import Your Data

Once checks pass, we'll:

- Create a new branch for your imported data.
- Copy your data automatically using `pg_dump` and `pg_restore`.
- Verify that the import completed successfully.

**Note**: During import, your source database remains untouched — we only read from it to create a copy in Neon.

### Known Limitations

- Currently limited to databases **smaller than 10 GB**. We are actively working on supporting bigger workloads. In the meantime, contact support if you are looking to migrate bigger databases.
- There is a 1-hour limit on import operations. For faster imports, we recommend importing to a Neon project created in the same region as your source database.
- The feature is supported in **AWS regions** only.
- Supabase and Heroku databases are not supported due to unsupported Postgres extensions.
- Databases running on **IPv6 are not supported yet**.
- AWS RDS is generally supported, though some incompatibilities may exist. Support for other providers may vary.

## Next Steps

After a successful import:

1. Find your newly imported database branch on the **Branches** page of your project. Imported branches are typically named with a timestamp.
2. Run some test queries to ensure everything imported correctly.
3. Click on the three dots next to the branch name and select **Set as default** to make it your default branch.
4. Optional cleanup:
   - Delete the old branches (`production` and `development`) if they are no longer needed.
   - Rename the new branch to `production` for clarity and consistency.
5. Switch your connection string to point to your new Neon database.

## Need Help?

- For **technical issues**: [Contact support](https://neon.com/docs/introduction/support)
- For **provider-specific questions**: Let us know what database provider you're using when you contact us

If your database import failed for any reason, please [contact our support team](https://neon.com/docs/introduction/support). We're here to help you get up and running.
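For step 2 under **Next Steps**, one quick sanity check is to compare approximate row counts on the imported branch against your source. A minimal sketch using the Neon serverless driver; `IMPORTED_BRANCH_URL` is an assumed variable holding the new branch's connection string:

```typescript
// Hedged sketch: list the largest tables on the imported branch.
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.IMPORTED_BRANCH_URL!);

const tables = await sql`
  SELECT relname AS table_name, n_live_tup AS approx_rows
  FROM pg_stat_user_tables
  ORDER BY n_live_tup DESC
  LIMIT 10`;
console.table(tables);
```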
---

# Source: https://neon.com/llms/import-import-from-csv.txt

# Import data from CSV

> The document outlines the process for importing data from CSV files into Neon databases, detailing the necessary steps and commands to execute the import efficiently.

## Source

- [Import data from CSV HTML](https://neon.com/docs/import/import-from-csv): The original HTML version of this documentation

This topic shows how to import data into a Neon database table from a CSV file using a simple example.

The instructions require a working installation of [psql](https://www.postgresql.org/download/). The `psql` client is the native command-line client for Postgres. It provides an interactive session for sending commands to Postgres. For installation instructions, see [How to install psql](https://neon.com/docs/connect/query-with-psql-editor#how-to-install-psql).

The following example uses the ready-to-use `neondb` database that is created with your Neon project, a table named `customer`, and a data file named `customer.csv`. Data is loaded from the `customer.csv` file into the `customer` table.

## Connect to your database

Connect to the `neondb` database using `psql`. For example:

```bash
psql ""
```

You can find your connection string on your Neon Project Dashboard. Click on the **Connect** button. Use the drop-down menu to copy a full `psql` connection command.

**Note**: For more information about connecting to Neon with `psql`, see [Connect with psql](https://neon.com/docs/connect/query-with-psql-editor).

## Create the target table

Create the `customer` table. The table you are importing to must exist in your database, and its columns must match the columns in your CSV file.

```sql
CREATE TABLE customer (
  id SERIAL,
  first_name VARCHAR(50),
  last_name VARCHAR(50),
  email VARCHAR(255),
  PRIMARY KEY (id)
)
```

**Tip**: You can also create tables using the **SQL Editor** in the Neon Console. See [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor).

## Prepare the CSV file

Prepare a `customer.csv` file with the following data. Note that the header names the same columns, in the same order, as the table you created in the previous step.

```text
Id,First Name,Last Name,Email
1,Casey,Smith,casey.smith@example.com
2,Sally,Jones,sally.jones@example.com
```

## Load the data

From your `psql` prompt, load the data from the `customer.csv` file using the `\copy` option.

```bash
\copy customer FROM '/path/to/customer.csv' DELIMITER ',' CSV HEADER
```

If the command runs successfully, it returns the number of records copied to the database:

```bash
COPY 2
```

For more information about the `\copy` option, refer to the [psql reference](https://www.postgresql.org/docs/current/app-psql.html), in the _PostgreSQL Documentation_.

---

# Source: https://neon.com/llms/import-import-sample-data.txt

# Postgres sample data

> The document outlines the process for importing sample data into a Neon PostgreSQL database, detailing steps to download and load predefined datasets to facilitate testing and development within the Neon environment.

## Source

- [Postgres sample data HTML](https://neon.com/docs/import/import-sample-data): The original HTML version of this documentation

This guide describes how to download and install sample data for use with Neon.

## Prerequisites

- [wget](https://www.gnu.org/software/wget/) for downloading datasets, unless otherwise instructed. If your system does not support `wget`, you can paste the source file address in your browser's address bar.
- A `psql` client for connecting to your Neon database and loading data. This client is included with a standalone PostgreSQL installation. See [PostgreSQL Downloads](https://www.postgresql.org/download/). - A `pg_restore` client if you are loading the [employees](https://neon.com/docs/import/import-sample-data#employees-database) or [postgres_air](https://neon.com/docs/import/import-sample-data#postgres-air-database) database. The `pg_restore` client is included with a standalone PostgreSQL installation. See [PostgreSQL Downloads](https://www.postgresql.org/download/). - A Neon database connection string. After creating a database, you can find the connection details by clicking the **Connect** button on your **Project Dashboard**. In the instructions that follow, replace `postgresql://[user]:[password]@[neon_hostname]/[dbname]` with your connection string. - A Neon [paid plan](https://neon.com/docs/introduction/plans) if you intend to install a dataset larger than 0.5 GB. - Instructions for each dataset require that you create a database. You can do so from a client such as `psql` or from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). **Note**: You can also load sample data using the Neon CLI. See [Load sample data with the Neon CLI](https://neon.com/docs/import/import-sample-data#load-sample-data-with-the-neon-cli). ## Sample data Sample datasets are listed in order of the smallest to largest installed size. Please be aware that the Neon Free plan has a storage limit of 500 MB per branch. Datasets larger than 500 MB cannot be loaded on the Free plan. | Name | Tables | Records | Source file size | Installed size | | ----------------------------------------------------------- | ------ | -------- | ---------------- | -------------- | | [Periodic table data](https://neon.com/docs/import/import-sample-data#periodic-table-data) | 1 | 118 | 17 KB | 7.2 MB | | [World Happiness Index](https://neon.com/docs/import/import-sample-data#world-happiness-index) | 1 | 156 | 9.4 KB | 7.2 MB | | [Titanic passenger data](https://neon.com/docs/import/import-sample-data#titanic-passenger-data) | 1 | 1309 | 220 KB | 7.5 MB | | [Netflix data](https://neon.com/docs/import/import-sample-data#netflix-data) | 1 | 8807 | 3.2 MB | 11 MB | | [Pagila database](https://neon.com/docs/import/import-sample-data#pagila-database) | 33 | 62322 | 3 MB | 15 MB | | [Chinook database](https://neon.com/docs/import/import-sample-data#chinook-database) | 11 | 77929 | 1.8 MB | 17 MB | | [Lego database](https://neon.com/docs/import/import-sample-data#lego-database) | 8 | 633250 | 13 MB | 42 MB | | [Employees database](https://neon.com/docs/import/import-sample-data#employees-database) | 6 | 3919015 | 34 MB | 333 MB | | [Wikipedia vector embeddings](https://neon.com/docs/import/import-sample-data#wikipedia-vector-embeddings) | 1 | 25000 | 1.7 GB | 850 MB | | [Postgres air](https://neon.com/docs/import/import-sample-data#postgres-air-database) | 10 | 67228600 | 1.2 GB | 6.7 GB | **Note**: Installed size is measured using the query: `SELECT pg_size_pretty(pg_database_size('your_database_name'))`. The reported size for small datasets may appear larger than expected due to inherent Postgres storage overhead. ### Periodic table data A table containing data about the periodic table of elements. 1. Create a `periodic_table` database: ```sql CREATE DATABASE periodic_table; ``` 2. 
Download the source file:

```bash
wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/periodic_table.sql
```

3. Navigate to the directory where you downloaded the source file, and run the following command:

```bash
psql -d "postgresql://[user]:[password]@[neon_hostname]/periodic_table" -f periodic_table.sql
```

4. Connect to the `periodic_table` database:

```bash
psql postgresql://[user]:[password]@[neon_hostname]/periodic_table
```

5. Look up the element with the Atomic Number 10:

```sql
SELECT * FROM periodic_table WHERE "AtomicNumber" = 10;
```

- Source: [https://github.com/andrejewski/periodic-table](https://github.com/andrejewski/periodic-table)
- License: [ISC License](https://github.com/andrejewski/periodic-table/blob/master/LICENSE)
- `Copyright (c) 2017, Chris Andrejewski `

### World Happiness Index

A dataset with multiple indicators for evaluating the happiness of countries of the world.

1. Create a `world_happiness` database:

```sql
CREATE DATABASE world_happiness;
```

2. Download the source file:

```bash
wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/happiness_index.sql
```

3. Navigate to the directory where you downloaded the source file, and run the following command:

```bash
psql -d "postgresql://[user]:[password]@[neon_hostname]/world_happiness" -f happiness_index.sql
```

4. Connect to the `world_happiness` database:

```bash
psql postgresql://[user]:[password]@[neon_hostname]/world_happiness
```

5. Find the countries where the happiness score is above average but the GDP per capita is below average:

```sql
SELECT country_or_region, score, gdp_per_capita
FROM "2019"
WHERE score > (SELECT AVG(score) FROM "2019")
AND gdp_per_capita < (SELECT AVG(gdp_per_capita) FROM "2019")
ORDER BY score DESC;
```

- Source: [https://www.kaggle.com/datasets/unsdsn/world-happiness](https://www.kaggle.com/datasets/unsdsn/world-happiness)
- License: [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)

### Titanic passenger data

A dataset containing information on the passengers aboard the RMS Titanic, which sank on its maiden voyage in 1912.

1. Create a `titanic` database:

```sql
CREATE DATABASE titanic;
```

2. Download the source file:

```bash
wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/titanic.sql
```

3. Navigate to the directory where you downloaded the source file, and run the following command:

```bash
psql -d "postgresql://[user]:[password]@[neon_hostname]/titanic" -f titanic.sql
```

4. Connect to the `titanic` database:

```bash
psql postgresql://[user]:[password]@[neon_hostname]/titanic
```

5. Query passengers with the most expensive fares:

```sql
SELECT name, fare FROM passenger ORDER BY fare DESC LIMIT 10;
```

- Source: [https://www.kaggle.com/datasets/ibrahimelsayed182/titanic-dataset](https://www.kaggle.com/datasets/ibrahimelsayed182/titanic-dataset)
- License: [Unknown](https://www.kaggle.com/datasets/vinicius150987/titanic3)

### Netflix data

A dataset containing information about movies and TV shows on Netflix.

1. Create a `netflix` database:

```sql
CREATE DATABASE netflix;
```

2. Download the source file:

```bash
wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/netflix.sql
```

3. Navigate to the directory where you downloaded the source file, and run the following command:

```bash
psql -d "postgresql://[user]:[password]@[neon_hostname]/netflix" -f netflix.sql
```

4.
Connect to the `netflix` database: ```bash psql postgresql://[user]:[password]@[neon_hostname]/netflix ``` 5. Find the directors with the most movies in the database: ```sql SELECT director, COUNT(*) AS "Number of Movies" FROM netflix_shows WHERE type = 'Movie' GROUP BY director ORDER BY "Number of Movies" DESC LIMIT 5; ``` - Source: [https://www.kaggle.com/datasets/shivamb/netflix-shows](https://www.kaggle.com/datasets/shivamb/netflix-shows) - License: [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/) ### Pagila database Sample data for a fictional DVD rental store. Pagila includes tables for films, actors, film categories, stores, customers, payments, and more. 1. Create a `pagila` database: ```sql CREATE DATABASE pagila; ``` 2. Download the source file: ```bash wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/pagila.sql ``` 3. Navigate to the directory where you downloaded the source file, and run the following command: ```bash psql -d "postgresql://[user]:[password]@[neon_hostname]/pagila" -f pagila.sql ``` 4. Connect to the `pagila` database: ```bash psql postgresql://[user]:[password]@[neon_hostname]/pagila ``` 5. Find the top 10 most popular film categories based on rental frequency: ```sql SELECT c.name AS category_name, COUNT(r.rental_id) AS rental_count FROM category c JOIN film_category fc ON c.category_id = fc.category_id JOIN inventory i ON fc.film_id = i.film_id JOIN rental r ON i.inventory_id = r.inventory_id GROUP BY c.name ORDER BY rental_count DESC LIMIT 10; ``` - Source: [https://github.com/devrimgunduz/pagila](https://github.com/devrimgunduz/pagila) - License: [LICENSE.txt](https://github.com/devrimgunduz/pagila/blob/master/LICENSE.txt) - `Copyright (c) Devrim Gündüz ` ### Chinook database A sample database for a digital media store, including tables for artists, albums, media tracks, invoices, customers, and more. 1. Create a `chinook` database: ```sql CREATE DATABASE chinook; ``` 2. Download the source file: ```bash wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/chinook.sql ``` 3. Navigate to the directory where you downloaded the source file, and run the following command: ```bash psql -d "postgresql://[user]:[password]@[neon_hostname]/chinook" -f chinook.sql ``` 4. Connect to the `chinook` database: ```bash psql postgresql://[user]:[password]@[neon_hostname]/chinook ``` 5. Find out the most sold item by track title: ```sql SELECT T."Name" AS "Track Title", SUM(IL."Quantity") AS "Total Sold" FROM "Track" T JOIN "InvoiceLine" IL ON T."TrackId" = IL."TrackId" GROUP BY T."Name" ORDER BY "Total Sold" DESC LIMIT 1; ``` - Source: [https://github.com/lerocha/chinook-database](https://github.com/lerocha/chinook-database) - License: [LICENSE.md](https://github.com/lerocha/chinook-database/blob/master/LICENSE.md) - `Copyright (c) 2008-2017 Luis Rocha` ### Lego database A dataset containing information about various LEGO sets, their themes, parts, colors, and other associated data. 1. Create a `lego` database: ```sql CREATE DATABASE lego; ``` 2. Download the source file: ```bash wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/lego.sql ``` 3. Navigate to the directory where you downloaded the source file, and run the following command: ```bash psql -d "postgresql://[user]:[password]@[neon_hostname]/lego" -f lego.sql ``` 4. Connect to the `lego` database: ```bash psql postgresql://[user]:[password]@[neon_hostname]/lego ``` 5. 
Find the top 5 LEGO themes by the number of sets: ```sql SELECT lt.name AS theme_name, COUNT(ls.set_num) AS number_of_sets FROM lego_themes lt JOIN lego_sets ls ON lt.id = ls.theme_id GROUP BY lt.name ORDER BY number_of_sets DESC LIMIT 5; ``` - Source: [https://www.kaggle.com/datasets/rtatman/lego-database](https://www.kaggle.com/datasets/rtatman/lego-database) - License: [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/) ### Employees database A dataset containing details about employees, their departments, salaries, and more. 1. Create the database and schema: ```sql CREATE DATABASE employees; \c employees CREATE SCHEMA employees; ``` 2. Download the source file: ```bash wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/employees.sql.gz ``` 3. Navigate to the directory where you downloaded the source file, and run the following command: ```bash pg_restore -d postgresql://[user]:[password]@[neon_hostname]/employees -Fc employees.sql.gz -c -v --no-owner --no-privileges ``` Database objects are created in the `employees` schema rather than the `public` schema. 4. Connect to the `employees` database: ```bash psql postgresql://[user]:[password]@[neon_hostname]/employees ``` 5. Find the top 5 departments with the highest average salary: ```sql SELECT d.dept_name, AVG(s.amount) AS average_salary FROM employees.salary s JOIN employees.department_employee de ON s.employee_id = de.employee_id JOIN employees.department d ON de.department_id = d.id WHERE s.to_date > CURRENT_DATE AND de.to_date > CURRENT_DATE GROUP BY d.dept_name ORDER BY average_salary DESC LIMIT 5; ``` - Source: The initial dataset was created by Fusheng Wang and Carlo Zaniolo from Siemens Corporate Research. Designing the relational schema was undertaken by Giuseppe Maxia while Patrick Crews was responsible for transforming the data into a format compatible with MySQL. Their work can be accessed here: [https://github.com/datacharmer/test_db](https://github.com/datacharmer/test_db). Subsequently, this information was adapted to a format suitable for PostgreSQL: [https://github.com/h8/employees-database](https://github.com/h8/employees-database). The data was generated, and there are inconsistencies. - License: This work is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported License. To view a copy of this license, visit [http://creativecommons.org/licenses/by-sa/3.0/](http://creativecommons.org/licenses/by-sa/3.0/) or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA. ### Wikipedia vector embeddings An OpenAI example dataset containing pre-computed vector embeddings for 25000 Wikipedia articles. It is intended for use with the `pgvector` Postgres extension, which you must install first to create a table with `vector` type columns. For a Jupyter Notebook that uses this dataset with Neon, refer to the following GitHub repository: [neon-vector-search-openai-notebooks](https://github.com/neondatabase/neon-vector-search-openai-notebooks) 1. Download the zip file (~700MB): ```bash wget https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip ``` 2. Navigate to the directory where you downloaded the zip file, and run the following command to extract the source file: ```bash unzip vector_database_wikipedia_articles_embedded.zip ``` 3. Create a `wikipedia` database: ```sql CREATE DATABASE wikipedia; ``` 4. 
4. Connect to the `wikipedia` database:

```bash
psql postgresql://[user]:[password]@[neon_hostname]/wikipedia
```

5. Install the `pgvector` extension:

```sql
CREATE EXTENSION vector;
```

6. Create the following table in your database:

```sql
CREATE TABLE IF NOT EXISTS public.articles (
    id INTEGER NOT NULL PRIMARY KEY,
    url TEXT,
    title TEXT,
    content TEXT,
    title_vector vector(1536),
    content_vector vector(1536),
    vector_id INTEGER
);
```

7. Create vector search indexes:

```sql
CREATE INDEX ON public.articles USING ivfflat (content_vector) WITH (lists = 1000);
CREATE INDEX ON public.articles USING ivfflat (title_vector) WITH (lists = 1000);
```

8. Navigate to the directory where you extracted the source file, and run the following command:

```bash
psql -d "postgresql://[user]:[password]@[neon_hostname]/wikipedia" -c "\COPY public.articles (id, url, title, content, title_vector, content_vector, vector_id) FROM 'vector_database_wikipedia_articles_embedded.csv' WITH (FORMAT CSV, HEADER true, DELIMITER ',');"
```

**Note**: If you encounter a memory error related to the `maintenance_work_mem` setting, refer to [Parameter settings that differ by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size) for how to increase this setting.

- Source: [OpenAI](https://github.com/openai/openai-cookbook/tree/main/examples/vector_databases)
- License: [MIT License](https://github.com/openai/openai-cookbook/blob/main/LICENSE)

### Postgres air database

An airport database containing information about airports, aircraft, bookings, passengers, and more.

1. Download the file (1.3 GB) from [Google Drive](https://drive.google.com/drive/folders/13F7M80Kf_somnjb-mTYAnh1hW1Y_g4kJ)

2. Create a `postgres_air` database:

```sql
CREATE DATABASE postgres_air;
```

3. Navigate to the directory where you downloaded the source file, and run the following command:

```bash
pg_restore -d postgresql://[user]:[password]@[neon_hostname]/postgres_air -Fc postgres_air_2023.backup -c -v --no-owner --no-privileges
```

Database objects are created in a `postgres_air` schema rather than the `public` schema.

4. Connect to the `postgres_air` database:

```bash
psql postgresql://[user]:[password]@[neon_hostname]/postgres_air
```

5. Find the top 10 aircraft types by number of flights:

```sql
SELECT ac.model, COUNT(f.flight_id) AS number_of_flights
FROM postgres_air.aircraft ac
JOIN postgres_air.flight f ON ac.code = f.aircraft_code
GROUP BY ac.model
ORDER BY number_of_flights DESC
LIMIT 10;
```

- Source: [https://github.com/hettie-d/postgres_air](https://github.com/hettie-d/postgres_air)
- License: [BSD 3-Clause License](https://github.com/hettie-d/postgres_air/blob/main/LICENSE) - `Copyright (c) 2020, hettie-d All rights reserved.`

## Load sample data with the Neon CLI

You can load data with the Neon CLI by passing the `--psql` option, which calls the `psql` command line utility.

The Neon CLI and `psql` must be installed on your system. For installation instructions, see:

- [Neon CLI — Install and connect](https://neon.com/docs/reference/cli-install)
- [PostgreSQL Downloads](https://www.postgresql.org/download/) for `psql`

If you have multiple Neon projects or branches, we recommend setting your Neon CLI project and branch context so that you don't have to specify them explicitly when running a Neon CLI command. See [Neon CLI commands — set-context](https://neon.com/docs/reference/cli-set-context).
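For example, a minimal sketch of setting the project context first; the project ID `cool-darkness-123456` is a placeholder, so substitute your own:

```bash
# Set the Neon CLI context so subsequent commands target this project
neon set-context --project-id cool-darkness-123456
```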
To load sample data:

1. Download one of the data files listed above. For example:

```bash
wget https://raw.githubusercontent.com/neondatabase/postgres-sample-dbs/main/periodic_table.sql
```

Alternatively, supply your own data file.

2. Load the data using one of the following Neon CLI commands ([projects](https://neon.com/docs/reference/cli-projects), [branches](https://neon.com/docs/reference/cli-branches), or [connection-string](https://neon.com/docs/reference/cli-connection-string)):

- Create a new Neon project, connect to it with `psql`, and run the `.sql` file.

```bash
neon projects create --psql -- -f periodic_table.sql
```

- Create a branch, connect to it with `psql`, and run the `.sql` file.

```bash
neon branches create --psql -- -f periodic_table.sql
```

- Get a connection string, connect with `psql`, and run the `.sql` file.

```bash
neon connection-string --psql -- -f periodic_table.sql
```

---

# Source: https://neon.com/llms/import-migrate-aws-dms.txt

# Migrate with AWS Database Migration Service (DMS)

> The document outlines the process for migrating databases to Neon using AWS Database Migration Service (DMS), detailing the necessary steps and configurations specific to Neon's platform.

## Source

- [Migrate with AWS Database Migration Service (DMS) HTML](https://neon.com/docs/import/migrate-aws-dms): The original HTML version of this documentation

This guide outlines the steps for using the AWS Database Migration Service (DMS) to migrate data to Neon from another hosted database server. AWS DMS supports a variety of database migration sources including PostgreSQL, MySQL, Oracle, and Microsoft SQL Server. For a complete list of data migration sources supported by AWS DMS, see [Source endpoints for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html#CHAP_Introduction.Sources.DataMigration).

For additional information about particular steps in the migration process, refer to the [official AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html). If you are not familiar with AWS DMS, we recommend stepping through the [Getting started with AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) tutorial.

If you encounter problems with AWS DMS that are not related to defining Neon as a data migration target endpoint, please contact [AWS Customer Support](https://aws.amazon.com/contact-us/).

This guide uses the [AWS DMS sample Postgres database](https://github.com/aws-samples/aws-database-migration-samples/blob/master/PostgreSQL/sampledb/v1/README.md) for which the schema name is `dms_sample`.

## Before you begin

Complete the following steps before you begin:

- Create a [replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Creating.html) in AWS DMS.
- Configure a [source database endpoint](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html) in AWS DMS.
- Set up a Neon project and a target database. See [Create a project](https://neon.com/docs/manage/projects#create-a-project), and [Create a database](https://neon.com/docs/manage/databases#create-a-database) for instructions.
- If you are migrating from a database other than Postgres, use the [Schema Conversion Tool](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.SCT.html) or [DMS Schema Conversion](https://docs.aws.amazon.com/dms/latest/userguide/getting-started.html) to convert and export the schema from the source database to the target database.
  Perform this step after creating the target endpoint for the Neon database but before the data migration. If migrating from a Postgres database, schema conversion is not required.

## Create a target endpoint for your Neon database

1. In the AWS Console, select **Database Migration Service**.
2. Select **Endpoints** from the sidebar.
3. Click **Create endpoint**.
4. Select **Target endpoint** as the **Endpoint type**.
5. Provide an **Endpoint identifier** label for your new target endpoint. In this guide, we use `neon` as the identifier.
6. In the **Target engine** drop-down menu, select `PostgreSQL`.
7. Under **Access to endpoint database**, select **Provide access information manually** and enter the information outlined below. You can obtain the connection details from your Neon connection string, which you can find by clicking the **Connect** button on your Neon **Project Dashboard**. Your connection string will look similar to this: `postgresql://daniel:AbC123dEf@ep-curly-term-54009904.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require`.

   - **Server name**: Specify your Neon hostname, which is this portion of your connection string: `ep-curly-term-54009904.us-east-2.aws.neon.tech`
   - **Port**: `5432`
   - **User name**: Specify the Neon user.
   - **Password**: Specify the password in the following format: `endpoint=[endpoint_id]$[password]`, which looks similar to this when defined:

```text
endpoint=ep-curly-term-54009904$AbC123dEf
```

     You can obtain the `endpoint_id` and password from your Neon connection string. The `endpoint_id` appears similar to this: `ep-curly-term-54009904`. For information about why this password format is required, see [Connection errors](https://neon.com/docs/connect/connection-errors#the-endpoint-id-is-not-specified). AWS DMS requires the [Option D workaround](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field).
   - **Secure Sockets Layer (SSL) mode**: Select `require`.
   - **Database name**: The name of your Neon database. In this example, we use a database named `neondb`.

8. Under **Test endpoint connection (optional)**, click **Run test** to test the connection. Running the test creates the endpoint and attempts to connect to it. If the connection fails, you can edit the endpoint definition and test the connection again.
9. Select **Create endpoint**.

## Create a database migration task

A database migration task defines the data to be migrated from the source database to the target database.

1. In AWS DMS, select **Database migration tasks** from the sidebar.
2. Select **Create task** to open a **Create database migration task** page.
3. Enter a **Task identifier** to identify the replication task. In this example, we name the identifier `dms-task`.
4. Select the **Replication instance**. In this guide, the replication instance is named `dms_instance`.
5. Select the **Source database endpoint**. In this guide, the source database endpoint is named `dms_postgresql`.
6. Select the **Target database endpoint**. In this guide, the target database endpoint identifier is `neon`.
7. Select a **Migration type**. In this example, we use the default `Migrate existing data` type.

### Task settings

Specify the following task settings:

1. For **Editing mode**, select **Wizard**.
2. For **Target table preparation mode**, select **Do nothing**. This option means that AWS DMS only creates tables in the target database if they do not exist.
3. For the **LOB column** setting, select **Don't include LOB columns**. Neon does not support LOB columns.
4. Optionally, under **Validation**, check **Turn on** to compare the data after the load operation finishes to ensure that data was migrated accurately. For more information about validation, refer to the [AWS data validation documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html). You can also check **Enable CloudWatch logs** and set **Target Load** to **Debug** or **Detailed debug** to log information during the migration process. This data is useful for troubleshooting migration issues.

### Table mappings

Configure the table mapping:

1. For **Editing mode**, select **Wizard**.
2. Under **Selection rules**, click **Add new selection rule**.
3. For **Schema**, select **Enter a schema**.
4. For **Source name**, enter the name of your database schema. In this guide, `dms_sample` is specified as the schema name, which is the schema for the sample database. The `dms_sample` schema will be created in your Neon database, and all database objects will be created in the schema.
5. For the **Source table name**, leave the `%` wildcard character to load all tables in the schema.
6. For **Action**, select **Include** to migrate the objects specified by your selection rule.

### Migration task startup configuration

1. Under **Migration task startup configuration**, select **Automatically on create**.
2. Click **Start migration task** at the bottom of the page.

The data migration task is created, and the data migration operation is initiated. You can monitor operation progress on the AWS DMS **Database migration tasks** page.

## Verify the migration in Neon

To verify that data was migrated to your Neon database:

1. In the Neon Console, select your Neon project.
2. Select **Tables** from the sidebar.
3. Select the **Branch**, **Database**, and **Schema** where you imported the data.
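As an optional command-line spot check, here is a minimal sketch that counts rows in one of the sample tables; it assumes the `dms_sample` schema was migrated into the `neondb` database, and the connection details are placeholders:

```bash
# Count rows in one migrated table to spot-check the result
psql "postgresql://[user]:[password]@[neon_hostname]/neondb" \
  -c "SELECT COUNT(*) FROM dms_sample.player;"
```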
## Migration notes

This section contains notes from our experience using AWS DMS to migrate data to Neon from an RDS Postgres database.

- When testing migration steps, the [Getting started with AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) tutorial was our primary reference. As recommended in the tutorial, we created a VPC and created all resources within the VPC.
- We created all resources in the same region (`us-east-2a`).
- We created an RDS PostgreSQL 15 database called `dms_sample` as the source database. The Neon target database was also Postgres 15.
- We populated the RDS PostgreSQL source database using the [AWS DMS sample Postgres database](https://github.com/aws-samples/aws-database-migration-samples/blob/master/PostgreSQL/sampledb/v1/README.md). To do this, we created an EC2 instance to connect to the database following the steps in this topic: [Create an Amazon EC2 Client](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.Prerequisites.html#CHAP_GettingStarted.Prerequisites.client).
- The source database was populated using this `psql` command:

```bash
psql -h dms-postgresql.abc123def456hgi.us-east-2.rds.amazonaws.com -p 5432 -U postgres -d dms_sample -a -f ~/aws-database-migration-samples/PostgreSQL/sampledb/v1/postgresql.sql
```

- To verify that data was loaded in the source database, we connected using the following `psql` command and ran a `SELECT` query:

```bash
psql \
  --host=dms-postgresql.abc123def456hgi.us-east-2.rds.amazonaws.com \
  --port=5432 \
  --username=postgres \
  --password \
  --dbname=dms_sample

dms_sample=> SELECT * from dms_sample.player LIMIT 100;
```

- When creating the source database endpoint for the RDS Postgres 15 database, we set **Secure Sockets Layer (SSL) mode** to `require`. Without this setting, we encountered the following error:

```text
Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to connect Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: 101 Message: FATAL: no pg_hba.conf entry for host "10.0.1.135", user "postgres", database "dms_sample", no encryption
```

- When creating the target database endpoint for the Neon database, we encountered the following error when testing the connection:

```text
Endpoint failed: Application-Status: 1020912, Application-Message: Cannot connect to ODBC provider Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: 101 Message: timeout expired
```

  The replication instance, which was created in the private subnet where the source database resided, could not access the Neon database, which resides outside of the VPC. To allow the replication instance to access the Neon database, we added a NAT Gateway to the public subnet, allocated an Elastic IP address, and modified the **Route Table** associated with the private subnet to add a route via the NAT Gateway.

---

# Source: https://neon.com/llms/import-migrate-from-azure-native.txt

# Migrate from Neon Azure Native Integration

> This document guides Neon users through the process of migrating their databases from Azure Native Integration to Neon, detailing steps for data export, configuration, and import procedures.

## Source

- [Migrate from Neon Azure Native Integration HTML](https://neon.com/docs/import/migrate-from-azure-native): The original HTML version of this documentation

**Important**: The Neon Azure Native Integration is deprecated and reaches end of life on **January 31, 2026**. After this date, Azure-managed organizations will no longer be available. [Transfer your projects to a Neon-managed organization](https://neon.com/docs/import/migrate-from-azure-native) to continue using Neon.

This guide describes how to transfer your projects to your Neon-managed organization to continue using Neon.

## Getting started

Before you begin your migration, be aware of the following:

- You may have more than one organization in the Neon Console. [Review your organizations](https://neon.com/docs/import/migrate-from-azure-native#identify-your-organizations) to identify which one is Azure-managed and which is Neon-managed.
- Admins from your Azure-managed organization are automatically added to the Neon-managed organization. Members and project collaborators must be re-added manually after migration.
- If you are on a paid Azure plan, your Neon-managed organization is on the Free plan.
  You must upgrade to a paid plan (Scale recommended for Azure Scale and Business customers) or create a new paid organization to maintain your current features and avoid service limitations.
- Application connection strings remain the same after transfer because the project structure does not change.
- You can [rename an organization](https://neon.com/docs/manage/orgs-manage#rename-an-organization) at any time.

To transfer your projects to a Neon-managed organization:

## Identify your organizations

1. Sign in to the [Neon Console](https://console.neon.tech).
2. Open the organization dropdown to view your available organizations.
3. Determine which organization is Azure-managed and which is Neon-managed.
   - The Azure-managed organization will have the same name as the resource shown in the Azure Portal.
   - In **Organization Settings** → **Delete**, the Azure-managed organization includes a note that says: "This organization is managed by Azure and can be deleted only from the Azure Portal." A Neon-managed organization will not have this note.

> From the Neon Console, you'll be migrating from the Azure-managed organization to the Neon-managed organization.

## Choose your destination organization

If you have multiple admins, coordinate to decide which Neon-managed organization will be your shared destination:

- **Free plan** users can use their existing Neon-managed organization, or upgrade to a paid plan to create an additional organization (you can only have one free organization).
- **Paid plan** users can upgrade their existing Neon-managed organization, or create a new paid organization.

## Upgrade your Neon plan (paid users only)

If you are on a paid Azure plan, you can either upgrade your existing Neon-managed organization or create a new paid organization:

**To upgrade your existing Neon-managed organization:**

1. Switch to your Neon-managed organization.
2. Go to **Billing** and select **Change plan**.
3. Select **Scale** (recommended for Azure Scale and Business customers) or **Launch**.
4. Complete the upgrade process.

**To create a new paid organization:**

1. From the organization menu, select **Create new organization**.
2. Select **Scale** or **Launch** as your plan.
3. Complete the organization setup.

This ensures your projects retain all paid features after transfer.

## Transfer your projects

You can transfer all projects at once or individually. From your Azure-managed organization in the Neon Console:

1. Go to **Organization** → **Settings** → **Transfer projects**.
2. Click **Select all**, then click **Next**.
3. Choose your Neon-managed organization as the destination.
4. Confirm the transfer.

Projects appear in your destination organization immediately after the transfer completes. For more details about project transfers, see [Transfer projects](https://neon.com/docs/manage/orgs-project-transfer).

## Update your organization configuration

After the transfer:

- Re-add any additional admins, members, or project collaborators who need access. See [Manage organization members](https://neon.com/docs/manage/orgs-manage#add-a-user-to-an-organization) for instructions.
- Verify that all projects appear in your Neon-managed organization.
- For [API keys](https://neon.com/docs/manage/api-keys), project-scoped and personal API keys remain the same after transfer. Organization API keys are tied to your Azure-managed organization, so if you use organization API keys, create new keys in your Neon-managed organization and update them in your applications, scripts, and integrations.
- Update any integrations or tooling that rely on organization-level identifiers.

## Delete your Azure-managed resource

**Important**: Only delete your Azure resource after confirming all projects have been transferred. Deleting the Azure resource before transferring projects will permanently delete all projects and data in your Azure-managed organization.

1. Sign in to the [Azure Portal](https://portal.azure.com).
2. Select your Neon resource created through the Azure Marketplace.
3. Confirm that no projects remain in your Azure-managed organization.
4. On the **Overview** page, select **Delete**.
5. Confirm the deletion by entering the resource's name.
6. Choose the reason for deleting the resource.
7. Select **Delete** to finalize.

Deleting the resource stops all Azure Marketplace billing and completes your transition to a Neon-managed organization.

## After you migrate

Your projects are now managed directly in the Neon Console. All connection strings and project configurations remain the same. You can now manage billing, upgrades, and support directly through Neon.

If you need help, contact Neon Support through the Neon Console or visit the [support documentation](https://neon.com/docs/introduction/support).

---

# Source: https://neon.com/llms/import-migrate-from-azure-postgres.txt

# Migrate from Azure PostgreSQL to Neon

> The document outlines the process for migrating databases from Azure PostgreSQL to Neon, detailing steps for exporting data from Azure and importing it into Neon using the pg_dump and pg_restore tools.

## Source

- [Migrate from Azure PostgreSQL to Neon HTML](https://neon.com/docs/import/migrate-from-azure-postgres): The original HTML version of this documentation

This guide describes how to migrate your database from Azure Database for PostgreSQL to Neon, using logical replication. Logical replication for Postgres transfers data from a source Postgres database to another as a stream of tuples (records) or SQL statements. This allows for minimal downtime during the migration process, since records are streamed continuously rather than copied all at once.

## Prerequisites

- An Azure Database for PostgreSQL instance containing the data you want to migrate.
- A Neon project to move the data to. For detailed information on creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). Make sure to create a project with the same Postgres version as your Azure PostgreSQL deployment.
- Read the [important notices about logical replication in Neon](https://neon.com/docs/guides/logical-replication-neon#important-notices) before you begin.
- Review our [logical replication tips](https://neon.com/docs/guides/logical-replication-tips), based on real-world customer data migration experiences.

## Prepare your Azure PostgreSQL database

This section describes how to prepare your Azure PostgreSQL database (the publisher) for replicating data to your destination Neon database (the subscriber).

To illustrate the migration workflow, we set up the [AdventureWorks sample database](https://wiki.postgresql.org/wiki/Sample_Databases) on an Azure Database for PostgreSQL deployment. This database contains data corresponding to a fictional bicycle parts company, organized across 5 schemas and almost 70 tables.

### Enable logical replication in Azure PostgreSQL

1. Navigate to your Azure Database for PostgreSQL instance in the Azure portal.
2. From the left sidebar, select **Server parameters** under the **Settings** section.
3. Search for the `wal_level` parameter and set its value to `LOGICAL`.
4. Click **Save** to apply the changes.

**Note**: Changing the `wal_level` parameter on Azure requires a server restart. This may cause a brief interruption to your database service.
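Once the server restarts, you may want to confirm the setting took effect. A minimal check with `psql`, using placeholder connection details:

```bash
# Should return "logical" once the change is applied
psql "postgresql://<user>:<password>@<azure_hostname>:5432/<database_name>" -c "SHOW wal_level;"
```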
### Create a PostgreSQL role for replication

It is recommended that you create a dedicated Postgres role for replicating data. Connect to your Azure PostgreSQL database using a tool like [psql](https://www.postgresql.org/docs/current/app-psql.html) or [Azure Data Studio](https://learn.microsoft.com/en-us/azure-data-studio/?view=sql-server-ver15), then create a new role with `REPLICATION` privileges:

```sql
CREATE ROLE replication_user WITH REPLICATION LOGIN PASSWORD 'your_secure_password';
```

### Grant schema access to your PostgreSQL role

Grant the necessary permissions to your replication role. For example, the following commands grant access to all tables in the `sales` schema to the Postgres role `replication_user`:

```sql
GRANT USAGE ON SCHEMA sales TO replication_user;
GRANT SELECT ON ALL TABLES IN SCHEMA sales TO replication_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA sales GRANT SELECT ON TABLES TO replication_user;
```

Granting `SELECT ON ALL TABLES IN SCHEMA` instead of naming the specific tables avoids having to add privileges later if you add tables to your publication.

If you have data split across multiple schemas, you can run a similar command for each schema, or use a PL/pgSQL function to dynamically grant access to all schemas in the database.

```sql
-- Thanks to this Stackoverflow answer - https://dba.stackexchange.com/a/241266
DO $do$
DECLARE
    sch text;
BEGIN
    FOR sch IN SELECT nspname FROM pg_namespace
    WHERE
        -- Exclude system schemas
        nspname != 'pg_toast'
        AND nspname != 'pg_temp_1'
        AND nspname != 'pg_toast_temp_1'
        AND nspname != 'pg_statistic'
        AND nspname != 'pg_catalog'
        AND nspname != 'information_schema'
    LOOP
        EXECUTE format($$ GRANT USAGE ON SCHEMA %I TO replication_user $$, sch);
        EXECUTE format($$ GRANT SELECT ON ALL TABLES IN SCHEMA %I TO replication_user $$, sch);
        EXECUTE format($$ ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT SELECT ON TABLES TO replication_user $$, sch);
    END LOOP;
END;
$do$;
```

### Create a publication on the source database

Publications are a fundamental part of logical replication in Postgres. They define what will be replicated. The following example commands create a publication named `azure_publication` for one or more tables.

To create a publication for a specific table:

```sql
CREATE PUBLICATION azure_publication FOR TABLE <table_name>;
```

To create a publication for multiple tables, provide a comma-separated list of tables:

```sql
CREATE PUBLICATION azure_publication FOR TABLE <table1>, <table2>;
```

**Note**: Defining specific tables lets you add or remove tables from the publication later, which you cannot do when creating publications with `FOR ALL TABLES`.

For syntax details, see [CREATE PUBLICATION](https://www.postgresql.org/docs/current/sql-createpublication.html), in the PostgreSQL documentation.

### Allow inbound traffic from Neon

You need to allow inbound traffic from Neon Postgres servers so that Neon can connect to your Azure database. To do this, follow these steps:

1. Log into the Azure portal and navigate to your Azure Postgres Server resource.
2. Click on the **Networking** option under the `Settings` section in the sidebar. Navigate to the **Firewall Rules** section under the `Public access` tab.
3. Click on `Add a Firewall Rule`, which generates a modal to add the range of IP addresses from which we want to allow connections. You will need to perform this step for each of the NAT gateway IP addresses associated with your Neon project's region. For each IP address, create a new rule and fill both the `Start IP` and `End IP` fields with the IP address. Neon uses 3 to 6 IP addresses per region for this outbound communication, corresponding to each availability zone in the region. See [NAT Gateway IP addresses](https://neon.com/docs/introduction/regions#nat-gateway-ip-addresses) for Neon's NAT gateway IP addresses.
4. To fetch the database schema using `pg_dump`, you also need to allow inbound traffic from your local machine (or wherever you are running `pg_dump`) so it can connect to your Azure database. Add another firewall rule entry with that IP address as the start and end IP address.
5. Click `Save` at the bottom to make sure all changes are saved.

## Prepare your Neon destination database

This section describes how to prepare your destination Neon PostgreSQL database (the subscriber) to receive replicated data.

You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**. See [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

### Create the Neon database

To keep parity with the Azure PostgreSQL deployment, create a new database with the same name. See [Create a database](https://neon.com/docs/manage/databases#create-a-database) for more information.

For this example, we run the following query to create a new database named `AdventureWorks` in the Neon project.

```sql
CREATE DATABASE "AdventureWorks";
```

### Import the database schema

To ensure that the Neon `AdventureWorks` database has the same schema as the Azure PostgreSQL database, we'll need to import the schema. You can use the `pg_dump` utility to export the schema and then `psql` to import it into Neon.

1. Export the schema from Azure PostgreSQL:

```shell
pg_dump --schema-only --no-owner --no-privileges -h <azure_hostname> -U <user> -d <database_name> > schema.sql
```

2. Import the schema into your Neon database:

```shell
psql "<neon-connection-string>" < schema.sql
```

### Create a subscription

After importing the schema, create a subscription on the Neon database:

1. Use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor), [psql](https://neon.com/docs/connect/query-with-psql-editor), or another SQL client to connect to your Neon database.

2. Create the subscription using the `CREATE SUBSCRIPTION` statement:

```sql
CREATE SUBSCRIPTION neon_subscription
CONNECTION 'host=<azure_hostname> port=5432 dbname=<database_name> user=replication_user password=your_secure_password'
PUBLICATION azure_publication;
```

3. Verify that the subscription was created by running the following query, and confirming that the subscription (`neon_subscription`) is listed:

```sql
SELECT * FROM pg_stat_subscription;
```

## Monitor and verify the replication

To ensure that data is being replicated correctly:
1. Monitor the replication status on Neon by running the following query:

```sql
SELECT * FROM pg_stat_subscription;
```

This query should return an output similar to the following:

```text
 subid |      subname      | pid | leader_pid | relid | received_lsn |      last_msg_send_time       |     last_msg_receipt_time     | latest_end_lsn |        latest_end_time
-------+-------------------+-----+------------+-------+--------------+-------------------------------+-------------------------------+----------------+-------------------------------
 24576 | neon_subscription | 540 |            |       | 1/3D0020A8   | 2024-09-11 11:34:24.841807+00 | 2024-09-11 11:34:24.869991+00 | 1/3D0020A8     | 2024-09-11 11:34:24.841807+00
(1 row)
```

- An active `pid` indicates that the subscription is active and running.
- The `received_lsn` and `latest_end_lsn` columns show the LSN (Log Sequence Number) of the last received data (at Neon) and the last written data (at the Azure source), respectively.
- In this example, they have the same value, which means that all the data has been successfully replicated from Azure to Neon.

2. To verify that the data has been replicated correctly, compare row counts between Azure PostgreSQL and Neon for some key tables. For example, you can run the following query to check the number of rows in the `person.address` table:

```sql
SELECT COUNT(*) FROM person.address;
```

It returns the same output on both databases:

```text
 count
-------
 19614
(1 row)
```

3. Optionally, you can run some queries from your application against the Neon database to verify that it returns the same output as the Azure instance.

## Complete the migration

Once the initial data sync is complete and you've verified that ongoing changes are being replicated:

1. Stop writes to your Azure PostgreSQL database.
2. Wait for any final transactions to be replicated to Neon.
3. Update your application's connection string to point to your Neon database.

This ensures a much shorter downtime for the application, as you only need to wait for the last few transactions to be replicated before switching the application over to the Neon database.

**Note**: Remember to update any Azure-specific configurations or extensions in your application code to be compatible with Neon. For Neon Postgres parameter settings, see [Postgres parameter settings](https://neon.com/docs/reference/compatibility#postgres-parameter-settings). For Postgres extensions supported by Neon, see [Supported Postgres extensions](https://neon.com/docs/extensions/pg-extensions).

## Clean up

After successfully migrating and verifying your data on Neon, you can:

1. Drop the subscription on the Neon database:

```sql
DROP SUBSCRIPTION neon_subscription;
```

2. Remove the publication from the Azure PostgreSQL database:

```sql
DROP PUBLICATION azure_publication;
```

3. Consider backing up your Azure PostgreSQL database before decommissioning it.

## Other migration options

This section discusses migration options other than using logical replication.

- **pg_dump and pg_restore**

  If your database size is not large, you can use the `pg_dump` utility to create a dump file of your database, and then use `pg_restore` to restore the dump file to Neon. Please refer to the [Migrate from Postgres](https://neon.com/docs/import/migrate-from-postgres) guide for more information on this method.

- **Postgres GUI clients**

  Some Postgres clients offer backup and restore capabilities.
  These include [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/backup_and_restore.html) and [phpPgAdmin](https://github.com/phppgadmin/phppgadmin/releases), among others. We have not tested migrations using these clients, but if you are uncomfortable using command-line utilities, they may provide an alternative.

- **Table-level data migration using CSV files**

  Table-level data migration (using CSV files, for example) does not preserve database schemas, constraints, indexes, types, or other database features. You will have to create these separately. Table-level migration is simple but could result in significant downtime depending on the size of your data and the number of tables. For instructions, see [Import data from CSV](https://neon.com/docs/import/import-from-csv).

## Reference

For more information about logical replication and Postgres client utilities, refer to the following topics in the Postgres and Neon documentation:

- [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html)
- [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html)
- [psql](https://www.postgresql.org/docs/current/app-psql.html)
- [Postgres - Logical replication](https://www.postgresql.org/docs/current/logical-replication.html)
- [Neon logical replication guide](https://neon.com/docs/guides/logical-replication-guide)

---

# Source: https://neon.com/llms/import-migrate-from-digital-ocean.txt

# Migrate from Digital Ocean Postgres to Neon

> The document outlines the process for migrating a PostgreSQL database from Digital Ocean to Neon, detailing steps for exporting data, configuring Neon's settings, and importing the database to ensure a seamless transition.

## Source

- [Migrate from Digital Ocean Postgres to Neon HTML](https://neon.com/docs/import/migrate-from-digital-ocean): The original HTML version of this documentation

This guide describes how to migrate a Postgres database from Digital Ocean to Neon using the `pg_dump` and `pg_restore` utilities, which are part of the Postgres client toolset. `pg_dump` works by dumping both the schema and data in a custom format that is compressed and suitable for input into `pg_restore` to rebuild the database.

## Prerequisites

- A Digital Ocean Postgres database containing the data you want to migrate.
- A Neon project to move the data to. For detailed information on creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). Make sure to create a project with the same Postgres version as your Digital Ocean deployment.
- `pg_dump` and `pg_restore` utilities installed on your local machine. These typically come with a Postgres installation. We recommend that you use the `pg_dump` and `pg_restore` programs from the latest version of Postgres, to take advantage of enhancements that might have been made in these programs. To check the version of `pg_dump` or `pg_restore`, use the `-V` option. For example: `pg_dump -V`.
- Review our guide on [Importing data from Postgres](https://neon.com/docs/import/migrate-from-postgres) for more comprehensive information on using `pg_dump` and `pg_restore`.

## Prepare your Digital Ocean database

This section describes how to prepare your Digital Ocean database for exporting data.

To illustrate the migration workflow, we populate the Digital Ocean database with the [LEGO dataset](https://neon.com/docs/import/import-sample-data#lego-database). This database contains information about LEGO sets, parts, and themes.

### Retrieve Digital Ocean connection details
1. Log in to your Digital Ocean account and navigate to the Databases section.
2. Select your Postgres database.
3. In the **Connection Details** section under the **Overview** tab, you'll find the following information:
   - Host
   - Port
   - Database name
   - Username
   - Password (you may need to reset it if you don't have it)

You'll need these details to construct the connection string for `pg_dump`. Alternatively, you can toggle to the `Connection string` option to get the `postgresql://` connection string, which can be used directly with Postgres CLI tools.

## Export data with pg_dump

Now that you have the Digital Ocean connection details, you can export your data using `pg_dump`:

```bash
pg_dump -Fc -v -d postgresql://[username]:[password]@[host]:[port]/[database] -f digitalocean_dump.bak
```

Replace `[username]`, `[password]`, `[host]`, `[port]`, and `[database]` with your Digital Ocean connection details.

This command includes these arguments:

- `-Fc`: Outputs the dump in custom format, which is compressed and suitable for input into `pg_restore`.
- `-v`: Runs `pg_dump` in verbose mode, allowing you to monitor the dump operation.
- `-d`: Specifies the connection string for your Digital Ocean database.
- `-f`: Specifies the output file name.

If the command was successful, you'll see output similar to the following:

```bash
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path =
pg_dump: saving database definition
pg_dump: dumping contents of table "public.lego_colors"
pg_dump: dumping contents of table "public.lego_inventories"
pg_dump: dumping contents of table "public.lego_inventory_parts"
pg_dump: dumping contents of table "public.lego_inventory_sets"
pg_dump: dumping contents of table "public.lego_part_categories"
pg_dump: dumping contents of table "public.lego_parts"
pg_dump: dumping contents of table "public.lego_sets"
pg_dump: dumping contents of table "public.lego_themes"
```

**Important**: Avoid using `pg_dump` over a [pooled connection string](https://neon.com/docs/reference/glossary#pooled-connection-string) (see PgBouncer issues [452](https://github.com/pgbouncer/pgbouncer/issues/452) & [976](https://github.com/pgbouncer/pgbouncer/issues/976) for details). Use an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string) instead.

## Prepare your Neon destination database

This section describes how to prepare your destination Neon Postgres database to receive the imported data.

### Create the Neon database

Each Neon project comes with a default database named `neondb`. To maintain consistency with your Digital Ocean setup, create a new database with the same name as your Digital Ocean database.

1. Connect to your Neon project using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a Postgres client like `psql`.
2. Create a new database. For example, if your Digital Ocean database was named `lego`, run:

```sql
CREATE DATABASE lego;
```

For more information, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).

### Retrieve Neon connection details

1. In the Neon Console, go to your **Project Dashboard**.
2. Click **Connect** to open the **Connect to your database** modal.
3. Copy the connection string.
It will look similar to this:

```
postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

## Restore data to Neon with pg_restore

Now you can restore your data to the Neon database using `pg_restore`:

```bash
pg_restore -d "<neon-connection-string>" -v --no-owner --no-acl digitalocean_dump.bak
```

Replace `<neon-connection-string>` with your Neon connection string.

This command includes these arguments:

- `-d`: Specifies the connection string for your Neon database.
- `-v`: Runs `pg_restore` in verbose mode.
- `--no-owner`: Skips setting the ownership of objects as in the original database.
- `--no-acl`: Skips restoring access privileges for objects as in the original database.

We recommend using the `--no-owner` and `--no-acl` options to skip restoring these settings, as they may not be compatible between Digital Ocean and Neon. After migrating the data, review and configure the appropriate roles and privileges for all objects, as needed.

If the command was successful, you'll see output similar to the following:

```bash
pg_restore: connecting to database for restore
pg_restore: creating SCHEMA "public"
pg_restore: creating TABLE "public.lego_colors"
pg_restore: creating SEQUENCE "public.lego_colors_id_seq"
pg_restore: creating SEQUENCE OWNED BY "public.lego_colors_id_seq"
pg_restore: creating TABLE "public.lego_inventories"
pg_restore: creating SEQUENCE "public.lego_inventories_id_seq"
...
```

## Verify the migration

After the restore process completes, you should verify that your data has been successfully migrated:

1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or `psql`.
2. Run some application queries to check your data. For example, if you're using the `LEGO` database, you can run the following:

```sql
SELECT is_trans AS is_transparent, COUNT(*) FROM lego_colors GROUP BY is_trans;
SELECT * FROM lego_sets ORDER BY num_parts DESC LIMIT 5;
```

3. Compare the results with those from running the same queries on your Digital Ocean database to ensure data integrity.

## Clean up

After successfully migrating and verifying your data on Neon, you can update your application's connection strings to point to your new Neon database. We recommend that you keep your Digital Ocean database dump file (`digitalocean_dump.bak`) as a backup until you've verified that the migration was successful.

## Other migration options

While this guide focuses on using `pg_dump` and `pg_restore`, there are other migration options available:

- **Logical replication**

  For larger databases or scenarios where you need to minimize downtime, you might consider using logical replication. See our guide on [Logical replication](https://neon.com/docs/guides/logical-replication-guide) for more information.

- **CSV export/import**

  For smaller datasets or specific tables, you might consider exporting to CSV from Digital Ocean and then importing to Neon, as shown in the sketch below. See [Import data from CSV](https://neon.com/docs/import/import-from-csv) for more details on this method.
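To make the CSV route concrete, here is a minimal sketch using psql's client-side `\copy`, assuming a table named `lego_sets` that already exists in both databases; all connection details are placeholders:

```bash
# Export one table from Digital Ocean to a local CSV file
psql "postgresql://[username]:[password]@[host]:[port]/[database]" \
  -c "\copy lego_sets TO 'lego_sets.csv' WITH (FORMAT CSV, HEADER)"

# Import the CSV into the matching table in Neon
psql "postgresql://[user]:[password]@[neon_hostname]/[dbname]" \
  -c "\copy lego_sets FROM 'lego_sets.csv' WITH (FORMAT CSV, HEADER)"
```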
## Reference

For more information on the Postgres utilities used in this guide, refer to the following documentation:

- [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html)
- [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html)
- [Migrating data to Neon](https://neon.com/docs/import/migrate-from-postgres)

---

# Source: https://neon.com/llms/import-migrate-from-firebase.txt

# Migrate from Firebase Firestore to Neon Postgres

> The document outlines the process for migrating data from Firebase Firestore to Neon Postgres, detailing steps for exporting data from Firestore and importing it into Neon, ensuring a seamless transition between the two databases.

## Source

- [Migrate from Firebase Firestore to Neon Postgres HTML](https://neon.com/docs/import/migrate-from-firebase): The original HTML version of this documentation

This guide describes how to migrate data from Firebase Firestore to Neon Postgres. We'll use a custom Python script to export data from Firestore to a local file, and then import the data into Neon Postgres. This approach allows us to handle Firestore's document-based structure and convert it into the relational database format suitable for Postgres.

## Prerequisites

- A Firebase project containing the Firestore data you want to migrate.
- A Neon project to move the data to. For detailed information on creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project).
- Python 3.10 or later installed on your local machine. Additionally, add the following packages to your Python virtual environment: `firebase_admin`, which is Google's Python SDK for Firebase, and `psycopg`, which is used to connect to your Neon Postgres database. You can install them using `pip`:

```bash
pip install firebase-admin "psycopg[binary,pool]"
```

## Retrieve Firebase credentials

This section describes how to fetch the credentials to connect to your Firebase Firestore database.

1. Log in to your Firebase Console and navigate to your project.
2. Go to **Project settings** (the gear icon next to "Project Overview" in the left sidebar).
3. Under the **Service Accounts** tab, click **Generate new private key**. This will download a JSON file containing your credentials.
4. Save this JSON file securely on your local machine. We'll use it in our Python script.

For more information, please consult the [Firebase documentation](https://firebase.google.com/docs/admin/setup#initialize_the_sdk_in_non-google_environments).

## Export data from Firestore

In this step, we will use a Python script to export data from Firestore. This script will:

1. Connect to Firestore
2. Retrieve all collections and documents
3. Save the Firestore documents to a format suitable for ingesting into Postgres later

Here's the Python script:

```python
import argparse
import json
import os
from collections import defaultdict

import firebase_admin
from firebase_admin import credentials, firestore


def download_from_firebase(db, output_dir):
    # Create output directory if it doesn't exist
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # Initialize a defaultdict to store documents for each collection
    output: dict[str, list[dict]] = defaultdict(list)

    def _download_collection(collection_ref):
        print(f"Downloading from collection: {collection_ref.id}")

        # Determine the parent path for the current collection
        if collection_ref.parent:
            parent_path = collection_ref.parent.path
        else:
            parent_path = None

        # Iterate through all documents in the collection
        for doc in collection_ref.get():
            # Add document data to the output dictionary
            output[collection_ref.id].append(
                {
                    "id": doc.reference.path,
                    "parent_id": parent_path,
                    "data": doc.to_dict(),
                }
            )
            # Recursively handle subcollections
            for subcoll in doc.reference.collections():
                _download_collection(subcoll)

    # Start the download process with top-level collections
    for collection in db.collections():
        _download_collection(collection)

    # Save all (sub)collections to corresponding files
    for collection_id, docs in output.items():
        with open(os.path.join(output_dir, f"{collection_id}.json"), "w") as f:
            for doc in docs:
                f.write(json.dumps(doc) + "\n")


def main():
    parser = argparse.ArgumentParser(
        description="Download data from Firebase Firestore"
    )
    parser.add_argument(
        "--credentials", required=True, help="Path to Firebase credentials JSON file"
    )
    parser.add_argument(
        "--output",
        default="firestore_data",
        help="Output directory for downloaded data",
    )
    args = parser.parse_args()

    # Initialize Firebase app
    cred = credentials.Certificate(args.credentials)
    firebase_admin.initialize_app(cred)
    db = firestore.client()

    # Download data from Firebase
    download_from_firebase(db, args.output)
    print(f"Firestore data downloaded to {args.output}")


if __name__ == "__main__":
    main()
```

Save this script as `firebase-download.py`. To run the script, you need to provide the path to your Firebase credentials JSON file and the output directory for the downloaded data. Run the following command in your terminal:

```bash
python firebase-download.py --credentials path/to/your/firebase-credentials.json --output firestore_data
```

For each unique collection id, this script creates a line-delimited JSON file, and all documents in that collection (spanning different top-level documents) are saved to it. For example, if you have a collection with the following structure:

```
/users
  /user1
    /orders
      /order1
      /order2
        /items
          /item1
          /item2
  /user2
    /orders
      /order3
```

The script will create the following files:

- `users.json`: Contains all user documents, i.e., `user1`, `user2`.
- `orders.json`: Contains all order documents across all users - `order1`, `order2`, `order3`.
- `items.json`: Contains all item documents across all orders - `item1`, `item2`.

Each file contains a JSON object for each document. To illustrate, `order1` gets saved to `orders.json` in the following format:

```json
{
  "id": "users/user1/orders/order1",
  "parent_id": "users/user1",
  "data": {
    "order_date": "2023-06-15",
    "total_amount": 99.99
  }
}
```

This structure allows for easy reconstruction of the hierarchical relationships between users, orders, and items, while also providing a flat file structure that's easy to process and import into other systems.
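Before moving on to the import, it can help to sanity-check the export output. A quick look, assuming the example structure above (file names will match your own collection ids):

```bash
# Count exported documents per collection (one JSON object per line)
wc -l firestore_data/*.json

# Inspect the first exported document from the orders collection
head -n 1 firestore_data/orders.json
```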
## Prepare your Neon destination database

This section describes how to prepare your destination Neon Postgres database to receive the imported data.

### Create the Neon database

1. In the Neon Console, go to your project dashboard.
2. In the sidebar, click on **Databases**.
3. Click the **New Database** button.
4. Enter a name for your database and click **Create**.

For more information, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).

### Retrieve Neon connection details

1. In the Neon Console, go to your project dashboard.
2. Click **Connect** to open the **Connect to your database** modal, and select your database.
3. Copy the connection string. It will look similar to this:

```
postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

## Import data into Neon

We use another Python script to import the Firestore data we previously downloaded into Neon.

```python
import argparse
import json
import os

import psycopg
from psycopg.types.json import Jsonb


def upload_to_postgres(input_dir, conn_string):
    # Connect to the Postgres database
    conn = psycopg.connect(conn_string)

    # Iterate through all JSON files in the input directory
    for filename in os.listdir(input_dir):
        if filename.endswith(".json"):
            cur = conn.cursor()
            table_name = filename[:-5]  # Remove .json extension
            print("Writing to table: ", table_name)

            # Create table for the collection if it doesn't exist
            create_table_query = f"""
                CREATE TABLE IF NOT EXISTS {table_name} (
                    id TEXT PRIMARY KEY,
                    parent_id TEXT,
                    data JSONB
                )
            """
            cur.execute(create_table_query)

            # Read and insert data from the JSON file in batches
            with open(os.path.join(input_dir, filename), "r") as f:
                insert_query = f"""
                    INSERT INTO {table_name} (id, parent_id, data)
                    VALUES (%s, %s, %s)
                    ON CONFLICT (id) DO UPDATE
                    SET parent_id = EXCLUDED.parent_id, data = EXCLUDED.data
                """
                batch = []
                for line in f:
                    doc = json.loads(line)
                    batch.append((doc["id"], doc["parent_id"], Jsonb(doc["data"])))
                    if len(batch) == 20:
                        cur.executemany(insert_query, batch)
                        batch = []
                # Flush the final partial batch so no documents are dropped
                if batch:
                    cur.executemany(insert_query, batch)

            # Commit changes and close the cursor for this file
            conn.commit()
            cur.close()

    # Close the connection
    conn.close()


def main():
    parser = argparse.ArgumentParser(description="Upload data to Postgres")
    parser.add_argument(
        "--input",
        default="firestore_data",
        help="Input directory containing JSON files",
    )
    parser.add_argument("--postgres", required=True, help="Postgres connection string")
    args = parser.parse_args()

    # Upload data to Postgres
    upload_to_postgres(args.input, args.postgres)
    print(f"Data from {args.input} uploaded to Postgres")


if __name__ == "__main__":
    main()
```

Save this script as `neon-import.py`. To run the script, you need to provide the path to the input directory containing the JSON files and the Neon connection string. Run the following command in your terminal:

```bash
python neon-import.py --input firestore_data --postgres "<neon-connection-string>"
```

This script iterates over each JSON file in the input directory, creates a table in the Neon database for each collection, and inserts the data in batches of 20 rows, flushing any final partial batch. It also handles conflicts by updating the existing data with the new data.

## Verify the migration

After running both the Firestore export and the Neon import scripts, you should verify that your data has been successfully migrated:

1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or `psql`.
2. List all tables in your database:

```sql
\dt
```

3. Run some sample queries to check that the data has been successfully imported.
For example, the following query fetches all orders made by the first two customers:

```sql
SELECT data FROM orders
WHERE parent_id IN (
    SELECT id FROM customers LIMIT 2
)
```

Compare the results with those from your Firestore database to ensure data integrity. Note that using the `parent_id` field, we can navigate through the hierarchical structure of the original data.

## Other migration options

While this guide focuses on using a custom Python script, there are other migration options available:

- **Firestore managed export/import**

  If you have a large volume of data to migrate, you can use the [Google Cloud Firestore managed export and import service](https://firebase.google.com/docs/firestore/manage-data/export-import). It allows you to export your Firestore data to a Google Cloud Storage bucket, from where you can download and ingest it into Neon.

- **Open source utilities**

  There are also a number of open source utilities available that can help export data from Firestore to local files.

  - [firestore-import-export](https://github.com/dalenguyen/firestore-import-export)
  - [firestore-backup-restore](https://github.com/dalenguyen/firestore-backup-restore)

  However, these utilities are not as robust as the managed export/import service. If your dataset is small, we recommend using the sample code provided above or adapting it to your specific needs.

## Reference

For more information on the tools and libraries used in this guide, refer to the following documentation:

- [Migrating data to Neon](https://neon.com/docs/import/migrate-intro)
- [Firebase Admin SDK](https://firebase.google.com/docs/admin/setup)
- [Cloud Firestore API](https://cloud.google.com/python/docs/reference/firestore/latest/index.html)
- [psycopg](https://www.psycopg.org/docs/)

---

# Source: https://neon.com/llms/import-migrate-from-heroku.txt

# Migrate from Heroku to Neon Postgres

> The document outlines the process for migrating a PostgreSQL database from Heroku to Neon, detailing steps for exporting data from Heroku, configuring Neon, and importing the data into Neon using the Migration Assistant tool.

## Source

- [Migrate from Heroku to Neon Postgres HTML](https://neon.com/docs/import/migrate-from-heroku): The original HTML version of this documentation

This guide describes how to import your data from Heroku Postgres to Neon.

**Note** New feature: If you are looking to migrate your database to Neon, you may want to try our new **Migration Assistant**, which can help. Read the [guide](https://neon.com/docs/import/migration-assistant) to learn more.

The instructions assume that you have installed the Heroku CLI, which is used to transfer data from Heroku. For installation instructions, see [The Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli).

## Create a Neon project and copy the connection string

1. Navigate to the [Projects](https://console.neon.tech/app/projects) page in the Neon Console.
2. Click **New Project**.
3. Specify your project settings and click **Create Project**.
4. After creating a project, you are directed to the Neon **Dashboard**, where you can click **Connect** to find your database connection details. Copy the connection string. It is required to import your data from Heroku. The example connection string used in the instructions that follow is:

```text
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

## Retrieve your Heroku app name and database name
1. Log in to [Heroku](https://dashboard.heroku.com/) and select the project you want to import data from.
2. Select **Overview** and copy the name of the Heroku Postgres database, which appears under **Installed add-ons**.
3. Click **Settings** and copy your Heroku **App Name**.

**Note**: You can also retrieve the Heroku Postgres database name using the following Heroku CLI command:

```shell
heroku pg:links --app <app-name>
```

where `<app-name>` is the Heroku App Name. For example:

```shell
$ heroku pg:links --app thawing-wave-57227
=== postgresql-trapezoidal-48645
```

## Import your data

From your terminal, run the following Heroku CLI command:

```shell
heroku pg:pull --app [app] [heroku-pg-database] [neon-connection-string]
```

where:

- `[app]` is the name of the Heroku app
- `[heroku-pg-database]` is the name of the Heroku PostgreSQL database
- `[neon-connection-string]` is the Neon connection string

For example (the connection string is quoted so that the shell does not interpret the `&` characters it contains):

```shell
$ heroku pg:pull --app thawing-wave-57227 postgresql-trapezoidal-48645 "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require"

heroku-cli: Pulling postgresql-trapezoidal-48645 ---> postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require

pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined access methods
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading transforms
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension tables
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "public.customer"
pg_dump: finding the columns and types of table "public.order"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading indexes for table "public.customer"
pg_dump: reading indexes for table "public.order"
pg_dump: flagging indexes in partitioned tables
pg_dump: reading extended statistics
pg_dump: reading constraints
pg_dump: reading foreign key constraints for table "public.customer"
pg_dump: reading foreign key constraints for table "public.order"
pg_dump: reading triggers
pg_dump: reading triggers for table "public.customer"
pg_dump: reading triggers for table "public.order"
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading row-level security policies
pg_dump: reading publications
pg_dump: reading publication membership
pg_dump: reading subscriptions
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path =
pg_dump: saving database definition
pg_dump: dumping contents of table "public.customer"
pg_restore: connecting to database for restore
pg_dump: dumping contents of table "public.order"
pg_restore: creating SCHEMA "heroku_ext"
pg_restore: creating TABLE "public.customer"
pg_restore: creating TABLE "public.order"
pg_restore: processing data for table "public.customer"
pg_restore: processing data for table "public.order"
pg_restore: creating CONSTRAINT "public.customer customer_pkey"
pg_restore: creating CONSTRAINT "public.order order_pkey"
pg_restore: creating FK CONSTRAINT "public.order order_customer_id_fkey"
heroku-cli: Pulling complete.
```

## Verify that your data was imported

1. Log in to the [Neon Console](https://console.neon.tech/app/projects).
2. Select the Neon project that you transferred data to.
3. Select the **Tables** tab.
4. In the sidebar, verify that your database tables appear under the **Tables** heading.

---

# Source: https://neon.com/llms/import-migrate-from-neon.txt

# Migrate data from another Neon project

> The document outlines the process for migrating data between Neon projects, detailing steps for exporting data from the source project and importing it into the target project using Neon's tools and commands.

## Source

- [Migrate data from another Neon project HTML](https://neon.com/docs/import/migrate-from-neon): The original HTML version of this documentation

This guide describes how to migrate a database from one Neon project to another by piping data from `pg_dump` to `pg_restore`.

**Important**: Avoid using `pg_dump` over a [pooled connection string](https://neon.com/docs/reference/glossary#pooled-connection-string) (see PgBouncer issues [452](https://github.com/pgbouncer/pgbouncer/issues/452) & [976](https://github.com/pgbouncer/pgbouncer/issues/976) for details). Use an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string) instead.

Use these instructions to:

- Import a database from a Neon project created in one region to a project created in another region.
- Import a database from a Neon project created with one Postgres version to a Neon project created with another Postgres version.

**Tip**: You can also use these alternative methods to migrate data between Neon projects:

- **Import Data Assistant**: A fast and simple option for databases under 10 GB. See [Import Data Assistant](https://neon.com/docs/import/import-data-assistant).
- **Logical replication**: Move your data from one Neon project to another. Consider this option for large databases requiring near-zero downtime. See [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon).

## Important considerations

- **Upgrading the Postgres version**: When upgrading to a new version of Postgres, always test thoroughly before migrating your production systems or applications. We also recommend familiarizing yourself with the changes in the new version of Postgres, especially those affecting compatibility. For information about those changes, please refer to the official Postgres [Release 15](https://www.postgresql.org/docs/release/15.0/) or [Release 16](https://www.postgresql.org/docs/16/release-16.html) documentation.
- **Piping considerations**: Piping is not recommended for large datasets, as it is susceptible to failures during lengthy migration operations (see [Pipe pg_dump to pg_restore](https://neon.com/docs/import/migrate-from-postgres#pipe-pgdump-to-pgrestore) for more information).
If your dataset is large, we recommend performing the dump and restore as separate operations. For instructions, see [Migrate data from Postgres with pg_dump and pg_restore](https://neon.com/docs/import/migrate-from-postgres).

## Import data from another project

To import your data from another Neon project:

1. Create a new project with the desired region or Postgres version. See [Create a project](https://neon.com/docs/manage/projects#create-a-project) for instructions.
2. Create a database with the desired name in your new Neon project. See [Create a database](https://neon.com/docs/manage/databases#create-a-database) for instructions.
3. Retrieve the connection strings for the new and existing Neon databases. You can find the connection details for your database by clicking the **Connect** button on your **Project Dashboard**. Connection strings have this format:

   ```bash
   postgresql://[user]:[password]@[neon_hostname]/[dbname]
   ```

4. Prepare your command to pipe data from one Neon project to the other. For the `pg_dump` command, specify connection details for the source database. For the `pg_restore` command, specify connection details for the destination database. The command should have the following format:

   ```bash
   pg_dump -Fc -v -d postgresql://[user]:[password]@[source_neon_hostname]/[dbname] | pg_restore -v -d postgresql://[user]:[password]@[destination_neon_hostname]/[dbname]
   ```

   With actual source and destination connection details, your command will appear similar to this (the connection strings are quoted so that the shell does not interpret the `&` characters they contain):

   ```bash
   pg_dump -Fc -v -d "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/my_source_db?sslmode=require&channel_binding=require" | pg_restore -v -d "postgresql://alex:AbC123dEf@square-shadow-654321.us-east-2.aws.neon.tech/my_destination_db?sslmode=require&channel_binding=require"
   ```

   **Note**: While your source and destination databases might have the same name, the hostnames will differ, as illustrated in the example above.

   The command includes these arguments:

   - `-Fc`: Sends the output to a custom-format archive suitable for input into `pg_restore`.
   - `-v`: Runs commands in verbose mode, allowing you to monitor what happens during the operation.
   - `-d`: Specifies the database name or connection string.

5. Run the command from your terminal or command window.
6. If you no longer require the old project, you can remove it. See [Delete a project](https://neon.com/docs/manage/projects#delete-a-project) for instructions.

---

# Source: https://neon.com/llms/import-migrate-from-postgres.txt

# Migrate data from Postgres with pg_dump and pg_restore

> The document details the process of migrating data from a PostgreSQL database to Neon using the `pg_dump` and `pg_restore` tools, outlining step-by-step instructions for executing the migration efficiently.

## Source

- [Migrate data from Postgres with pg_dump and pg_restore HTML](https://neon.com/docs/import/migrate-from-postgres): The original HTML version of this documentation

This topic describes migrating data from one Postgres database to another using `pg_dump` and `pg_restore`.

**Important**: Avoid using `pg_dump` over a [pooled connection string](https://neon.com/docs/reference/glossary#pooled-connection-string) (see PgBouncer issues [452](https://github.com/pgbouncer/pgbouncer/issues/452) & [976](https://github.com/pgbouncer/pgbouncer/issues/976) for details). Use an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string) instead.
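A pooled Neon connection string can usually be recognized by the `-pooler` suffix in its hostname. A hypothetical pair for comparison, reusing the placeholder credentials that appear elsewhere in these guides:

```bash
# Pooled connection string (note the -pooler suffix in the hostname): avoid with pg_dump
postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require

# Unpooled (direct) connection string: safe to use with pg_dump
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```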
Repeat the `pg_dump` and `pg_restore` process for each database you want to migrate.

If you are performing this procedure to migrate data from one Neon project to another to upgrade to a new Postgres version, read [Upgrading your Postgres version](https://neon.com/docs/postgresql/postgres-upgrade) first.

## Before you begin

- We recommend that you use the `pg_dump` and `pg_restore` programs from the latest version of Postgres, to take advantage of enhancements that might have been made in these programs. To check the version of `pg_dump` or `pg_restore`, use the `-V` option. For example: `pg_dump -V`.
- Neon supports PostgreSQL 14, 15, 16, and 17. We recommend that your client programs be the same version as the source Postgres instance.
- Retrieve the connection parameters or connection string for your source Postgres database. This could be a Neon Postgres database or another Postgres database. The instructions below use a [connection string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING), but you can use the connection format you prefer. If you are logged in to a local Postgres instance, you may only need to provide the database name. Refer to the [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) documentation for information about connection parameters.
- Optionally, create a role in Neon to perform the restore operation. The role that performs the restore operation becomes the owner of restored database objects. For example, if you want role `sally` to own database objects, create role `sally` in Neon and perform the restore operation as `sally`.
- If you have assigned database object ownership to different roles in your source database, read [Database object ownership considerations](https://neon.com/docs/import/migrate-from-postgres#database-object-ownership-considerations). You may want to add the `-O, --no-owner` option to your `pg_restore` command to avoid errors.
- Create the target database in Neon. For example, if you are migrating a database named `pagila`, create a database named `pagila` in Neon. For instructions, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).
- Retrieve the connection string for the target Neon database. You can find the connection string by clicking the **Connect** button on your **Project Dashboard**. It will look something like this:

  ```bash
  postgresql://[user]:[password]@[neon_hostname]/[dbname]
  ```

- Consider running a test migration first to ensure your actual migration goes smoothly. See [Run a test migration](https://neon.com/docs/import/migrate-from-postgres#run-a-test-migration).
- If your database is small, you can pipe `pg_dump` output directly to `pg_restore` to save time. See [Pipe pg_dump to pg_restore](https://neon.com/docs/import/migrate-from-postgres#pipe-pgdump-to-pgrestore).

## Export data with pg_dump

Export your data from the source database with `pg_dump`:

```bash
pg_dump -Fc -v -d <source_database_connection_string> -f <dump_file_name>
```

The `pg_dump` command above includes these arguments:

- `-Fc`: Sends the output to a custom-format archive suitable for input into `pg_restore`.
- `-v`: Runs `pg_dump` in verbose mode, allowing you to monitor what happens during the dump operation.
- `-d`: Specifies the source database name or [connection string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING).
- `-f`: The dump file name. It can be any name you choose (`mydumpfile.bak`, for example).
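For instance, a complete dump command with the connection details filled in might look like this (all bracketed values are placeholders to replace with your own):

```bash
pg_dump -Fc -v -d "postgresql://[user]:[password]@[source_hostname]/[dbname]" -f mydumpfile.bak
```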
For more command options, see [Advanced pg_dump and pg_restore options](https://neon.com/docs/import/migrate-from-postgres#advanced-pgdump-and-pgrestore-options).

## Restore data to Neon with pg_restore

Restore your data to the target database in Neon with `pg_restore`.

**Note**: If you assigned database object ownership to different roles in your source database, consider adding the `-O, --no-owner` option to your `pg_restore` command to avoid errors. See [Database object ownership considerations](https://neon.com/docs/import/migrate-from-postgres#database-object-ownership-considerations).

```bash
pg_restore -v -d <neon_database_connection_string> <dump_file_name>
```

The example above includes these arguments:

- `-v`: Runs `pg_restore` in verbose mode, allowing you to monitor what happens during the restore operation.
- `-d`: Specifies the Neon database to connect to. The value is a Neon database connection string. See [Before you begin](https://neon.com/docs/import/migrate-from-postgres#before-you-begin).
- `<dump_file_name>`: The name of the dump file you created with `pg_dump`.

For more command options, see [Advanced pg_dump and pg_restore options](https://neon.com/docs/import/migrate-from-postgres#advanced-pgdump-and-pgrestore-options).

## pg_dump and pg_restore example

The following example shows how data from a `pagila` source database is dumped and restored to a `pagila` database in Neon using the commands described in the previous sections. (A database named `pagila` was created in Neon prior to running the restore operation.)

```bash
~$ cd mydump
~/mydump$ pg_dump -Fc -v -d postgresql://[user]:[password]@[neon_hostname]/pagila -f mydumpfile.bak

~/mydump$ ls
mydumpfile.bak

~/mydump$ pg_restore -v -d postgresql://[user]:[password]@[neon_hostname]/pagila mydumpfile.bak
```

## Pipe pg_dump to pg_restore

For small databases where the source and target Postgres instances and databases are presumed to be compatible, the standard output of `pg_dump` can be piped directly into a `pg_restore` command to minimize migration downtime:

```bash
pg_dump [args] | pg_restore [args]
```

For example:

```bash
pg_dump -Fc -v -d <source_database_connection_string> | pg_restore -v -d <neon_database_connection_string>
```

Piping is not recommended for large databases because it can fail during lengthy migration operations. Incompatibilities between the source and target Postgres instances or databases may also cause a piping operation to fail. If you're importing from another Postgres instance, review Neon's [compatibility](https://neon.com/docs/reference/compatibility) page to ensure that Neon Postgres is compatible with your source Postgres instance. If you're unsure or encounter issues, consider using separate dump and restore operations. This approach lets you adjust dump and restore options or modify the dump file directly to resolve migration challenges.

When piping `pg_dump` output directly to `pg_restore`, the custom output format (`-Fc`) is most efficient. The directory format (`-Fd`) cannot be piped to `pg_restore`.

## Post-migration steps

After migrating your data, update your applications to connect to your new database in Neon. You will need the database connection string that you used in your `pg_restore` command. If you run into any problems, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app). After connecting your applications, test them thoroughly to ensure they function correctly with your new database.
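As a quick sanity check before cutting over application traffic, you can run a count against a migrated table with `psql`. A minimal sketch, assuming the `pagila` sample database from the example above, which includes a `customer` table:

```bash
# Run a count over the Neon connection string used in the pg_restore command
# to confirm that a migrated table is present and populated
psql "postgresql://[user]:[password]@[neon_hostname]/pagila" -c "SELECT count(*) FROM customer;"
```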
## Database object ownership considerations

Roles created in the Neon Console, including the default role created with your Neon project, are automatically granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role. This role can create roles and databases, select from all tables and views, and insert, update, or delete data in all tables. However, the `neon_superuser` is not a PostgreSQL `superuser`. It cannot run `ALTER OWNER` statements to grant ownership of database objects. As a result, if you granted ownership of database objects in your source database to different roles, your dump file will contain `ALTER OWNER` statements, and those statements will cause non-fatal errors when you restore data to your Neon database.

**Note**: Regardless of `ALTER OWNER` statement errors, a restore operation still succeeds because assigning ownership is not necessary for the data itself to be restored. The restore operation will still create tables, import data, and create other objects.

To avoid the non-fatal errors, you can ignore database object ownership statements when restoring data by specifying the `-O, --no-owner` option in your `pg_restore` command:

```bash
pg_restore -v -O -d postgresql://[user]:[password]@[neon_hostname]/pagila mydumpfile.bak
```

The Neon role performing the restore operation becomes the owner of all database objects.

## Advanced pg_dump and pg_restore options

The `pg_dump` and `pg_restore` commands provide numerous advanced options, some of which are described below. Full descriptions and more options are found in the PostgreSQL [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) documentation.

### pg_dump options

- `-Z`: Defines the compression level to use when using a compressible format. 0 means no compression, while 9 means maximum compression. In general, we recommend a setting of 1. A higher compression level slows the dump and restore process but also uses less disk space.
- `--lock-wait-timeout=20s`: Error out early in the dump process instead of waiting for an unknown amount of time if there is lock contention. Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified timeout.
- `-j <njobs>`: Consider this option for large databases to dump tables in parallel. Set `<njobs>` to the number of available CPUs. Refer to the [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) documentation for more information.
- `--no-blobs`: Excludes large objects from your dump. See [Data migration notes](https://neon.com/docs/import/migrate-from-postgres#data-migration-notes).

### pg_restore options

- `-c --if-exists`: Drop database objects before creating them if they already exist. If you had a failed migration, you can use these options to drop objects created by the previous migration to avoid errors when retrying the migration.
- `-j <njobs>`: Consider this option for large databases to run the restore process in parallel. Set `<njobs>` to the number of available vCPUs. Refer to the [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) documentation for more information.
- `--single-transaction`: Forces the operation to run as an atomic transaction, which ensures that no data is left behind when a restore operation fails. Retrying an import operation after a failed attempt that leaves data behind may result in "duplicate key value" errors.
- `--no-tablespaces`: Do not output commands to select tablespaces. See [Data migration notes](https://neon.com/docs/import/migrate-from-postgres#data-migration-notes).
- `-t <table_name>`: Allows you to restore individual tables from a custom-format database dump. Individual tables can also be imported from a CSV file. See [Import from CSV](https://neon.com/docs/import/import-from-csv).

## Run a test migration

It is recommended that you run a test migration before migrating your production database. Make sure you can successfully migrate data to the new database and connect to it. Before starting the actual migration, create a database dump and address any issues that show up. In Neon, you can quickly create a test database, obtain the connection string, and delete the database when you are finished with it. See [Create a database](https://neon.com/docs/manage/databases#create-a-database).

## Other migration options

This section discusses migration options other than `pg_dump` and `pg_restore`.

### Postgres GUI clients

Some Postgres clients offer backup and restore capabilities. These include [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/backup_and_restore.html) and [phpPgAdmin](https://github.com/phppgadmin/phppgadmin/releases), among others. We have not tested migrations using these clients, but if you are uncomfortable using command-line utilities, they may provide an alternative.

### Table-level data migration

Table-level data migration (using CSV files, for example) does not preserve database schemas, constraints, indexes, types, or other database features. You will have to create these separately. Table-level migration is simple but could result in significant downtime depending on the size of your data and the number of tables. For instructions, see [Import data from CSV](https://neon.com/docs/import/import-from-csv).

## Data migration notes

- You can load data using the `psql` utility, but it only supports plain-text SQL dumps, which you should only consider for small datasets or specific use cases. To create a plain-text SQL dump with the `pg_dump` utility, leave out the `-F` format option. Plain-text SQL is the default `pg_dump` output format.
- `pg_dumpall` is not supported.
- `pg_dump` with the `-C, --create` option is not supported.
- Some PostgreSQL features, such as tablespaces and large objects, which require access to the local file system, are not supported by Neon. To exclude selecting tablespaces, specify the `--no-tablespaces` option with `pg_restore`. To exclude large objects, specify the `--no-blobs` option with `pg_dump`.

## Reference

For information about the Postgres client utilities referred to in this topic, refer to the following topics in the Postgres documentation:

- [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html)
- [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html)
- [psql](https://www.postgresql.org/docs/current/app-psql.html)

---

# Source: https://neon.com/llms/import-migrate-from-render.txt

# Migrate from Render to Neon Postgres

> The document outlines the steps required to migrate a PostgreSQL database from Render to Neon, detailing the necessary configurations and commands to ensure a smooth transition within Neon's infrastructure.

## Source

- [Migrate from Render to Neon Postgres HTML](https://neon.com/docs/import/migrate-from-render): The original HTML version of this documentation

This guide describes how to migrate a database from Render to Neon Postgres.
We use the `pg_dump` and `pg_restore` utilities, which are part of the Postgres client toolset. `pg_dump` works by dumping both the schema and data in a custom format that is compressed and suitable for input into `pg_restore` to rebuild the database.

## Prerequisites

- A Render project containing the Postgres database you want to migrate.
- A Neon project to move the data to. For detailed information on creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). Make sure to create a project with the same Postgres version as your Render deployment.
- `pg_dump` and `pg_restore` utilities installed on your local machine. These typically come with a Postgres installation. We recommend that you use the `pg_dump` and `pg_restore` programs from the latest version of Postgres, to take advantage of enhancements that might have been made in these programs. To check the version of `pg_dump` or `pg_restore`, use the `-V` option. For example: `pg_dump -V`.
- Review our guide on [Migrating data from Postgres](https://neon.com/docs/import/migrate-from-postgres) for more comprehensive information on using `pg_dump` and `pg_restore`.

## Prepare your Render database

This section describes how to prepare your Render database for exporting data.

To illustrate the migration workflow, we use the [LEGO Database](https://neon.com/docs/import/import-sample-data#lego-database). This database contains information about LEGO sets, parts, and themes. We load the LEGO database into Render using the [psql](https://neon.com/docs/connect/query-with-psql-editor) command-line tool.

### Retrieve Render connection details

1. Log in to your Render account and navigate to your project dashboard.
2. From the overview page, select the service (of the type `PostgreSQL`) corresponding to your database.
3. From the left sidebar, click on **Info** and under the **Connections** section, you'll find the connection parameters in different formats.
4. Copy the value for the `External Database URL` field. You'll need this connection string for `pg_dump` to connect to the Render database.

## Export data with pg_dump

Now that you have your Render connection details, you can export your data using `pg_dump`:

```bash
pg_dump -Fc -v -d <render-connection-string> --schema=public -f render_dump.bak
```

Replace `<render-connection-string>` with your Render External Database URL.

This command includes these arguments:

- `-Fc`: Outputs the dump in custom format, which is compressed and suitable for input into `pg_restore`.
- `-v`: Runs `pg_dump` in verbose mode, allowing you to monitor the dump operation.
- `-d`: Specifies the connection string for your Render database.
- `-f`: Specifies the output file name.
- `--schema=public`: Specifies the schema to dump. In this case, we only want to back up tables in the `public` schema.

If the command was successful, you'll see output similar to the following:

```bash
...
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path =
pg_dump: saving database definition
pg_dump: dumping contents of table "public.lego_colors"
pg_dump: dumping contents of table "public.lego_inventories"
pg_dump: dumping contents of table "public.lego_inventory_parts"
pg_dump: dumping contents of table "public.lego_inventory_sets"
pg_dump: dumping contents of table "public.lego_part_categories"
pg_dump: dumping contents of table "public.lego_parts"
pg_dump: dumping contents of table "public.lego_sets"
pg_dump: dumping contents of table "public.lego_themes"
```

**Important**: Avoid using `pg_dump` over a [pooled connection string](https://neon.com/docs/reference/glossary#pooled-connection-string) (see PgBouncer issues [452](https://github.com/pgbouncer/pgbouncer/issues/452) & [976](https://github.com/pgbouncer/pgbouncer/issues/976) for details). Use an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string) instead.

## Prepare your Neon destination database

This section describes how to prepare your destination Neon Postgres database to receive the imported data.

### Create the Neon database

To maintain consistency with your Render setup, you might want to create a new database in Neon with the same database name used in Render.

1. Connect to your Neon project using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a Postgres client like `psql`.
2. Create a new database. For example, if your Render database was named `lego`, run:

   ```sql
   CREATE DATABASE lego;
   ```

For more information, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).

### Retrieve Neon connection details

1. In the Neon Console, go to your **Project Dashboard**.
2. Select **Connect** to open the **Connect to your database** modal.
3. Copy the connection string. It will look similar to this:

   ```
   postgresql://[user]:[password]@[neon_hostname]/[dbname]
   ```

## Restore data to Neon with pg_restore

Now you can restore your data to the Neon database using `pg_restore`:

```bash
pg_restore -d <neon-connection-string> -v --no-owner --no-acl render_dump.bak
```

Replace `<neon-connection-string>` with your Neon connection string.

This command includes these arguments:

- `-d`: Specifies the connection string for your Neon database.
- `-v`: Runs `pg_restore` in verbose mode.
- `--no-owner`: Skips setting the ownership of objects as in the original database.
- `--no-acl`: Skips restoring access privileges for objects as in the original database.

We recommend using the `--no-owner` and `--no-acl` options to skip restoring ownership and access control settings from Render. After migrating the data, review and configure the appropriate roles and privileges for all objects, as needed. For more information, refer to the section on [Database object ownership considerations](https://neon.com/docs/import/migrate-from-postgres#database-object-ownership-considerations).

If the command was successful, you'll see output similar to the following:

```bash
pg_restore: connecting to database for restore
pg_restore: creating SCHEMA "public"
pg_restore: creating TABLE "public.lego_colors"
pg_restore: creating SEQUENCE "public.lego_colors_id_seq"
pg_restore: creating SEQUENCE OWNED BY "public.lego_colors_id_seq"
pg_restore: creating TABLE "public.lego_inventories"
pg_restore: creating SEQUENCE "public.lego_inventories_id_seq"
...
``` ## Verify the migration After the restore process completes, you should verify that your data has been successfully migrated: 1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or [psql](https://neon.com/docs/connect/query-with-psql-editor). 2. Run some application queries to check your data. For example, if you're using the LEGO database, you can run the following: ```sql SELECT * FROM lego_inventory_parts ORDER BY quantity DESC LIMIT 5; SELECT parent_id, COUNT(name) FROM lego_themes GROUP BY parent_id; ``` 3. Compare the results with those from running the same queries on your Render database to ensure data integrity. ## Clean up After successfully migrating and verifying your data on Neon, you can update your application's connection strings to point to your new Neon database. We recommend that you keep your Render database dump file (`render_dump.bak`) as a backup until you've verified that the migration was successful. ## Other migration options While this guide focuses on using `pg_dump` and `pg_restore`, there are other migration options available: - **Logical replication** For larger databases or scenarios where you need to minimize downtime, you might consider using logical replication. See our guide on [Logical replication](https://neon.com/docs/guides/logical-replication-guide) for more information. - **CSV export/import** For smaller datasets or specific tables, you might consider exporting to CSV from Render and then importing to Neon. See [Import data from CSV](https://neon.com/docs/import/import-from-csv) for more details on this method. ## Reference For more information on the Postgres utilities used in this guide, refer to the following documentation: - [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) - [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) - [Migrating data to Neon](https://neon.com/docs/import/migrate-from-postgres) --- # Source: https://neon.com/llms/import-migrate-from-supabase.txt # Migrate from Supabase to Neon Postgres > The document outlines the steps for migrating a database from Supabase to Neon Postgres, detailing the process of exporting data from Supabase and importing it into Neon. ## Source - [Migrate from Supabase to Neon Postgres HTML](https://neon.com/docs/import/migrate-from-supabase): The original HTML version of this documentation This guide describes how to migrate a database from Supabase to Neon Postgres. We use the `pg_dump` and `pg_restore` utilities, which are part of the Postgres client toolset. `pg_dump` works by dumping both the schema and data in a custom format that is compressed and suitable for input into `pg_restore` to rebuild the database. **Note**: You can also replicate data from Supabase for a near-zero downtime migration. See [Replicate data from Supabase](https://neon.com/docs/guides/logical-replication-supabase-to-neon). ## Prerequisites - A Supabase project containing the data you want to migrate. - A Neon project to move the data to. For detailed information on creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). Make sure to create a project with the same Postgres version as your Supabase deployment. - `pg_dump` and `pg_restore` utilities installed on your local machine. These typically come with a Postgres installation. 
We recommend that you use the `pg_dump` and `pg_restore` programs from the latest version of Postgres, to take advantage of enhancements that might have been made in these programs. To check the version of `pg_dump` or `pg_restore`, use the `-V` option. For example: `pg_dump -V`.
- Review our guide on [Migrating data from Postgres](https://neon.com/docs/import/migrate-from-postgres) for more comprehensive information on using `pg_dump` and `pg_restore`.

## Prepare your Supabase database

This section describes how to prepare your Supabase database for exporting data.

To illustrate the migration workflow, we use the [LEGO Database](https://neon.com/docs/import/import-sample-data#lego-database). This database contains information about LEGO sets, parts, and themes.

### Retrieve Supabase connection details

1. Log in to your Supabase account and navigate to your project dashboard.
2. In the left sidebar, click on **Project Settings**.
3. Select **Database**, where you will find the following settings under the **Connection Parameters** section:
   - Host
   - Database name
   - Port
   - User
   - Password [Not visible in the dashboard]

You'll need these details to construct your connection string for `pg_dump`.

## Export data with pg_dump

Now that you have your Supabase connection details, you can export your data using `pg_dump`:

```bash
pg_dump -Fc -v -d postgresql://[user]:[password]@[supabase_host]:[port]/[database] --schema=public -f supabase_dump.bak
```

Replace `[user]`, `[password]`, `[supabase_host]`, `[port]`, and `[database]` with your Supabase connection details.

This command includes these arguments:

- `-Fc`: Outputs the dump in custom format, which is compressed and suitable for input into `pg_restore`.
- `-v`: Runs `pg_dump` in verbose mode, allowing you to monitor the dump operation.
- `-d`: Specifies the connection string for your Supabase database.
- `-f`: Specifies the output file name.
- `--schema=public`: Specifies the schema to dump. In this case, we only want to back up tables in the `public` schema. Supabase projects may also store data corresponding to authentication, storage, and other services under different schemas. If necessary, you can specify additional schemas to dump by adding the `--schema` option multiple times.

If the command was successful, you'll see output similar to the following:

```bash
...
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path =
pg_dump: saving database definition
pg_dump: dumping contents of table "public.lego_colors"
pg_dump: dumping contents of table "public.lego_inventories"
pg_dump: dumping contents of table "public.lego_inventory_parts"
pg_dump: dumping contents of table "public.lego_inventory_sets"
pg_dump: dumping contents of table "public.lego_part_categories"
pg_dump: dumping contents of table "public.lego_parts"
pg_dump: dumping contents of table "public.lego_sets"
pg_dump: dumping contents of table "public.lego_themes"
```

**Important**: Avoid using `pg_dump` over a [pooled connection string](https://neon.com/docs/reference/glossary#pooled-connection-string) (see PgBouncer issues [452](https://github.com/pgbouncer/pgbouncer/issues/452) & [976](https://github.com/pgbouncer/pgbouncer/issues/976) for details). Use an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string) instead.

## Prepare your Neon destination database

This section describes how to prepare your destination Neon Postgres database to receive the imported data.
### Create the Neon database

To maintain consistency with your Supabase setup, you can create a new database in Neon with the same database name you used in Supabase.

1. Connect to your Neon project using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or a Postgres client like [psql](https://neon.com/docs/connect/query-with-psql-editor).
2. Create a new database. For example, if your Supabase database was named `lego`, run:

   ```sql
   CREATE DATABASE lego;
   ```

For more information, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).

### Retrieve Neon connection details

1. In the Neon Console, go to your project dashboard.
2. Select **Connect** to open the **Connect to your database** modal.
3. Copy the connection string. It will look similar to this:

   ```
   postgresql://[user]:[password]@[neon_hostname]/[dbname]
   ```

## Restore data to Neon with pg_restore

Now you can restore your data to the Neon database using `pg_restore`:

```bash
pg_restore -d postgresql://[user]:[password]@[neon_hostname]/[dbname] -v --no-owner --no-acl supabase_dump.bak
```

Replace `[user]`, `[password]`, `[neon_hostname]`, and `[dbname]` with your Neon connection details.

This command includes these arguments:

- `-d`: Specifies the connection string for your Neon database.
- `-v`: Runs `pg_restore` in verbose mode.
- `--no-owner`: Skips setting the ownership of objects as in the original database.
- `--no-acl`: Skips restoring access privileges for objects as in the original database.

A Supabase database has ownership and access control tied to the authentication system. We recommend that you use the `--no-owner` and `--no-acl` options to skip restoring these settings. After migrating the data, review and configure the appropriate roles and privileges for all objects, as needed. For more information, refer to the section on [Database object ownership considerations](https://neon.com/docs/import/migrate-from-postgres#database-object-ownership-considerations).

If the command was successful, you'll see output similar to the following:

```bash
pg_restore: connecting to database for restore
pg_restore: creating SCHEMA "public"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 13; 2615 2200 SCHEMA public pg_database_owner
pg_restore: error: could not execute query: ERROR:  schema "public" already exists
Command was: CREATE SCHEMA public;
pg_restore: creating COMMENT "SCHEMA public"
pg_restore: creating TABLE "public.lego_colors"
pg_restore: creating SEQUENCE "public.lego_colors_id_seq"
pg_restore: creating SEQUENCE OWNED BY "public.lego_colors_id_seq"
pg_restore: creating TABLE "public.lego_inventories"
pg_restore: creating SEQUENCE "public.lego_inventories_id_seq"
...
```

## Verify the migration

After the restore process completes, you should verify that your data has been successfully migrated:

1. Connect to your Neon database using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or `psql`.
2. Run some application queries to check your data. For example, if you're using the LEGO database, you can run the following:

   ```sql
   SELECT COUNT(*) FROM lego_sets;
   SELECT * FROM lego_themes LIMIT 5;
   ```

3. Compare the results with those from running the same queries on your Supabase database to ensure data integrity.

## Clean up

After successfully migrating and verifying your data on Neon, you can update your application's connection strings to point to your new Neon database.
We recommend that you keep your Supabase dump file (`supabase_dump.bak`) as a backup until you've verified that the migration was successful. ## Other migration options While this guide focuses on using `pg_dump` and `pg_restore`, there are other migration options available: - **Logical replication** For larger databases or scenarios where you need to minimize downtime, you might consider using logical replication. See our guide on [Logical replication](https://neon.com/docs/guides/logical-replication-guide) for more information. - **CSV export/import** For smaller datasets or specific tables, you might consider exporting to CSV from Supabase and then importing to Neon. See [Import data from CSV](https://neon.com/docs/import/import-from-csv) for more details on this method. ## Reference For more information on the Postgres utilities used in this guide, refer to the following documentation: - [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) - [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) - [Migrating data to Neon](https://neon.com/docs/import/migrate-from-postgres) --- # Source: https://neon.com/llms/import-migrate-intro.txt # Neon data migration guides > The Neon data migration guides offer step-by-step instructions for migrating data to Neon, detailing processes for various data sources and ensuring seamless integration into the Neon database environment. ## Source - [Neon data migration guides HTML](https://neon.com/docs/import/migrate-intro): The original HTML version of this documentation Find instructions for migrating data from Postgres, CSV, other Neon projects, and other database providers. For near-zero downtime data migrations from other Postgres providers, consider using logical replication. Additionally, if you're new to Neon and want to try it out, our sample data guide provides datasets for exploration and testing. 
## Data migration guides

- [Import Data Assistant](https://neon.com/docs/import/import-data-assistant): Move your existing database to Neon using our guided migration tool
- [Migrate with pg_dump and pg_restore](https://neon.com/docs/import/migrate-from-postgres): Migrate data from another Postgres database using pg_dump and pg_restore
- [Migrate from another Neon project](https://neon.com/docs/import/migrate-from-neon): Migrate data from another Neon project for Postgres version, region, or account migration
- [Migrate schema only](https://neon.com/docs/import/migrate-schema-only): Migrate only the schema from a Postgres database with pg_dump and pg_restore
- [Import data from CSV](https://neon.com/docs/import/import-from-csv): Import data from a CSV file using the psql command-line utility
- [Migrate from Firebase Firestore](https://neon.com/docs/import/migrate-from-firebase): Migrate data from Firebase Firestore to Neon Postgres using a custom Python script
- [Migrate from Heroku](https://neon.com/docs/import/migrate-from-heroku): Migrate data from a Heroku Postgres database to Neon Postgres using the Heroku CLI
- [Migrate with AWS DMS](https://neon.com/docs/import/migrate-aws-dms): Migrate data from another database source to Neon using the AWS Data Migration Service
- [Migrate from Azure](https://neon.com/docs/import/migrate-from-azure-postgres): Migrate from an Azure Database for PostgreSQL to Neon Postgres
- [Migrate from Digital Ocean](https://neon.com/docs/import/migrate-from-digital-ocean): Migrate data from Digital Ocean Postgres to Neon Postgres with pg_dump and pg_restore
- [Import sample data](https://neon.com/docs/import/import-sample-data): Import one of several sample datasets for exploration and testing
- [Migrate from MySQL](https://neon.com/docs/import/migrate-mysql): Migrate your MySQL data to Neon Postgres using pgloader
- [Migrate from Render](https://neon.com/docs/import/migrate-from-render): Migrate data from Render to Neon Postgres with pg_dump and pg_restore
- [Migrate from Supabase](https://neon.com/docs/import/migrate-from-supabase): Migrate data from Supabase to Neon Postgres with pg_dump and pg_restore
- [Migrate with pgcopydb](https://neon.com/docs/import/pgcopydb): Migrate data from another Postgres database using pgcopydb for parallel processing

## Use logical replication for near-zero downtime data migrations

Postgres logical replication in Neon provides an efficient way to migrate data from other Postgres providers with minimal downtime. By replicating data in real-time, this method allows you to transition your applications to Neon without interrupting your services. Please refer to our logical replication guides for instructions.
- [AlloyDB](https://neon.com/docs/guides/logical-replication-alloydb): Replicate data from AlloyDB to Neon
- [Cloud SQL](https://neon.com/docs/guides/logical-replication-cloud-sql): Replicate data from Cloud SQL to Neon
- [PostgreSQL to Neon](https://neon.com/docs/guides/logical-replication-postgres-to-neon): Replicate data from PostgreSQL to Neon
- [AWS RDS](https://neon.com/docs/guides/logical-replication-rds-to-neon): Replicate data from AWS RDS PostgreSQL to Neon
- [Supabase](https://neon.com/docs/guides/logical-replication-supabase-to-neon): Replicate data from Supabase to Neon
- [Azure PostgreSQL](https://neon.com/docs/import/migrate-from-azure-postgres): Replicate data from Azure PostgreSQL to Neon

---

# Source: https://neon.com/llms/import-migrate-mssql.txt

# Migrate from Microsoft SQL Server to Neon Postgres

> The document outlines the process for migrating databases from Microsoft SQL Server to Neon Postgres, detailing the necessary steps and tools required to facilitate a smooth transition within Neon's infrastructure.

## Source

- [Migrate from Microsoft SQL Server to Neon Postgres HTML](https://neon.com/docs/import/migrate-mssql): The original HTML version of this documentation

This guide describes how to migrate your database from a Microsoft SQL Server (MSSQL) database to Neon Postgres using [pgloader](https://pgloader.readthedocs.io/en/latest/intro.html).

The `pgloader` utility transforms data to a Postgres-compatible format as it reads from your MSSQL database. It uses the Postgres `COPY` protocol to stream the data into your Postgres database.

## Prerequisites

- An MSSQL instance containing the data you want to migrate. For this guide, we use `Azure SQL`, which is a managed cloud-based offering of Microsoft SQL Server. We set up an Azure SQL Database and populate it with the [Northwind sample dataset](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases/northwind-pubs). This dataset contains sales data corresponding to a fictional company that imports and exports food products, organized across multiple tables.
- A Neon project to move the data to. For detailed information on creating a Neon project, see [Create a project](https://neon.com/docs/manage/projects#create-a-project).
- Neon's Free plan supports 0.5 GB of data. If your data size is more than 0.5 GB, you'll need to upgrade to one of Neon's paid plans. See [Neon plans](https://neon.com/docs/introduction/plans) for more information.
- Review the [Pgloader MSSQL to Postgres Guide](https://pgloader.readthedocs.io/en/latest/ref/mssql.html). It will provide you with a good understanding of `pgloader` capabilities and how to configure your `pgloader` configuration file, if necessary.
- See [Pgloader configuration](https://neon.com/docs/import/migrate-mssql#pgloader-configuration) for a `pgloader` configuration file update that may be required to connect to MSSQL from `pgloader`.

## Prepare your MSSQL database

### Retrieve your MSSQL database credentials

Before starting the migration process, collect your MSSQL database credentials. If you are using Azure SQL, you can use the following steps to retrieve them:

1. Log into the Azure portal and navigate to your Azure SQL Database resource.
2. Navigate to the **Connection strings** tab under the `Settings` section and identify the connection string for your database. Make note of the following details:
   - Server
   - Database
   - User
   - Password (Not displayed in the Azure portal)

Keep the database connection details handy for later use.
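Later in this guide, `pgloader` expects these details assembled into a single source connection URI of the form `mssql://[user]:[password]@[server]:[port]/[database]`, where the bracketed values are placeholders for the details you just collected (for Azure SQL, the server is typically a `[server-name].database.windows.net` hostname, and the default port is `1433`). It can help to construct that URI now.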
### Allow inbound traffic from Neon

If you are using Azure SQL, you need to allow inbound traffic from your local machine, so `pgloader` can connect to your database. To do this, follow these steps:

1. Log into the Azure portal and navigate to your Azure SQL Server resource.
2. Click on the **Networking** option under the `Settings` section in the sidebar. Navigate to the **Firewall Rules** section under the `Public access` tab.
3. Click on the `Add your Client IPv4 address` option, which will automatically create a new rule with the IP address of your local machine. If you are running `pgloader` elsewhere, replace both the `Start IP` and `End IP` fields with the IP address of that machine.
4. Click `Save` at the bottom to make sure all changes are saved.

## Prepare your Neon destination database

This section describes how to prepare your destination Neon Postgres database to receive the migrated data.

### Create the Neon database

To maintain parity with the MSSQL deployment, you might want to create a new database in Neon with the same name. Refer to the [Create a database](https://neon.com/docs/manage/databases#create-a-database) guide for more information.

For this example, we will create a new database named `Northwind` in the Neon project. Use `psql` to connect to your Neon project (alternatively, you can use the `SQL Editor` in the Neon Console) and run the following query:

```sql
CREATE DATABASE "Northwind";
```

### Retrieve your Neon database connection string

Log in to the Neon Console. Find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Now, modify the connection string as follows to pass your **endpoint ID** (`ep-cool-darkness-123456` in this example) to Neon with your password using the `endpoint` keyword, as shown here:

```bash
postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

**Note**: Passing the `endpoint ID` with your password is a required workaround for some Postgres drivers, including the one used by `pgloader`. For more information about this workaround and why it's required, refer to our [connection workaround](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field) documentation.

Keep your Neon connection string handy for later use.

## Install pgloader

Here's how you can set up `pgloader` for your database migration:

1. Install the `pgloader` utility using your preferred installation method. Debian (apt), RPM package, and Docker methods are supported, as well as Homebrew for macOS (`brew install pgloader`). If your macOS has an ARM processor, use the Homebrew installation method. See [Installing pgloader](https://pgloader.readthedocs.io/en/latest/install.html) for Debian (apt), RPM package, and Docker installation instructions.
2. Create a `pgloader` configuration file (e.g., `mssql_to_neon.load`). Use your MSSQL database credentials to define the connection string for your database source. Use the Neon database connection string as the destination.
Example configuration in `mssql_to_neon.load`, using the `Northwind` database from this guide:

```plaintext
LOAD DATABASE
    FROM mssql://migration_user:password@host:port/Northwind
    INTO postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/Northwind?sslmode=require&channel_binding=require
```

Make sure to replace the connection string values with your own MSSQL and Neon credentials.

## Run the migration with pgloader

To initiate the migration process, run:

```shell
pgloader mssql_to_neon.load
```

The command output will show the progress of the migration, including any errors encountered and the total time taken. For our sample dataset, the output looks similar to this:

```plaintext
2024-09-12T10:46:54.307953Z LOG report summary reset
                table name     errors       read   imported      bytes      total time       read      write
--------------------------  ---------  ---------  ---------  ---------  --------------  ---------  ---------
           fetch meta data          0         65         65                      0.280s
            Create Schemas          0          0          0                      0.116s
          Create SQL Types          0          0          0                      0.232s
             Create tables          0         26         26                      9.120s
            Set Table OIDs          0         13         13                      0.120s
--------------------------  ---------  ---------  ---------  ---------  --------------  ---------  ---------
  dbo.customercustomerdemo          0          0          0                      1.300s     0.124s
            dbo.categories          0          8          8    64.4 kB          1.224s     0.144s     0.004s
             dbo.customers          0         91         91    11.3 kB          2.520s     0.140s
  dbo.customerdemographics          0          0          0                      2.152s     0.088s
             dbo.employees          0          9          9    76.0 kB          3.088s     0.136s     0.004s
   dbo.employeeterritories          0         49         49     0.4 kB          3.112s     0.096s
                dbo.orders          0        830        830   118.5 kB          3.656s     1.380s     0.060s
       dbo."Order Details"          0       2155       2155    44.0 kB          3.268s     1.372s     0.008s
                dbo.region          0          4          4     0.2 kB          2.832s     0.132s
              dbo.products          0         77         77     4.2 kB          2.660s     0.132s
             dbo.suppliers          0         29         29     3.9 kB          3.508s     0.120s
              dbo.shippers          0          3          3     0.1 kB          2.892s     0.104s
           dbo.territories          0         53         53     3.1 kB          3.568s     0.108s
--------------------------  ---------  ---------  ---------  ---------  --------------  ---------  ---------
   COPY Threads Completion          0          4          4                      5.576s
            Create Indexes          0         39         39                     14.252s
    Index Build Completion          0         39         39                      3.072s
           Reset Sequences          0          6          6                      1.500s
              Primary Keys          0         13         13                      5.024s
       Create Foreign Keys          0         13         13                      5.016s
           Create Triggers          0          0          0                      0.256s
          Install Comments          0          0          0                      0.000s
--------------------------  ---------  ---------  ---------  ---------  --------------  ---------  ---------
         Total import time          ✓       3308       3308   326.0 kB         34.696s

2024-09-12T10:46:54.339953Z INFO Stopping monitor
```

## Verify the migration

After the migration is complete, connect to your Neon database and run some queries to verify that the data has been transferred correctly. For example:

```sql
SELECT productname, unitprice, unitsinstock
FROM dbo.products
WHERE discontinued = false
ORDER BY unitprice DESC
LIMIT 5;
```

This query returns the following result:

```plaintext
       productname       | unitprice | unitsinstock
-------------------------+-----------+--------------
 Côte de Blaye           |     263.5 |           17
 Sir Rodney's Marmalade  |      81.0 |           40
 Carnarvon Tigers        |      62.5 |           42
 Raclette Courdavault    |      55.0 |           79
 Manjimup Dried Apples   |      53.0 |           20
(5 rows)
```

Compare the results with the same queries run on your MSSQL database to ensure data integrity.

## Clean up

After successfully migrating and verifying your data on Neon:

1. Consider backing up your MSSQL database before decommissioning it.
2. Update your application code to make SQL queries using the Postgres dialect.
3. Update your application's connection strings to point to your new Neon database.

## Other migration options

While this guide focuses on using `pgloader`, manual adjustments might still be needed to ensure:

- There are no unintended changes to the application behavior.
For example, not all MSSQL data types translate one-to-one to Postgres data types.
- The application code is compatible with Neon Postgres.

For complex migrations or when you need more control over the migration process, you might consider developing a custom Extract, Transform, Load (ETL) process using tools like Python with SQLAlchemy.

## Pgloader configuration

- `Pgloader` automatically detects table schemas, indexes, and constraints, but depending on the input table schemas, you might need to specify manual overrides in the configuration file. Refer to the [Command clauses](https://pgloader.readthedocs.io/en/latest/command.html#common-clauses) section of the `pgloader` documentation for more information.
- With Azure SQL database, `pgloader` often runs into connection errors. To solve them, you might need to manually specify the FreeTDS driver configuration (which `pgloader` uses to connect to MSSQL). Please refer to the related issues in the [PGLoader GitHub repository](https://github.com/dimitri/pgloader/) for more information. Below is the section required to make `pgloader` work, at the time of writing. Replace the values with your own Azure SQL database credentials.

  ```plaintext
  # /etc/freetds/freetds.conf
  ...
  [host-name]
          tds version = 7.4
          client charset = UTF-8
          encrypt = require
          host = ...
          port = 1433
          database = ...
  ```

## Reference

For more information on `pgloader` and database migration, refer to the following resources:

- [pgloader documentation - MSSQL to Postgres](https://pgloader.readthedocs.io/en/latest/ref/mssql.html)
- [Neon documentation](https://neon.com/docs/introduction)

---

# Source: https://neon.com/llms/import-migrate-mysql.txt

# Migrate from MySQL to Neon Postgres

> The document outlines the process for migrating databases from MySQL to Neon Postgres, detailing the necessary steps and tools required to facilitate a smooth transition within the Neon environment.

## Source

- [Migrate from MySQL to Neon Postgres HTML](https://neon.com/docs/import/migrate-mysql): The original HTML version of this documentation

This topic describes how to migrate your MySQL database to Neon Postgres using [pgloader](https://pgloader.readthedocs.io/en/latest/intro.html).

The `pgloader` utility transforms data to a Postgres-compatible format as it is read from your MySQL database. It uses the Postgres `COPY` protocol to stream the data into your Postgres database.

## Prerequisites

Before you begin, make sure that you have the following:

- A Neon account and a project. See [Sign up](https://neon.com/docs/get-started/signing-up).
- A properly named database. For example, if you are migrating a database named `sakila`, you might want to create a database of the same name in Neon. See [Create a database](https://neon.com/docs/manage/databases#create-a-database) for instructions.
- Neon's Free plan supports 0.5 GB of data. If your data size is more than 0.5 GB, you'll need to upgrade to one of Neon's paid plans. See [Neon plans](https://neon.com/docs/introduction/plans) for more information.

Also, a close review of the [Pgloader MySQL to Postgres Guide](https://pgloader.readthedocs.io/en/latest/ref/mysql.html) is recommended before you start. This guide will provide you with a good understanding of `pgloader` capabilities and how to configure your `pgloader` configuration file, if necessary.

## Retrieve your MySQL database credentials

Before starting the migration process, collect your MySQL database credentials:

1. Log into your MySQL database provider.
2. Identify and record the following details, or grab your MySQL database connection string:

   - Hostname or IP address
   - Database name
   - Username
   - Password

Keep your MySQL database connection details handy for later use.

## Retrieve your Neon database connection string

Log in to the Neon Console. Find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

Now, modify the connection string as follows to pass your **endpoint ID** (`ep-cool-darkness-123456` in this example) to Neon with your password using the `endpoint` keyword, as shown here:

```bash
postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

**Note**: Passing the `endpoint ID` with your password is a required workaround for some Postgres drivers, including the one used by `pgloader`. For more information about this workaround and why it's required, refer to our [connection workaround](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field) documentation.

Keep your Neon connection string handy for later use.

## Install pgloader

Here's how you can set up `pgloader` for your database migration:

1. Install the `pgloader` utility using your preferred installation method. Debian (apt), RPM package, and Docker methods are supported, as well as Homebrew for macOS (`brew install pgloader`). If your macOS machine has an ARM processor, use the Homebrew installation method. See [Installing pgloader](https://pgloader.readthedocs.io/en/latest/install.html) for Debian (apt), RPM package, and Docker installation instructions.
2. Create a `pgloader` configuration file (e.g., `config.load`). Use your MySQL database credentials to define the connection string for your database source. Use the Neon database connection string you retrieved and modified in the previous step as the destination.

   **Note**: If you need to specify an SSL mode in your connection string, the following format is recommended: `sslmode=require`. Other formats may not work.
Example configuration in `config.load`:

```plaintext
load database
  from mysql://user:password@host/source_db?sslmode=require
  into postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require;
```

## Run the migration with pgloader

To initiate the migration process, run:

```shell
pgloader config.load
```

The command output will look similar to this:

```bash
LOG report summary reset
               table name     errors       rows      bytes      total time
-------------------------  ---------  ---------  ---------  --------------
          fetch meta data          0          2                      0.727s
           Create Schemas          0          0                      0.346s
         Create SQL Types          0          0                      0.178s
            Create tables          0          2                      0.551s
           Set Table OIDs          0          1                      0.094s
-------------------------  ---------  ---------  ---------  --------------
         "db-test".dbname          0          1     0.0 kB           0.900s
-------------------------  ---------  ---------  ---------  --------------
  COPY Threads Completion          0          4                      0.905s
   Index Build Completion          0          1                      0.960s
           Create Indexes          0          1                      0.257s
          Reset Sequences          0          0                      1.083s
             Primary Keys          0          1                      0.263s
      Create Foreign Keys          0          0                      0.000s
          Create Triggers          0          0                      0.169s
          Set Search Path          0          1                      0.427s
         Install Comments          0          0                      0.000s
-------------------------  ---------  ---------  ---------  --------------
        Total import time          ✓          1     0.0 kB           4.064s
```

## SSL verify error

If you encounter an `SSL verify error: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY` error while attempting the instructions described above using `pgloader` from a Docker container, try the solution identified in this [GitHub issue](https://github.com/dimitri/pgloader/issues/768#issuecomment-693390290), which involves specifying `sslmode=allow` in the Postgres connection string and using the `--no-ssl-cert-verification` option with `pgloader`.

The following configuration file and Docker command were verified to work with Docker on Windows but may apply generally when using `pgloader` in a Docker container. In your `pgloader` config file, replace the MySQL and Postgres connection string values with your own. In the Docker command, specify the path to your `pgloader` config file, and replace the container ID value (the long alphanumeric string) with your own.

`pgloader` config.load file:

```plaintext
load database
  from mysql://user:password@host/source_db?sslmode=require
  into postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=allow;
```

Docker command:

```plaintext
docker run -v C:\path\to\config.load:/config.load d183dc100d3af5e703bd867b3b7826c117fa16b7ee2cd360af591dc895b121dc pgloader --no-ssl-cert-verification /config.load
```

## References

- [Installing pgloader](https://pgloader.readthedocs.io/en/latest/install.html)
- [Pgloader Tutorial: Migrating from MySQL to PostgreSQL](https://pgloader.readthedocs.io/en/latest/tutorial/tutorial.html#migrating-from-mysql-to-postgresql)
- [Pgloader MySQL to Postgres Guide](https://pgloader.readthedocs.io/en/latest/ref/mysql.html)
- [How to Migrate from MySQL to PostgreSQL RDBMS: An Enterprise Approach](https://jfrog.com/community/data-science/how-to-migrate-from-mysql-to-postgresql-rdbms-an-enterprise-approach/)

---

# Source: https://neon.com/llms/import-migrate-schema-only.txt

# Migrate a database schema

> The document outlines the process for migrating a database schema to Neon, detailing steps for exporting the schema from an existing database and importing it into a Neon database instance.
## Source

- [Migrate a database schema HTML](https://neon.com/docs/import/migrate-schema-only): The original HTML version of this documentation

This topic shows how to perform a schema-only migration using the `pg_dump` and `pg_restore` Postgres utilities.

A schema-only migration may be necessary in certain scenarios. For example, when replicating data between two Postgres instances, the tables defined in your publication on the source database must also exist in the destination database, and they must have the same table names and columns. A schema dump and reload in this case may be faster than trying to manually create the required schema on the destination database.

## Dump the schema

To dump only the schema from a database, you can run a `pg_dump` command similar to the following to create an `.sql` dump file with the schema only:

```shell
pg_dump --schema-only \
  --no-privileges \
  "postgresql://role:password@hostname:5432/dbname" \
  > schema_dump.sql
```

- With the `--schema-only` option, only object definitions are dumped. Data is excluded.
- The `--no-privileges` option prevents dumping privileges. Neon may not support the privileges you've defined elsewhere, or, if you're dumping a schema from Neon, there may be Neon-specific privileges that cannot be restored to another database.

**Tip**: When you're dumping or restoring on Neon, you can input your Neon connection string in place of `postgresql://role:password@hostname:5432/dbname`. You can find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**.

## Review and modify the dumped schema

After dumping a schema to an `.sql` file, review it for statements that you don't want to replicate or that won't be supported on your destination database, and comment them out. For example, when dumping a schema from AlloyDB, you might see statements like the ones shown below, which you can comment out if you're loading the schema into Neon, where they won't be supported. Generally, you should remove any parameters configured on another Postgres provider and rely on Neon's default Postgres settings.

If you are replicating a large dataset, also consider removing any `CREATE INDEX` statements from the resulting dump file to avoid creating indexes when loading the schema on the destination database (the subscriber). Taking indexes out of the equation can substantially reduce the time required for the initial data load performed when starting logical replication. Save the `CREATE INDEX` statements that you remove. You can add the indexes back after the initial data copy is completed.

**Note**: To comment out a single line, you can use `--` at the beginning of the line.
```sql
-- SET statement_timeout = 0;
-- SET lock_timeout = 0;
-- SET idle_in_transaction_session_timeout = 0;
-- SET client_encoding = 'UTF8';
-- SET standard_conforming_strings = on;
-- SELECT pg_catalog.set_config('search_path', '', false);
-- SET check_function_bodies = false;
-- SET xmloption = content;
-- SET client_min_messages = warning;
-- SET row_security = off;
-- ALTER SCHEMA public OWNER TO alloydbsuperuser;
-- CREATE EXTENSION IF NOT EXISTS google_columnar_engine WITH SCHEMA public;
-- CREATE EXTENSION IF NOT EXISTS google_db_advisor WITH SCHEMA public;
```

## Load the schema

After making any necessary modifications, load the dumped schema using `psql`:

```shell
psql \
  "postgresql://role:password@hostname:5432/dbname" \
  < schema_dump.sql
```

After you've loaded the schema, you can view the result with this `psql` command:

```sql
\dt
```

---

# Source: https://neon.com/llms/import-migrate-sqlite.txt

# Migrate from SQLite to Neon Postgres

> The document outlines the process for migrating a database from SQLite to Neon Postgres, detailing steps for exporting SQLite data and importing it into a Neon Postgres instance.

## Source

- [Migrate from SQLite to Neon Postgres HTML](https://neon.com/docs/import/migrate-sqlite): The original HTML version of this documentation

This guide describes how to migrate your SQLite database to Neon Postgres using [pgloader](https://pgloader.readthedocs.io/en/latest/intro.html). `pgloader` is an open-source data loading and migration tool that efficiently transfers data from various sources (like CSV, MySQL, SQLite, MS SQL, etc.) into Postgres, handling schema and data transformations on the fly. We'll use it to migrate a sample SQLite database to Neon Postgres.

## Prerequisites

Before you begin, ensure you have the following:

- A Neon account and a project. If you don't have one, see [Sign up](https://neon.com/docs/get-started/signing-up).
- A database created in your Neon project. For instructions, see [Create a database](https://neon.com/docs/manage/databases#create-a-database).
- The file path to your source SQLite database file. If you don't have one, you can create a sample database in the next step.
- Neon's Free plan supports 0.5 GB of data. If your data size is more than 0.5 GB, you'll need to upgrade to one of Neon's paid plans. See [Neon plans](https://neon.com/docs/introduction/plans) for more information.

A review of the [pgloader SQLite to Postgres Guide](https://pgloader.readthedocs.io/en/latest/ref/sqlite.html) is also recommended. It provides a comprehensive overview of `pgloader`'s capabilities.

## Understanding SQLite and Postgres data types

Before migrating from SQLite to Postgres, it's helpful to understand a key difference in how they handle data types:

- **SQLite** uses a flexible typing system called "type affinity". You can store any type of data in any column, regardless of its declared type. For example, you can store the text "hello" in a column declared as `INTEGER`. The declared type is only a suggestion.
- **Postgres** uses a strict, static typing system. Data inserted into a column must precisely match the column's declared data type. An attempt to store "hello" in an `INTEGER` column will result in an error.

When converting a database, SQLite's type affinities are mapped to appropriate Postgres types, as illustrated in the sketch below.
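To see the difference concretely, here is a minimal sketch; the table name `demo` is made up for illustration. The same two statements behave differently in each database:

```sql
-- In SQLite, this succeeds: 'hello' cannot be converted to an integer,
-- so it is stored as TEXT despite the INTEGER declaration.
-- In Postgres, the INSERT fails with: invalid input syntax for type integer
CREATE TABLE demo (n INTEGER);
INSERT INTO demo (n) VALUES ('hello');
```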
Here is a summary of the common mappings:

| Data Type Category | SQLite | PostgreSQL | Key Differences & Notes |
| :--- | :--- | :--- | :--- |
| **Integer** | `INTEGER` | `SMALLINT` (2 bytes), `INTEGER` (4 bytes), `BIGINT` (8 bytes) | SQLite's `INTEGER` is a flexible-size signed integer, storing values in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value. PostgreSQL offers fixed-size integers for more granular control over storage and performance. |
| **Auto-incrementing Integer** | `INTEGER PRIMARY KEY` | `SMALLSERIAL` (2 bytes), `SERIAL` (4 bytes), `BIGSERIAL` (8 bytes) | In SQLite, declaring a column as `INTEGER PRIMARY KEY` automatically makes it an alias for the `rowid` and thus auto-incrementing. PostgreSQL provides the `SERIAL` pseudo-types, which create a sequence object to generate unique identifiers. |
| **Floating-Point** | `REAL` | `REAL` (4 bytes), `DOUBLE PRECISION` (8 bytes) | SQLite's `REAL` is an 8-byte IEEE floating-point number. PostgreSQL offers both single-precision (`REAL`) and double-precision (`DOUBLE PRECISION`) floating-point numbers. |
| **Arbitrary Precision Numeric** | `NUMERIC` | `NUMERIC(precision, scale)`, `DECIMAL(precision, scale)` | SQLite's `NUMERIC` affinity will attempt to store data as `INTEGER` or `REAL`, or as `TEXT` if it cannot be losslessly converted. PostgreSQL's `NUMERIC` and `DECIMAL` types are for exact decimal arithmetic, crucial for financial and scientific applications, allowing for user-defined precision and scale. |
| **String** | `TEXT`, `VARCHAR(n)`, `CHAR(n)` | `TEXT`, `VARCHAR(n)`, `CHAR(n)` | While both databases accept these type names, in SQLite, they all have a `TEXT` affinity. The length `(n)` is not enforced in SQLite. In PostgreSQL, `VARCHAR(n)` enforces a maximum length, and `CHAR(n)` is a fixed-length, blank-padded string. `TEXT` in PostgreSQL has no predefined length limit. |
| **Binary Data** | `BLOB` | `BYTEA` | Both are used for storing raw binary data. |
| **Date & Time** | `TEXT`, `REAL`, `INTEGER` | `DATE`, `TIME`, `TIMESTAMP`, `TIMESTAMPTZ` (with time zone), `INTERVAL` | SQLite has no dedicated date/time storage class; they are typically stored as `TEXT` (ISO-8601 strings), `REAL` (Julian day numbers), or `INTEGER` (Unix timestamps). PostgreSQL provides a rich set of specific date and time types with built-in functions for complex date and time arithmetic and time zone handling. |
| **Boolean** | `INTEGER` (0 for false, 1 for true) | `BOOLEAN` | SQLite does not have a native boolean type and commonly uses `INTEGER` with values 0 and 1. PostgreSQL has a dedicated `BOOLEAN` type that stores `true` or `false`. |
| **JSON** | `TEXT` | `JSON`, `JSONB` | In SQLite, JSON data is stored as `TEXT`. PostgreSQL offers two dedicated JSON types: `JSON` for storing the raw JSON text and `JSONB` for a decomposed binary format that is more efficient for indexing and querying. |
| **Unique Identifier** | - | `UUID` | PostgreSQL has a dedicated `UUID` data type for storing Universally Unique Identifiers, which is not present in SQLite. |
| **Array** | - | `data_type[]` | PostgreSQL supports arrays of any built-in or user-defined data type, a powerful feature for storing lists of values in a single column. SQLite does not have a native array type. |

## Create a sample SQLite database (Optional)

If you don't have a database to migrate, you can create a sample database for this tutorial. This requires the `sqlite3` command-line tool, typically pre-installed on macOS and Linux.

1. Create a file named `seed.sql`. This schema defines `authors` and `books` tables, including a `published_date` column stored as `TEXT` to demonstrate type casting.

   ```sql
   -- Create the authors table
   CREATE TABLE authors (
       id INTEGER PRIMARY KEY,
       name TEXT NOT NULL,
       bio TEXT
   );

   -- Create the books table
   CREATE TABLE books (
       id INTEGER PRIMARY KEY,
       author_id INTEGER NOT NULL,
       title TEXT NOT NULL,
       published_date TEXT,
       rating REAL,
       FOREIGN KEY (author_id) REFERENCES authors (id)
   );

   -- Insert sample data
   INSERT INTO authors (id, name, bio) VALUES
   (1, 'George Orwell', 'Author of dystopian classics.'),
   (2, 'J.R.R. Tolkien', 'Author of high-fantasy epics.'),
   (3, 'Jane Austen', 'Renowned for her romantic fiction.');

   INSERT INTO books (author_id, title, published_date, rating) VALUES
   (1, '1984', '1949-06-08', 4.8),
   (1, 'Animal Farm', '1945-08-17', 4.5),
   (2, 'The Hobbit', '1937-09-21', 4.9),
   (2, 'The Lord of the Rings', '1954-07-29', 5.0),
   (3, 'Pride and Prejudice', '1813-01-28', 4.7);
   ```

2. Create the SQLite database `sample_library.db` from the schema file:

   ```shell
   sqlite3 sample_library.db < seed.sql
   ```

You now have a `sample_library.db` file ready for migration.

**Note**: Using Turso? If you're using Turso, you can dump your database to a SQL file using the [Turso CLI](https://docs.turso.tech/cli/introduction) and then follow the rest of this guide:

```shell
turso db shell your-database-name .dump > seed.sql

# Generate a SQLite database file from the SQL dump
sqlite3 sample_library.db < seed.sql
```

For more details on database dumps, see the [Turso CLI documentation](https://docs.turso.tech/cli/db/shell#database-dump).

Now that you have your Neon database and SQLite database ready, you can use `pgloader` to migrate the data. Follow these steps:

## Retrieve your Neon database connection string

Log in to the Neon Console. Find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. It should look similar to this:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
```

**Important**: You will need to remove `&channel_binding=require` from the connection string if it is present, as `pgloader` does not support channel binding and throws an error when it is present.

Now, modify this connection string to pass your **endpoint ID** (`ep-cool-darkness-123456` in this example) to Neon with your password using the `endpoint` keyword, as shown here:

```bash
postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require
```

**Note**: Passing the `endpoint ID` with your password is a required workaround for some Postgres drivers, including the one used by `pgloader`. For more information, see [Connect with an endpoint ID](https://neon.com/docs/connect/connection-errors#d-specify-the-endpoint-id-in-the-password-field).

Keep your modified Neon connection string handy.
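One quick way to sanity-check the modified connection string before starting the migration (assuming you have `psql` installed) is to run a trivial query with it:

```shell
# Confirm the modified connection string works before handing it to pgloader
psql "postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require" -c "SELECT 1;"
```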
## Install pgloader Install the `pgloader` utility using your preferred method: - For **macOS** with Homebrew: `brew install pgloader` - For **Debian/Ubuntu**: `sudo apt-get install pgloader` - For **Docker**: Pull the latest image with `docker pull dimitri/pgloader:latest` For other systems, see [Installing pgloader](https://pgloader.readthedocs.io/en/latest/install.html). ## Run a simple migration For a basic migration, you can run `pgloader` directly from the command line. This command uses `pgloader`'s default settings to migrate the `sample_library.db` schema and data. ```shell pgloader sqlite://sample_library.db "postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require" ``` > Make sure to enclose the Postgres connection string in quotes to prevent shell interpretation issues. The command output will look similar to this: ```bash 2025-06-27T08:26:19.941000Z LOG report summary reset table name errors rows bytes total time ----------------------- --------- --------- --------- -------------- fetch 0 0 0.000s fetch meta data 0 5 0.204s Create Schemas 0 0 0.108s Create SQL Types 0 0 0.222s Create tables 0 4 1.307s Set Table OIDs 0 2 0.121s ----------------------- --------- --------- --------- -------------- authors 0 3 0.1 kB 1.082s books 0 5 0.2 kB 0.993s ----------------------- --------- --------- --------- -------------- COPY Threads Completion 0 4 1.080s Index Build Completion 0 2 2.342s Create Indexes 0 2 0.662s Reset Sequences 0 0 1.297s Primary Keys 0 2 0.650s Create Foreign Keys 0 1 0.339s Create Triggers 0 0 0.211s Install Comments 0 0 0.000s ----------------------- --------- --------- --------- -------------- Total import time ✓ 8 0.3 kB 6.581s ``` This is quick, but it will create primary key columns as `bigint` rather than `serial`, and the `published_date` column will remain `text`. This is expected behavior, as `pgloader` uses SQLite's type affinities directly. ## Advanced migration with custom casting For fine-grained control, a `pgloader` load file is the best approach. Here, we'll create a load file that uses the `CAST` clause to: 1. Convert `INTEGER PRIMARY KEY` columns to `SERIAL`. This makes the Postgres schema cleaner and more idiomatic. 2. Cast the `TEXT` `published_date` column to the native `DATE` type in Postgres. Create a file named `sqlite_advanced.load` with the following content. Replace the Neon connection string and file path if necessary. ```sql LOAD DATABASE FROM sqlite://sample_library.db INTO postgresql://alex:endpoint=ep-cool-darkness-123456;AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require WITH include drop, create tables, create indexes, reset sequences, downcase identifiers CAST -- Cast specific primary key columns to SERIAL for auto-incrementing column authors.id to serial, column books.id to serial, -- Cast text column to date; pgloader handles ISO 8601 format ('YYYY-MM-DD') automatically column books.published_date to date; ``` Now, run the migration using this advanced load file: ```shell pgloader sqlite_advanced.load ``` The migration will now produce a more refined Postgres schema, with `SERIAL` primary keys and a proper `DATE` column. ## Post-migration verification After migrating, always verify your data. One critical area is auto-incrementing primary keys. ### Verify sequences The `reset sequences` option in the load file ensures that auto-incrementing columns start from the correct value. You can verify this manually. 
Connect to your Neon database using [`psql`](https://neon.com/docs/connect/query-with-psql-editor) or [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) and check the next value for the `books` table's sequence: ```sql SELECT nextval(pg_get_serial_sequence('books', 'id')); ``` This should return a value one higher than the max `id` in the `books` table (e.g., `6` for our sample data). If it doesn't, you can reset it manually with this command: ```sql SELECT setval( pg_get_serial_sequence('books', 'id'), (SELECT MAX(id) FROM books) + 1 ); ``` ## Troubleshooting ### SSL verify error with Docker If you run `pgloader` from a Docker container and encounter an `SSL verify error: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY`, you may need to disable SSL certificate verification. Modify your load file to set `sslmode=allow` in the Postgres connection string. ```sql LOAD DATABASE FROM sqlite:////data/sample_library.db INTO postgresql://.../dbname?sslmode=allow; ... ``` Then, run the Docker command with the `--no-ssl-cert-verification` flag. Mount your database and load files into the container's `/data` directory. ```shell docker run --rm -v /path/to/your/files:/data dimitri/pgloader:latest pgloader --no-ssl-cert-verification /data/sqlite_advanced.load ``` ## References - [pgloader Documentation](https://pgloader.readthedocs.io/en/latest/) - [pgloader Reference: SQLite to Postgres](https://pgloader.readthedocs.io/en/latest/ref/sqlite.html) - [pgloader CLI Reference](https://pgloader.readthedocs.io/en/latest/pgloader.html) --- # Source: https://neon.com/llms/import-pgcopydb.txt # Migrate data to Neon Postgres using pgcopydb > The document outlines the process of migrating data to Neon Postgres using the `pgcopydb` tool, detailing steps for setting up the environment, executing the data transfer, and verifying the migration. ## Source - [Migrate data to Neon Postgres using pgcopydb HTML](https://neon.com/docs/import/pgcopydb): The original HTML version of this documentation What you will learn: - Why use pgcopydb - Setting up environment variables for migration - Monitoring the migration process - Advanced usage options Repo: - [pgcopydb GitHub repository](https://github.com/dimitri/pgcopydb) Related docs: - [pgcopydb documentation](https://pgcopydb.readthedocs.io/) `pgcopydb` is an open-source tool for copying Postgres databases from one server to another. It's a practical option for migrating larger Postgres databases into Neon. ## Why use pgcopydb for data migration? `pgcopydb` builds on standard `pg_dump` and `pg_restore` but with extra features to make migrations both faster and more reliable: - **Parallel migration**: `pgcopydb` processes multiple migration phases concurrently: - **Data transfer:** Streams data in parallel from multiple tables and splits large tables into chunks. This distributes the load and reduces migration time for large datasets. - **Index creation:** Builds indexes concurrently after data loading - **Constraint application:** Applies constraints in parallel while maintaining data integrity This parallel processing reduces migration time and minimizes downtime. 
- **Dependency handling**: `pgcopydb` manages database object dependencies and migrates them in the correct order:
  - **Schema-first approach:** Creates schema objects (tables, functions, procedures) before data transfer begins
  - **Table copying precedes indexes and constraints:** Copies table data first, then creates indexes and applies constraints

This ordered approach maintains data integrity and avoids errors during migration.

This guide walks you through using `pgcopydb` to migrate data to Neon.

**Note**: Logical replication with `pgcopydb clone --follow` is not supported on Neon. You can still use `pgcopydb` for a one-time data migration to Neon.

## Prerequisites

Before you begin, ensure you have the following:

- **Source Postgres database**: You need access to the Postgres database you intend to migrate. This can be a local instance, a cloud-hosted database (AWS RDS, GCP Cloud SQL, Azure Database for Postgres, or any other Postgres provider), or even a different Neon project.
- **Neon project**: You must have an active Neon project and a database ready to receive the migrated data. If you don't have a Neon project yet, see [Create a Neon project](https://neon.com/docs/manage/projects#create-a-project) to get started. Note that storage beyond your plan's included amount will incur additional charges.
- **pgcopydb installation**: `pgcopydb` must be installed on a machine that has network connectivity to both your source Postgres database and your Neon database. Check firewall rules and network configurations to allow traffic on the Postgres port. This machine should also have sufficient resources (CPU, memory, disk space) to handle the migration workload. Install `pgcopydb` by following the instructions in the [pgcopydb documentation](https://pgcopydb.readthedocs.io/en/latest/install.html).

## Set up environment variables

Before proceeding, set the following environment variables for your source and target Postgres databases where you will run `pgcopydb` commands:

```bash
export PGCOPYDB_SOURCE_PGURI="postgresql://source_user:source_password@source_host:source_port/source_db"
export PGCOPYDB_TARGET_PGURI="postgresql://neon_user:neon_user_password@xxxx.neon.tech/neondb?sslmode=require&channel_binding=require"
```

Replace the placeholders with your actual connection details. You can get Neon database connection details from the Neon Console. `pgcopydb` will automatically use these environment variables for the migration.

## Start data migration

Run the `pgcopydb clone` command with the `--no-owner` flag to skip ownership changes:

```bash
pgcopydb clone --no-owner
```

**Tip**: When using the `--no-owner` flag in `pgcopydb`, consider pairing it with `--no-acl`, especially if the source has custom ACLs or default privileges. The `--no-acl` flag skips restoring permissions (`GRANT`/`REVOKE`, `ALTER DEFAULT PRIVILEGES`). This is crucial because the user connecting to the target database often lacks the high-level rights to reapply all source permissions. For example, even when migrating between Neon databases, the target user might get "permission denied" errors when trying to restore privileges involving administrative roles (like `cloud_admin`, `neon_superuser`), as they may lack permission to manage settings for those specific roles. This typically halts `pgcopydb` during the `pg_restore` phase. Using `--no-acl` avoids these specific permission errors and allows the migration to proceed smoothly. However, this means that any custom permissions set on the source database won't be replicated in the target database. You may need to manually set them up afterward.
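Putting the tip into practice, a typical invocation when cloning into Neon combines both flags (both are documented `pgcopydb` options):

```bash
# Skip both ownership changes and ACL restoration when cloning into Neon
pgcopydb clone --no-owner --no-acl
```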
## Monitor the migration progress

You can monitor the migration progress using the `pgcopydb list progress` command. This command provides real-time updates on the migration status, including the number of rows copied and the current phase. You can either set the `--source` flag to your source database connection string or make use of the `PGCOPYDB_SOURCE_PGURI` environment variable.

```bash
pgcopydb list progress --source "your-source-connection-string" --summary
```

After successful completion, you will see a summary of the migration steps and their durations, similar to the following:

```text
   Step                                               Connection   Duration   Transfer   Concurrency
 --------------------------------------------------  -----------  ---------  ---------  ------------
   Catalog Queries (table ordering, filtering, etc)   source       3s775                           1
   Dump Schema                                        source       432ms                           1
   Prepare Schema                                     target       26s                             1
   COPY, INDEX, CONSTRAINTS, VACUUM (wall clock)      both         31s                            12
   COPY (cumulative)                                  both         23s        73 MB                4
   CREATE INDEX (cumulative)                          target       533ms                           4
   CONSTRAINTS (cumulative)                           target       244ms                           4
   VACUUM (cumulative)                                target       3s009                           4
   Reset Sequences                                    both         2s223                           1
   Large Objects (cumulative)                         (null)       0ms                             0
   Finalize Schema                                    both         18s                             4
 --------------------------------------------------  -----------  ---------  ---------  ------------
   Total Wall Clock Duration                          both         1m17s                          20
```

## Switch over your application to Neon

Switch your application to Neon and validate the migration after `pgcopydb clone` completes.

1. **Stop writes to source database**: Halt write operations to your source database.
2. **Validate migration**: Use `pgcopydb compare schema` and `pgcopydb compare data` for validation.
3. **Update application connection string**: Point your application to your Neon database.

## Advanced usage

`pgcopydb` offers several advanced options to optimize and customize your migration. Here are some key considerations:

### Boosting migration speed with parallelism

`--table-jobs <count>` & `--index-jobs <count>`: These options control the number of concurrent jobs for copying tables and creating indexes, respectively. For large databases, increasing these values is crucial for reducing migration time.

### Handling large tables efficiently

`--split-tables-larger-than <size>`: Automatically splits tables exceeding the specified size into smaller chunks for parallel import, dramatically accelerating migration of large datasets. Start with `1GB` or `500MB` and adjust based on your table sizes.

**Example:**

```bash
pgcopydb clone --table-jobs 8 --index-jobs 12 --split-tables-larger-than 500MB
```

This command will run the migration with **8** concurrent table jobs, **12** concurrent index jobs, and split tables larger than **500 MB** into smaller chunks for parallel import. For more detail, see [Same-table Concurrency](https://pgcopydb.readthedocs.io/en/latest/concurrency.html#same-table-concurrency) in the `pgcopydb` docs.

### Filtering and selective migration

`--filters <filename>`: Sometimes you only need to migrate a subset of your database. The `--filters` option lets you precisely control which tables, indexes, or schemas are included in the migration. This is useful for selective migrations or excluding unnecessary data. For filter configuration and examples, see the [pgcopydb filtering documentation](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_config.html#filtering), and the sketch below.
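As a rough illustration, a filters file is an INI-style document. The sketch below assumes a hypothetical `migration-filters.ini` that restricts the clone to two tables; check the filtering documentation linked above for the exact section names supported by your `pgcopydb` version:

```ini
; migration-filters.ini — hypothetical example restricting the clone to two tables
[include-only-table]
public.orders
public.customers
```

You would then pass it to the clone command:

```bash
pgcopydb clone --no-owner --no-acl --filters ./migration-filters.ini
```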
--- # Source: https://neon.com/llms/introduction-about-billing.txt # Plans and billing > The "Plans and billing" document outlines Neon's pricing structure, detailing available subscription plans, billing cycles, and payment methods to assist users in managing their account expenses effectively. ## Source - [Plans and billing HTML](https://neon.com/docs/introduction/about-billing): The original HTML version of this documentation ## Neon plans - [Plans](https://neon.com/docs/introduction/plans): Learn about Neon's usage-based pricing plans and what's included - [Legacy plans](https://neon.com/docs/introduction/legacy-plans): A reference for users currently on these plans — not available for new signups ## Manage billing - [Manage billing](https://neon.com/docs/introduction/manage-billing): View and manage your monthly bill and learn how to change your plan - [Monitor billing and usage](https://neon.com/docs/introduction/monitor-usage): Learn how to monitor billing and usage in Neon - [Cost optimization](https://neon.com/docs/introduction/cost-optimization): Strategies to manage and reduce your Neon costs across compute, storage, and data transfer - [AWS Marketplace](https://neon.com/docs/introduction/billing-aws-marketplace): Find out how you can pay for Neon with your AWS Billing account - [Azure Marketplace](https://neon.com/docs/introduction/billing-azure-marketplace): Neon as an Azure Native Service with billing through Azure Marketplace ## Neon for Enterprise - [Neon for the Enterprise](https://neon.com/enterprise): Find out how Enterprises are maximizing engineering efficiency with Neon - [Neon Enterprise Sales Process](https://neon.com/docs/introduction/enterprise-sales-process): Learn about Neon's Enterprise sales process and what to expect --- # Source: https://neon.com/llms/introduction-agent-plan.txt # Agent plan structure and pricing > The document outlines the structure and pricing of Neon's agent plans, detailing various tiers and associated costs to help users select the appropriate plan for their needs. ## Source - [Agent plan structure and pricing HTML](https://neon.com/docs/introduction/agent-plan): The original HTML version of this documentation What you will learn: - How the agent plan is organized - How the agent plan works - How to use the Neon API to manage the plan Related topics: - [Neon platform integration](https://neon.com/docs/guides/platform-integration-intro) - [Neon database versioning](https://neon.com/docs/ai/ai-database-versioning) - [Neon agent program](https://neon.com/programs/agents) ## Overview The Neon agent plan provides infrastructure for platforms that deploy Postgres databases on behalf of end users. The plan uses a two-organization structure to separate free and paid user tiers, with each organization supporting up to 30,000 projects by default. ## Enrollment requirements To join the agent plan: - You must have an active Neon paid plan with a credit card on file - Your application requires approval from the Neon team - The Neon team handles all organization setup and configuration Initial enrollment is not self-service. Once approved, the Neon team configures both organizations and grants you admin access. After setup, you manage all projects and configurations independently via the [Neon API](https://neon.com/docs/reference/api-reference). ## Organization structure Neon creates two organizations in your account: ### Sponsored organization The sponsored organization hosts databases for your free-tier users at no cost to you. 
This organization includes the Scale plan features, but individual projects have resource limits similar to Neon's standard free tier. You are not charged for usage in this organization. Use this for users who haven't upgraded to your platform's paid plans. For an overview of Free plan limits and Scale plan features, see [Neon plans](https://neon.com/docs/introduction/plans).

### Paid organization

The paid organization hosts databases for your paying users. This organization includes Scale plan features but with agent-specific pricing. Neon provides $25,000 in initial credits to cover usage charges. Compute is billed at $0.106 per hour (lower than standard Scale pricing).

You can create your own internal tier structure within this organization, configuring different resource quotas for different user segments. Use this organization for users on your paid plans who need resource flexibility.

## Managing projects

After initial enrollment, you have full control over both organizations as admin. Each organization supports 30,000 projects. All project operations are performed through the Neon API. You can:

- Create and delete projects in either organization
- Set per-project resource quotas
- Monitor usage across all projects
- Manage billing limits

This enables fleet management at scale without manual intervention. You can request limit increases through your Neon contact when you approach capacity.

### Project transfers between organizations

With the sponsored and paid organization structure of the agent plan, you can move user projects between organizations when they upgrade or downgrade tiers. Transferring projects between organizations requires a personal API key with access to both organizations. You can transfer up to 400 projects per request. See [transfer projects between organizations](https://neon.com/docs/manage/orgs-project-transfer) for details.

## Pricing

The agent plan uses usage-based pricing with higher rate limits and dedicated support:

| Resource | Agent plan |
| --- | --- |
| Projects | **Custom limits available** _Agents create a new project for each user application._ |
| Branches per Project | **Custom limits available** _Agents use branches to quickly toggle between application states._ |
| Compute | **$0.106 per CU-hour** _Same as Launch_ |
| Storage | **$0.35 per GB-month** _Same as Launch/Scale_ |
| Instant Restore (PITR) | **$0.2 per GB-month** _Same as Launch/Scale_ |
| Management API | **Higher Rate Limits Available** _API for instant provisioning and management of databases_ |
| Data API (PostgREST-compatible) | **Higher Rate Limits Available** |
| Support | **Shared Slack Channel** |

## Billing model

The paid organization receives $25,000 in initial credits that cover compute ($0.106/hour), storage, and data transfer charges. Usage is tracked per project, and the API exposes consumption metrics for building usage-based billing into your platform. The sponsored (free) organization has no billing charges.

### Consumption metrics

Track compute time, storage, and network I/O per project to monitor usage and build billing logic. See the [consumption metrics guide](https://neon.com/docs/guides/consumption-metrics) for details.
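Since all fleet operations go through the Neon API, here is a minimal sketch of creating a project inside one of your organizations. The `org_id` value and project name are placeholders, and the exact request body options are described in the [Neon API reference](https://neon.com/docs/reference/api-reference):

```bash
# Create a project for a paying user inside the paid organization
# (org_id and project name below are placeholder values)
curl -X POST "https://console.neon.tech/api/v2/projects" \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"project": {"name": "user-app-42", "org_id": "org-example-12345678"}}'
```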
## Program benefits

The agent plan includes these benefits for participating platforms:

| Benefit | Description |
| --- | --- |
| **Your Free Tier is free** | Neon sponsors up to 30,000 projects per month used in your sponsored tier. |
| **General use credits** | Up to $25,000 in credits for those not eligible for the [Startup Program](https://neon.com/startups). |
| **Co-Marketing** | Blog and social promotions, hackathons, and more. |

## Getting started

Once enrolled in the agent plan:

1. You'll receive admin access to both organizations (sponsored and paid)
2. Create projects in the appropriate organization based on your user's tier
3. Configure resource quotas per project as needed
4. Monitor usage and billing through the API

For detailed API integration instructions, see the [Neon for platforms documentation](https://neon.com/docs/guides/platform-integration-intro).

---

# Source: https://neon.com/llms/introduction-architecture-overview.txt

# Neon architecture

> The document outlines Neon's architecture, detailing its cloud-native, multi-tenant design that separates storage and compute to enhance scalability and efficiency for PostgreSQL databases.

## Source

- [Neon architecture HTML](https://neon.com/docs/introduction/architecture-overview): The original HTML version of this documentation

Neon architecture is based on the separation of compute and storage and is orchestrated by the Neon Control Plane, which manages cloud resources across both storage and compute.

A Neon compute runs Postgres, and storage is a multi-tenant key-value store for Postgres pages that is custom-built for the cloud.

Neon storage consists of three main components: Safekeepers, Pageservers, and cloud object storage.

Safekeepers are responsible for durability of recent updates. Postgres streams [Write-Ahead Log (WAL)](https://neon.com/docs/reference/glossary#wal) to the Safekeepers, and the Safekeepers store the WAL durably until it has been processed by the Pageservers and uploaded to a cloud object store.

Pageservers are responsible for serving read requests. To do that, Pageservers process the incoming WAL stream into a custom storage format that makes all [page](https://neon.com/docs/reference/glossary#page) versions easily accessible. Pageservers also upload data to cloud object storage, and download the data on demand.

Safekeepers can be thought of as an ultra-reliable write buffer that holds the latest data until it is processed and uploaded to cloud storage. Safekeepers implement the Paxos protocol for reliability. Pageservers also function as a read cache for cloud storage, providing fast random access to data pages.

## Durability

Durability is at the core of Neon's architecture. As described earlier, incoming WAL data is initially stored across multiple availability zones in a Paxos cluster before being uploaded to a cloud object store, such as [Amazon S3](https://aws.amazon.com/s3/) (99.999999999% durability), both in raw WAL and materialized form. Additional copies are maintained across Pageservers to enhance the read performance of frequently accessed data. Consequently, there are always multiple copies of your data in Neon, ensuring durability.

## Archive storage

Archive storage in Neon, which enables [branch archiving](https://neon.com/docs/guides/branch-archiving) on the Free plan, optimizes storage resources by offloading data that's not being used.
As described above, Neon's architecture includes Safekeepers, Pageservers, and cloud object storage. In this setup, the Pageservers are responsible for processing and uploading data to cloud object storage as soon as it's written. When a branch is archived, it does not involve moving data; instead, the branch's data is simply evicted from the Pageserver, freeing up Pageserver storage. This approach ensures that while archived data is readily available on demand in cost-efficient object storage, it's no longer taking up space in the more performant storage used by Neon's Pageservers. --- # Source: https://neon.com/llms/introduction-autoscaling-architecture.txt # Autoscaling architecture > The "Autoscaling Architecture" document outlines Neon's autoscaling system, detailing how it dynamically adjusts resources to handle varying workloads efficiently within the Neon database environment. ## Source - [Autoscaling architecture HTML](https://neon.com/docs/introduction/autoscaling-architecture): The original HTML version of this documentation What you will learn: - How Neon's autoscaling architecture is structured - The role of key components like the autoscaler-agent and Kubernetes scheduler Related topics: - [Introduction to autoscaling](https://neon.com/docs/introduction/autoscaling) - [Enabling autoscaling](https://neon.com/docs/guides/autoscaling-guide) - [How the algorithm works](https://neon.com/docs/guides/autoscaling-algorithm) A Neon project can have one or more computes, each representing an individual Postgres instance. Storage is decoupled from these computes, meaning that the Postgres servers executing queries are physically separate from the data storage location. This separation offers numerous advantages, including enablement of Neon's autoscaling feature. Looking more closely, you can see that each Postgres instance operates within its own virtual machine inside a [Kubernetes cluster](https://neon.com/docs/reference/glossary#kubernetes-cluster), with multiple VMs hosted on each node of the cluster. Autoscaling is implemented by allocating and deallocating [vCPU](https://neon.com/docs/reference/glossary#vcpu) and [RAM](https://neon.com/docs/reference/glossary#ram) to each VM. ## The autoscaler-agent Each [Kubernetes node](https://neon.com/docs/reference/glossary#kubernetes-node) hosts a single instance of the [autoscaler-agent](https://neon.com/docs/reference/glossary#autoscaler-agent), which serves as the control mechanism for Neon's autoscaling system. The agent collects metrics from the VMs on its node, makes scaling decisions, and performs the necessary checks and requests to implement those decisions. ## The Kubernetes scheduler A Neon-modified [Kubernetes scheduler](https://neon.com/docs/reference/glossary#kubernetes-scheduler) coordinates with the autoscaler-agent and is the single source of truth for resource allocation. The autoscaler-agent obtains approval for all upscaling from the scheduler. The scheduler maintains a global view of all resource usage changes and approves requests for additional resources from the autoscaler-agent or standard scheduling. In this way, the scheduler assumes responsibility for preventing overcommitting of memory resources. In the rare event that a node exhausts its resources, new pods are not scheduled on the node, and the autoscaler-agent is denied permission to allocate more resources. ## NeonVM Kubernetes does not natively support the creation or management of VMs. 
To address this, Neon uses a tool called [NeonVM](https://neon.com/docs/reference/glossary#neonvm). This tool is a custom resource definition and controller for VMs, handling tasks such as adding or removing CPUs and memory. Internally, NeonVM utilizes [QEMU](https://neon.com/docs/reference/glossary#qemu) and [KVM](https://neon.com/docs/reference/glossary#kvm) (where available) to achieve near-native performance. When an autoscaler-agent needs to modify a VM's resource allocation, it simply updates the corresponding NeonVM object in Kubernetes, and the VM controller then manages the rest of the process.

## Live migration

In cases where a Kubernetes node becomes saturated, NeonVM manages the process of [live migrating](https://neon.com/docs/reference/glossary#live-migration) a VM, transferring the VM from one machine to another with minimal interruptions (typically around 100ms). Live migration transmits the internal state of the original VM to a new one while the former continues to operate, swiftly transitioning to the new VM after most of the data is copied. From within the VM, the only indication that a migration occurred might be a temporary performance reduction. Importantly, the VM retains its IP address, ensuring that connections are preserved and queries remain uninterrupted.

The live migration process allows for the proactive reduction of node load by migrating VMs away before reaching capacity. Although it is still possible for the node to fill up in the interim, Neon's separation of storage and compute means that VMs typically use minimal disk space, resulting in fast migrations.

## Memory scaling

Postgres memory consumption can escalate rapidly in specific scenarios. Fortunately, Neon's autoscaling system is able to detect memory usage increases without constantly requesting metrics from the VM. This is accomplished by running Postgres within a [cgroup](https://neon.com/docs/reference/glossary#cgroups), which provides notifications when memory usage crosses a specified threshold. Using cgroups in this way requires running our [vm-monitor](https://neon.com/docs/reference/glossary#vm-monitor) in the VM alongside Postgres to request more resources from the autoscaler-agent when Postgres consumes too much memory. The vm-monitor also verifies that downscaling requests from an autoscaler-agent will leave sufficient memory leftover.

## Local File Cache

To expedite queries, the autoscaling system incorporates a Postgres extension that places a cache in front of the storage layer. Many queries benefit from this additional memory, particularly those requiring multiple database scans (such as creating an index). The [Local File Cache (LFC)](https://neon.com/docs/reference/glossary#local-file-cache) capitalizes on the additional memory allocated to the VM by dedicating a portion of it to the cache. The cache is backed by disk and kept at a size intended to fit in the kernel page cache. Due to the storage model, writebacks are not required, resulting in near-instant evictions. The vm-monitor adjusts the LFC size when scaling occurs through the autoscaler-agent, ensuring seamless operation.

## Autoscaling source code

To further explore Neon's autoscaling implementation, visit Neon's [autoscaling](https://github.com/neondatabase/autoscaling) GitHub repository. While not primarily designed for external use, Neon welcomes exploration and contributions.
--- # Source: https://neon.com/llms/introduction-autoscaling.txt # Autoscaling > The document explains Neon's autoscaling feature, detailing how it automatically adjusts compute resources based on workload demands to optimize performance and resource utilization. ## Source - [Autoscaling HTML](https://neon.com/docs/introduction/autoscaling): The original HTML version of this documentation Neon's _Autoscaling_ feature dynamically adjusts the amount of compute resources allocated to a Neon compute in response to the current load, eliminating the need for manual intervention or restarts. The following visualization shows how Neon's autoscaling works throughout a typical day. The compute resources scale up or down based on demand, ensuring that your database has the necessary compute resources when it needs them, while conserving resources during off-peak times. To dive deeper into how Neon's autoscaling algorithm operates, visit [Understanding Neon's autoscaling algorithm](https://neon.com/docs/guides/autoscaling-algorithm). ## Autoscaling benefits Neon's Autoscaling feature offers the following benefits: - **On-demand scaling:** Autoscaling helps with workloads that experience variations over time, such as applications with time-based changes in demand or occasional spikes. - **Cost-effectiveness**: Autoscaling optimizes resource utilization, ensuring that you only use required resources, rather than over-provisioning to handle peak loads. - **Resource and cost control**: Autoscaling operates within a user-defined range, ensuring that your compute resources and associated costs do not scale indefinitely. - **No manual intervention or restarts**: After you enable autoscaling and set scaling limits, no manual intervention or restarts are required, allowing you to focus on your applications. ## Configuring autoscaling You can enable autoscaling for any compute instance, whether it's a primary compute or a read replica. Simply open the **Edit compute** drawer ([learn how](https://neon.com/docs/guides/autoscaling-guide)) for your compute and set the autoscaling range. This range defines the minimum and maximum compute sizes within which your compute will automatically scale. For example, you might set the minimum to 2 vCPUs with 8 GB of RAM and the maximum to 8 vCPUs with 32 GB of RAM. Your compute resources will dynamically adjust within these limits, never dropping below the minimum or exceeding the maximum, regardless of demand. We recommend regularly [monitoring](https://neon.com/docs/introduction/monitoring-page) your usage from the **Monitoring Dashboard** to determine if adjustments to this range are needed. For full details about enabling and configuring autoscaling, see [Enabling autoscaling](https://neon.com/docs/guides/autoscaling-guide). --- # Source: https://neon.com/llms/introduction-billing-aws-marketplace.txt # AWS Marketplace > The document outlines the process for Neon users to subscribe to and manage billing through the AWS Marketplace, detailing steps for account linking, subscription management, and billing integration specific to Neon's services. ## Source - [AWS Marketplace HTML](https://neon.com/docs/introduction/billing-aws-marketplace): The original HTML version of this documentation Neon supports billing through the **AWS Marketplace** for [Private Offers](https://aws.amazon.com/marketplace/partners/private-offers/) only, which are typically reserved for custom plans. 
If you are interested in exploring a custom plan with Neon, please reach out to our [Sales](https://neon.com/contact-sales) team. Neon [self-service pricing plans](https://neon.com/pricing) are currently not purchasable through the AWS Marketplace. You can only purchase these plans through Neon. If you have any questions about billing in general or require assistance, please reach out to [Neon Support](https://console.neon.tech/app/projects?modal=support). --- # Source: https://neon.com/llms/introduction-billing-azure-marketplace.txt # Azure Marketplace > The document outlines how Neon users can manage billing and subscriptions through the Azure Marketplace, detailing the steps for purchasing, configuring, and managing Neon services within the Azure platform. ## Source - [Azure Marketplace HTML](https://neon.com/docs/introduction/billing-azure-marketplace): The original HTML version of this documentation **Important** deprecated: The Neon Azure Native Integration is deprecated and reaches end of life on **January 31, 2026**. After this date, Azure-managed organizations will no longer be available. [Transfer your projects to a Neon-managed organization](https://neon.com/docs/import/migrate-from-azure-native) to continue using Neon. What you will learn: - About Neon pricing plans and billing on Azure - About Microsoft Azure Consumption Commitment (MACC) support - How to change your plan Related resources: - [Neon on Azure](https://neon.com/docs/manage/azure) - [Deploying Neon on Azure](https://neon.com/docs/azure/azure-deploy) To get started, see [Deploying Neon on Azure](https://neon.com/docs/azure/azure-deploy). ## Neon pricing plans and overages Neon pricing plans include allowances for compute, storage, and projects. For details on each plan's allowances, see [Neon plans](https://neon.com/docs/introduction/legacy-plans). If you exceed these allowances on a paid plan, overage charges will apply to your monthly bill. You can track your usage on the **Billing** page in the Neon Console. For guidance, see [Monitoring Billing](https://neon.com/docs/introduction/monitor-usage). **Note**: Currently, only Neon [legacy plans](https://neon.com/docs/introduction/legacy-plans) are supported on Azure. Neon's latest [pricing plans](https://neon.com/docs/introduction/plans) will be introduced on Azure at a later date. ## Enterprise plan support on Azure Neon's **Enterprise Plan** is designed for large teams with unique requirements that aren't covered by Neon's self-serve plans. For details, see the [Enterprise Plan](https://neon.com/docs/introduction/plans#enterprise). To explore this option, contact our [Sales](https://neon.com/contact-sales) team to discuss a custom private offer available through the Azure Marketplace. ## Microsoft Azure Consumption Commitment (MACC) As an Azure Benefit Eligible partner on Azure Marketplace, Neon Postgres purchases made through the Azure Marketplace contribute towards your Microsoft Azure Consumption Commitment (MACC). This means that any spending on Neon Postgres through Azure Marketplace will help fulfill your organization's committed Azure spend. ### How it works - When you purchase Neon Postgres via Azure Marketplace, the cost is billed through your Microsoft Azure subscription. - These charges are eligible to count toward your MACC, helping you maximize your existing commitment to Azure. - There are no additional steps required—your eligible Neon Postgres spend is automatically applied to your MACC. 
For more details on how MACC applies to marketplace purchases, see [Microsoft's documentation on MACC](https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit) ## Changing your pricing plan Changing the Neon pricing plan for an Azure subscription involves the following steps: 1. Navigate to the [Azure portal](https://portal.azure.com/) and sign in. 2. Locate your Neon Serverless Postgres resource by searching for it at the top of the page or locating it under **Resources** or **Navigate** > **All resources**. 3. Select your Neon resource to open the **Overview** page. 4. Select the **Change Plan** tab. This will open the **Change Plan** drawer where you can select from available Neon plans. A description of what's included in each plan is provided in the **Description** column in the drawer, but for more information about Neon plans, please visit our [Pricing](https://neon.com/pricing) page. 5. Click **Change Plan** to complete the plan change. ## Stop billing on Azure To stop billing for Neon on Azure, you can remove your Neon resource. For instructions, see [Deleting a Neon resource in Azure](https://neon.com/docs/azure/azure-manage#deleting-a-neon-resource-in-azure). ## Questions? If you have questions or need further guidance regarding billing through Azure Marketplace, please [reach out to us](https://neon.com/contact-sales). --- # Source: https://neon.com/llms/introduction-branch-restore.txt # Instant restore > The "Instant Restore" documentation explains how Neon users can quickly restore a database branch to a previous state, facilitating efficient data recovery and management within the Neon platform. ## Source - [Instant restore HTML](https://neon.com/docs/introduction/branch-restore): The original HTML version of this documentation What You'll Learn: - Restore data to any point in time - Querying historical data Related docs: - [Configure restore window](https://neon.com/docs/manage/projects#configure-your-restore-window) With Neon's instant restore capability, also known as point-in-time restore or PITR, you can easily restore a branch to an earlier state in its own or another branch's history. You can use Time Travel Assist to connect to a specific point in your restore window, where you can run read-only queries to pinpoint the exact moment you need to restore to. You can also use Schema Diff to get a side-by-side, GitHub-style visual comparison of your selected branches before restoring. ## How instant restore works ### Restore from history The restore operation lets you revert the state of a selected branch to an earlier point in time in its own or another branch's history, using time and date or Log Sequence Number (LSN). For example, you can revert to a state just before a data loss occurred. The default restore window for a Neon project differs by plan. You can revert a branch to any time within your configured [restore window](https://neon.com/docs/manage/projects#configure-your-restore-window), down to the millisecond. 
A few key points to keep in mind about the restore operation:

- [Restore backups are created automatically in case you make a mistake](https://neon.com/docs/introduction/branch-restore#automatic-backups)
- [Current data is overwritten](https://neon.com/docs/introduction/branch-restore#overwrite-not-a-merge)
- [All databases on a branch are restored](https://neon.com/docs/introduction/branch-restore#changes-apply-to-all-databases)
- [Connections to the selected branch are temporarily interrupted](https://neon.com/docs/introduction/branch-restore#connections-temporarily-interrupted)

#### Automatic backups

In case you need to roll back a restore, Neon preserves the branch's final state before the restore operation in an automatically created backup branch, which takes the following format:

```
{branch_name}_old_{head_timestamp}
```

You can use this backup to roll back the restore operation if necessary. The backup branches are listed on the **Branches** page in the Neon Console among your other branches.

When restoring a root branch (like `production`), both the restored branch and the backup branch become separate root branches with no parent-child relationship. When restoring a non-root branch, the backup becomes the parent of the restored branch.

#### Overwrite, not a merge

It is important to understand that whenever you restore a branch, you are performing a _complete_ overwrite, not a merge or refresh. Everything on your current branch, data and schema, is replaced with the contents from the historical source. All data changes from the selected restore point onwards are excluded from the branch.

#### Changes apply to all databases

A reminder that in Neon's [object hierarchy](https://neon.com/docs/manage/overview), a branch can include any number of databases. Keep this in mind when restoring branches. For example, let's say you want to restore lost data in a given database. If you restore your branch to a point in time before the data loss occurred, the operation applies to _all_ databases on the branch, not just the one you are troubleshooting. You can expect the restore operation to last a few seconds.

In general, Neon recommends that you avoid creating too many databases in a single Neon project. If you have multiple, distinct applications, each one deserves its own Neon project. A good rule of thumb: use one Neon project per source code repository.

#### Connections temporarily interrupted

Existing connections to the selected branch are temporarily interrupted during the restore operation. However, your connection details do not change. Applications can automatically re-establish their database connections as soon as the restore operation is finished.

#### Technical details

Neon is open source and built in public, so if you are interested in understanding the technical implementation behind instant restore, see the details below.

Details: View technical details

The restore operation performs a set of actions similar to the manual restore procedure using the Neon Console and API described [here](https://neon.com/docs/guides/branching-pitr), but automatically:

1. On initiating a restore action, Neon builds a new point-in-time branch by matching your selected timestamp to the corresponding LSN of the relevant entries in the shared WAL record.
1. The compute for your initial branch is moved to this new branch so that your connection string remains stable.
1. We rename your new branch to the exact same name as your initial branch, so the effect is seamless; it looks and acts like the same branch.
1. Your initial branch, which now has no compute attached to it, is renamed to _branch_name_old_head_timestamp_ to keep the pre-restore branch available should you need to roll back. Note that the initial branch was the parent of your new branch, and this is reflected when you look at your branch details.

### Time Travel Assist

Use Time Travel Assist to make sure you've targeted the correct restore point before you restore your branch. See [Time Travel Assist](https://neon.com/docs/guides/time-travel-assist) to learn more.

## How to use instant restore

You can use the Neon Console, CLI, or API to restore branches.

Tab: Console

### Restoring from history

Use the **Restore** page to restore a branch to an earlier timestamp in its history. First, select the **Branch to restore**. This is the target branch for the restore operation.

#### To restore a branch from its own history:

1. Make sure the **From history** tab is selected.
1. Choose your timestamp or switch to LSN.
1. Click **Next**. A confirmation window opens giving you details about the pending restore operation. Review these details to make sure you've made the correct selections.
1. Click **Restore** to complete the operation.

#### To restore from another branch:

1. Switch to the **From another branch** tab.
1. Select the source branch that you want to restore data from.
1. By default, the operation pulls the latest data from the source branch. If you want to pull from an earlier point in time, disable **Restore from latest data (head)**. The timestamp selector will appear.
1. Choose your timestamp or switch to the LSN input.
1. Click **Next**, confirm the details of the operation, then click **Restore** to complete.

All databases on the selected branch are instantly updated with the data and schema from the chosen point in time. From the **Branches** page, you can now see a backup branch was created with the state of the branch at the restore point in time.

Tab: CLI

Using the CLI, you can restore a branch to an earlier point in its history or another branch's history using the following command:

```bash
neon branches restore <target id|name> <source id|name>[@<timestamp|lsn>]
```

In the `<target id|name>` field, specify the ID or name of the branch you want to restore. In the `<source id|name>` field, specify the source branch you want to restore from (mandatory), along with the point-in-time identifier (optional), which can be either an RFC 3339-formatted timestamp or an LSN. If you omit the point-in-time identifier, the operation defaults to the latest data (HEAD) for the source branch. Concatenate the source identifier and time identifier with `@`: for example, `development@2023-12-12T12:00:00Z`.

#### Restore a branch to its own history

If you want to restore a branch to an earlier point in time, use the syntax `^self` in the `<source id|name>` field. For example:

```bash
neon branches restore development ^self@2024-01-01T00:00:00Z --preserve-under-name development_old
```

This command resets the target branch `development` to its state at the start of 2024. The command also preserves the original state of the branch in a backup branch called `development_old` using the `preserve-under-name` parameter (mandatory when resetting to self).

#### Restore from parent

If you want to restore a target branch from its parent, you can use the special syntax `^parent` in the `<source id|name>` field.
For example: ```bash neon branches restore development ^parent ``` This command will restore the target branch `development` to the latest data (HEAD) of its parent branch. #### Restore to another branch's history Here is an example of a command that restores a target branch to an earlier point in time of another branch's history: ```bash neon branches restore development production@0/12345 ``` This command will restore the target branch `development` to an earlier point in time from the source branch `production`, using the LSN `0/12345` to specify the point in time. If you left out the point-in-time identifier, the command would default to the latest data (HEAD) for the source branch `production`. For full CLI documentation for `branches restore`, see [branches restore](https://neon.com/docs/reference/cli-branches#restore). Tab: API To restore a branch using the API, use the endpoint: ```bash POST /projects/{project_id}/branches/{branch_id_to_restore}/restore ``` This endpoint lets you restore a branch using the following request parameters: | Parameter | Type | Required | Description | | ----------------------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **source_branch_id** | `string` | Yes | The ID of the branch you want to restore from. To restore to the latest data (head), omit `source_timestamp` and `source_lsn`. To restore a branch to its own history (`source_branch_id` equals branch's own Id), you must include: - A time period: `source_timestamp` or `source_lsn` - A backup branch: `preserve_under_name` | | **source_lsn** | `string` | No | A Log Sequence Number (LSN) on the source branch. The branch will be restored with data up to this LSN. | | **source_timestamp** | `string` | No | A timestamp indicating the point in time on the source branch to restore from. Use RFC 3339 format for the date-time string. | | **preserve_under_name** | `string` | No | If specified, a backup is created: the latest version of the branch's state is preserved under a new branch using the specified name. **Note:** This field is required if: - The branch has children. All child branches will be moved to the newly created branch. - You are restoring a branch to its own history (`source_branch_id` equals the branch's own ID). | #### Restoring a branch to its own history In the following example, we are restoring branch `br-twilight-river-31791249` to an earlier point in time, `2024-02-27T00:00:00Z`, with a new backup branch named `backup-before-restore`. Note that the branch id in the `url` matches the value for `source_branch_id`. 
```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/floral-disk-86322740/branches/br-twilight-river-31791249/restore \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "source_branch_id": "br-twilight-river-31791249",
  "source_timestamp": "2024-02-27T00:00:00Z",
  "preserve_under_name": "backup-before-restore"
}
' | jq
```

### Restoring to the latest data from another branch

In this example, we are restoring a development branch `dev/alex` (branch ID `br-twilight-river-31791249`) to the latest data (head) of its parent branch `br-jolly-star-07007859`. Note that we don't include any time identifier or backup branch name; this is a straight reset of the branch to the head of its parent.

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/floral-disk-86322740/branches/br-twilight-river-31791249/restore \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "source_branch_id": "br-jolly-star-07007859"
}
' | jq
```

### Restoring to the earlier state of another branch

In this example, we are restoring branch `dev/jordan` (branch ID `br-damp-smoke-91135977`) to branch `dev/alex` (branch ID `br-twilight-river-31791249`) at the point in time of `2024-02-26T12:00:00Z`.

```bash
curl --request POST \
  --url https://console.neon.tech/api/v2/projects/floral-disk-86322740/branches/br-damp-smoke-91135977/restore \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '
{
  "source_branch_id": "br-twilight-river-31791249",
  "source_timestamp": "2024-02-26T12:00:00Z"
}
' | jq
```

To make sure you choose the right restore point, we encourage you to use [Time Travel Assist](https://neon.com/docs/guides/time-travel-assist) before running a restore job, but the backup branch is there if you need it. If you do need to revert your changes, you can [Reset from parent](https://neon.com/docs/manage/branches#reset-a-branch-from-parent) since that is your branch's relationship to the restore point backup.

## Deleting backup branches

You can delete a backup branch created by a restore operation on your project's root branch. Your project's root branch is typically named `production` unless you've renamed it. However, removing a backup branch created by a restore operation on a non-root branch (a child branch of `production`) is not yet supported.

To delete a backup branch:

1. Navigate to the **Branches** page.
2. Find the backup branch you want to delete. It will have a name with the following format, where `branch_name` is typically `production`.

   ```
   {branch_name}_old_{head_timestamp}
   ```

3. Select **Delete** from the menu.

If you cannot delete a backup branch because it was created by a restore operation on a non-root branch, you can still free up its storage space. If you're certain you no longer need the data in a backup branch, connect to the branch and drop its databases or tables. **Be sure to connect to the correct branch when doing this**. You can connect to a backup branch just like any other branch via the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or an SQL client like [psql](https://neon.com/docs/connect/query-with-psql-editor).

To keep your **Branches** page organized, consider renaming backup branches that you plan to keep.
For example, you can prefix their names with a `z` to move them to the bottom of the list. See [Rename a branch](https://neon.com/docs/manage/branches#rename-a-branch) for details. ## Billing considerations There are minimal impacts to billing from the instant restore and Time Travel Assist features: - **Instant restore** — The backups created when you restore a branch do add to your total number of branches, but since they do not have a compute attached they do not add to consumption costs. - **Time Travel Assist** — Costs related to Time Travel queries are minimal. See [Billing considerations](https://neon.com/docs/guides/time-travel-assist#billing-considerations). ## Limitations - Deleting backup branches is only supported for backups created by restore operations on root branches. See [Deleting backup branches](https://neon.com/docs/introduction/branch-restore#deleting-backup-branches) for details. - [Reset from parent](https://neon.com/docs/manage/branches#reset-a-branch-from-parent) restores from the parent branch, which may be a backup branch if you performed a restore operation on the parent branch. For example, let's say you have a `production` branch with a child development branch `development`. You are working on `development` and decide to restore to an earlier point in time to fix something during development. At this point, `development`'s parent switches from `production` to the backup `development_old_timestamp`. A day later, you want to refresh `development` with the latest data from `production`. You can't use **Reset from parent**, since the backup is now the parent. Instead, use **Instant restore** and select the original parent `production` as the source. --- # Source: https://neon.com/llms/introduction-branching.txt # Branching > The "Branching" documentation for Neon explains how to create and manage database branches, enabling users to experiment with data without affecting the main production environment. ## Source - [Branching HTML](https://neon.com/docs/introduction/branching): The original HTML version of this documentation With Neon, you can quickly and cost-effectively branch your data for development, testing, and various other purposes, enabling you to improve developer productivity and optimize continuous integration and delivery (CI/CD) pipelines. You can also rewind your data or create branches from the past to recover from mistakes or analyze historical states. ## What is a branch? A branch is a copy-on-write clone of your data. You can create a branch from a current or past state. For example, you can create a branch that includes all data up to the current time or an earlier time. **Tip** working with sensitive data?: Neon also supports schema-only branching. [Learn more](https://neon.com/docs/guides/branching-schema-only). A branch is isolated from its originating data, so you are free to play around with it, modify it, or delete it when it's no longer needed. Changes to a branch are independent. A branch and its parent can share the same data but diverge at the point of branch creation. Writes to a branch are saved as a delta. Creating a branch does not increase load on the parent branch or affect it in any way, which means you can create a branch without impacting the performance of your production database. Each Neon project is created with a [root branch](https://neon.com/docs/reference/glossary#root-branch) called `main`. The first branch that you create is branched from the project's root branch. 
Subsequent branches can be branched from the root branch or from a previously created branch.

## Branching workflows

You can use Neon's branching feature in a variety of workflows.

### Development

You can create a branch of your production database that developers are free to play with and modify. By default, branches are created with all of the data that existed in the parent branch, eliminating the setup time required to deploy and maintain a development database. For step-by-step instructions, see [Create a branch](https://neon.com/docs/manage/branches#create-a-branch).

You can integrate branching into your development workflows and toolchains using the Neon CLI, API, or GitHub Actions. If you use Vercel, you can use the [Neon-managed Vercel integration](https://neon.com/docs/guides/neon-managed-vercel-integration) to create a branch for each preview deployment. Refer to the following guides for instructions:

- [Branching with the Neon API](https://neon.com/docs/guides/branching-neon-api): Learn how to instantly create and manage branches with the Neon API
- [Branching with the Neon CLI](https://neon.com/docs/guides/branching-neon-cli): Learn how to instantly create and manage branches with the Neon CLI
- [Branching with GitHub Actions](https://neon.com/docs/guides/branching-github-actions): Automate branching with Neon's GitHub Actions
- [The Neon-Managed Vercel Integration](https://neon.com/docs/guides/neon-managed-vercel-integration): Connect your Vercel project and create a branch for each preview deployment

### Testing

Testers can create branches for testing schema changes, validating new queries, or testing potentially destructive queries before deploying them to production. A branch is isolated from its parent branch but has all of the parent branch's data up to the point of branch creation, which eliminates the effort involved in hydrating a database. Tests can also run on separate branches in parallel, with each branch having dedicated compute resources. Refer to the following guide for instructions:

- [Branching — Testing queries](https://neon.com/docs/guides/branching-test-queries): Instantly create a branch to test queries before running them in production

### Temporary environments

Create branches with TTL by [setting an expiration date](https://neon.com/docs/guides/branch-expiration). Perfect for temporary development and testing environments that need automatic deletion. Branches with expiration are particularly useful for:

- CI/CD pipeline testing environments
- Feature development with known lifespans
- Automated testing scenarios
- AI-driven development workflows

## Restore and recover data

If you lose data due to an unintended deletion or some other event, you can restore a branch to any point in its restore window to recover lost data. You can also create a new restore branch for historical analysis or any other reason.

### Restore window

Your **restore window** determines how far back Neon maintains a history of changes for each branch. By default, this is set to **1 day** to help you avoid unexpected storage costs. You can increase it up to the maximum for your plan:

- 6 hours (or 1 GB) on the Free plan
- 7 days on Launch
- 30 days on Enterprise

You can configure your restore window in the Neon Console under **Settings** > **Storage** > **Instant restore**. See [Configure restore window](https://neon.com/docs/manage/projects#configure-your-restore-window).
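If you manage projects programmatically, the restore window can also be scripted. A minimal sketch via the Neon API's project update endpoint and its `history_retention_seconds` setting, assuming `your-project-id` is a placeholder and `$NEON_API_KEY` holds your API key:

```bash
# Set the restore window (history retention) to 1 day for a project.
# "your-project-id" is a placeholder; $NEON_API_KEY is your Neon API key.
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/your-project-id \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{ "project": { "history_retention_seconds": 86400 } }'
```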
**Note**: Increasing your restore window affects **all branches** in your project and increases project storage. You can reduce it to zero to minimize cost. History is retained in the form of Write-Ahead-Log (WAL) records. As WAL records age out of the retention period, they are evicted from storage and no longer count toward project storage. Learn how to use these data recovery features: - [Instant restore](https://neon.com/docs/guides/branch-restore): Restore a branch to an earlier point in its history - [Reset from parent](https://neon.com/docs/guides/reset-from-parent): Reset a branch to match its parent - [Time Travel queries](https://neon.com/docs/guides/time-travel-assist): Run SQL queries against your database's past state --- # Source: https://neon.com/llms/introduction-compute-lifecycle.txt # Compute lifecycle > The "Compute Lifecycle" document outlines the stages of compute resources in Neon, detailing how they are created, managed, and terminated within the platform's infrastructure. ## Source - [Compute lifecycle HTML](https://neon.com/docs/introduction/compute-lifecycle): The original HTML version of this documentation A compute in Neon is a stateless Postgres process due to the separation of storage and compute. It has two main states: `Idle` and `Active`. Generally, an idle compute has been suspended by Neon's scale to zero feature due to inactivity, while an `Active` compute has been activated by a connection or operation, indicating that Postgres is currently running. ## Scale to zero If there are no active queries for 5 minutes, which is the scale to zero setting in Neon, your compute is automatically placed into an idle state. If you are on a paid plan, you can disable the scale to zero behavior so that a compute always remains active. This behavior is controlled by your compute's **Scale to zero** setting. For information about configuring this setting, see [Edit a compute](https://neon.com/docs/manage/computes#edit-a-compute). **Note**: Neon's _Scale to Zero_ feature is conservative. It treats an "idle-in-transaction" connection as active to avoid breaking application logic that involves long-running transactions. Only the truly inactive connections are closed after the defined period of inactivity. ## Compute activation When you connect to an idle compute, Neon automatically activates it. Activation generally takes a few hundred milliseconds. However, if your Neon project has been idle for more than 7 days, you may experience a slightly longer activation time. Considering this activation time, your first connection may have a slightly higher latency than subsequent connections to an already-active compute. Also, Postgres memory buffers are cold after a compute wakes up from the idle state, which means that initial queries may take longer until the memory buffers are warmed. After a period of time in the idle state, Neon occasionally activates your compute to check for data availability. The time between checks gradually increases if the compute does not receive any client connections over an extended period. In the **Branches** widget on your **Project Dashboard**, you can check if a compute is active or idle and watch as it transitions from one state to another. ## Session context considerations When connections are closed due to a compute being suspended, anything that exists within a session context is forgotten and must be recreated before being used again. 
For example, Postgres parameters set for a specific session, in-memory statistics, temporary tables, prepared statements, advisory locks, and notifications and listeners defined using `NOTIFY/LISTEN` commands only exist for the duration of the current session and are lost when the session ends. For more, see [Session context](https://neon.com/docs/reference/compatibility#session-context). --- # Source: https://neon.com/llms/introduction-cost-optimization.txt # Cost optimization > The "Cost Optimization" document outlines strategies and configurations for Neon users to efficiently manage and reduce expenses associated with database operations. ## Source - [Cost optimization HTML](https://neon.com/docs/introduction/cost-optimization): The original HTML version of this documentation Managing your Neon costs effectively requires understanding how each billing factor works and implementing strategies to control usage. This guide provides actionable recommendations for optimizing costs across all billing metrics. ## ☑ Compute (CU-hours) Compute is typically the largest component of your Neon bill. You're charged based on compute size (in CUs) multiplied by the hours your compute is running. **Optimization strategies:** - **Right-size your compute** — Start by determining the appropriate compute size for your workload. Your compute should be large enough to cache your frequently accessed data (your working set) in memory. A compute that's too small can lead to poor query performance, while an oversized compute wastes resources. See [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute) for guidance. - **Use autoscaling effectively** — Configure [autoscaling](https://neon.com/docs/introduction/autoscaling) to dynamically adjust compute resources based on demand. Set your minimum size to handle your baseline workload and your maximum to accommodate peak traffic. You only pay for what you use. See [Enable autoscaling](https://neon.com/docs/guides/autoscaling-guide) for configuration steps. - **Enable scale to zero** — For non-production environments or databases with intermittent usage, enable [scale to zero](https://neon.com/docs/introduction/scale-to-zero) to suspend your compute after 5 minutes of inactivity. This can dramatically reduce compute costs for development, testing, and preview environments. See [Configuring scale to zero](https://neon.com/docs/guides/scale-to-zero-guide). - **Manage persistent connections and scheduled jobs** — Applications that maintain long-lived connections or scheduled jobs (like cron tasks) can prevent your compute from scaling to zero, keeping it active 24/7. If these aren't critical, consider closing idle connections or adjusting job schedules to allow scale to zero during off-peak hours. - **Be aware of logical replication impact** — If you're using [logical replication](https://neon.com/docs/guides/logical-replication-neon), note that computes with active replication subscribers will not scale to zero, resulting in 24/7 compute usage. Plan accordingly and consider whether logical replication is necessary for all environments. ## ☑ Storage (root and child branches) Storage costs are based on actual data size for root branches and the minimum of accumulated changes or logical data size for child branches, billed in GB-months. **Optimization strategies:** - **Manage child branch storage** — Child branches are billed for the minimum of accumulated data changes or your logical data size—capped at your actual data size. 
While this prevents charges from exceeding your data size, managing branches effectively still helps minimize costs: - Set a [time to live](https://neon.com/docs/guides/branch-expiration) on development and preview branches - Delete child branches when they're no longer needed - For production workloads, use a [root branch](https://neon.com/docs/manage/branches#root-branch) instead—root branches are billed on your actual data size. - **Implement branch lifecycle management** — Review your branches regularly and delete any that are no longer needed. Keeping your branch count under control reduces both storage costs and potential [extra branch charges](https://neon.com/docs/introduction/plans#extra-branches). ## ☑ Instant restore storage Instant restore storage is based on the amount of change history (WAL records) retained, not the number of restores performed. **Optimization strategies:** - **Adjust your restore window** — By default, Neon retains history for 6 hours on Free plan projects and 1 day on paid plan projects. You can increase this up to the maximum for your plan (6 hours for Free, 7 days for Launch, 30 days for Scale). If you don't need much recovery capability, you can reduce your restore window to lower costs. Find the right balance between restore capability and cost. See [Configure your restore window](https://neon.com/docs/manage/projects#configure-your-restore-window). - **Understand the trade-offs** — Reducing your restore window decreases instant restore storage costs but limits how far back you can restore data. Consider your actual recovery requirements and set the window accordingly. ## ☑ Extra branches Extra branches beyond your plan's allowance are billed at $1.50/branch-month, prorated hourly. Plans include 10 branches for Free and Launch, 25 for Scale. **Optimization strategies:** - **Use branch expiration** — Set automatic deletion timestamps on temporary branches using [branch expiration](https://neon.com/docs/guides/branch-expiration) to ensure they're cleaned up when no longer needed. - **Automate cleanup** — Consider implementing automated cleanup scripts using the [Neon API](https://neon.com/docs/manage/branches#branching-with-the-neon-api) or [Neon CLI](https://neon.com/docs/guides/branching-neon-cli) to stay within your plan's branch allowance. ## ☑ Public data transfer Public network transfer (egress) is the data sent from your databases over the public internet. Free plans include 5 GB/month, while paid plans include 100 GB/month, then $0.10/GB. **Optimization strategies:** - **Monitor your data transfer** — Be aware of how much data you're transferring out of Neon. This includes: - Data sent to client applications - [Logical replication](https://neon.com/docs/reference/glossary#logical-replication) to any destination, including other Neon databases - **Review your bill** — If you see unexpectedly high public data transfer charges, [contact support](https://neon.com/docs/introduction/support) for assistance. Neon does not currently expose detailed data transfer metrics in the Console. --- # Source: https://neon.com/llms/introduction-early-access.txt # Join the Early Access Program > The document outlines the process for joining Neon's Early Access Program, detailing the steps and requirements for users to participate and gain early access to new features and updates. 
## Source

- [Join the Early Access Program HTML](https://neon.com/docs/introduction/early-access): The original HTML version of this documentation

Sign up for the **Early Access Program** and get:

- **Exclusive early access:** Get a first look at upcoming features before they go live.
- **Private community:** Gain access to a dedicated Discord channel to connect with the Neon team and provide feedback to help shape what comes next.
- **Weekly insights:** Receive updates on Neon's latest developments and future plans.

The Early Access Program is available at two levels, which you can enable independently:

## Personal Early Access

Enable Early Access for your personal account to preview new features. Early Access features under this program level will only apply to projects under your personal account. [Sign up now](https://console.neon.tech/app/settings/early-access) to get started!

## Organization Early Access

Enable Early Access for your organization to preview new features. When an organization admin enables Early Access, everyone in your organization gets access to preview features across all projects belonging to that organization. To enable Early Access for your organization, go to your organization's **Settings** page in the Neon Console and click **Join early access**.

## Opting Out

If you need to opt out of Early Access later, [contact our Support team](https://console.neon.tech/app/projects?modal=support) so the change can be made without disruption to features you may be using.

---

# Source: https://neon.com/llms/introduction-enterprise-sales-process.txt

# Neon Enterprise Sales Process

> The Neon Enterprise Sales Process document outlines the structured approach and key steps involved in managing and executing enterprise sales for Neon, focusing on client engagement, negotiation, and closing deals.

## Source

- [Neon Enterprise Sales Process HTML](https://neon.com/docs/introduction/enterprise-sales-process): The original HTML version of this documentation

Our goal at Neon is to make the sales process as easy and efficient as possible. Below, we've outlined our typical process so you'll know what to expect when you contact us. Depending on your needs, additional discussions and consultations may be required. We'll be happy to arrange those as needed.

## Reach out to us

Start by filling out our [contact form](https://neon.com/contact-sales) to let us know how we can help. If you're looking for specific plans or pricing, sharing details about your feature requirements and workload will get the ball rolling immediately and speed up the overall process.

## Information gathering

After contacting us, we'll email you to gather more information about your requirements. We'll ask you for information about:

- Your current database environment, such as the number of databases, regions, your application stack, and integrations with other tools, platforms, and services.
- Your workload, feature, and performance requirements.
- Security and compliance requirements.
- Your desired outcome and timeline.

We may also ask you to run our [pg-prechecks script](https://github.com/neondatabase-labs/pg-prechecks) to gather details about your Postgres server and send the results back to us. These details help us understand your needs and prepare pricing and migration proposals.

**Note** about pg-prechecks: The `pg-prechecks` script provides a summary of a Postgres database server's status and configuration. It's a modified version of `pt-pg-summary`, which is part of the Percona Toolkit.
To learn more about this tool, refer to the [pg-prechecks README](https://github.com/neondatabase-labs/pg-prechecks?tab=readme-ov-file#pg-prechecks). ## Call with the Neon Solutions team If an Enterprise plan is a good fit for your use case, the Neon Solutions team will set up a call to discuss: - Configuration options and timelines - Add-ons like support packages or custom requirements - Any other questions you might have If you're interested in learning more about specific features, we can provide a demo or schedule a follow-up call. _The Neon solutions team is made up of experienced technical staff who have worked through many complex migrations with some of our largest customers._ ## Pricing and migration proposal We'll create a pricing proposal based on our discussions, often including a proof-of-concept migration plan. The proposal will be tailored according to: - Information from our initial conversation - Your environment details - Your workload, feature, and performance requirements We'll send the pricing proposal and migration plan and arrange a follow-up call to discuss them, answer any questions you may have, and set out timelines should you decide to move forward. ## Additional details For complex setups, we may request additional information, such as: - Specifics about your current environment - Usage and billing details from your current provider ## Stakeholder support At any time during the process, we'll be happy to support your security team or other stakeholders by: - Answering security-related questions - Providing documentation - Participating in security and compliance reviews as needed We've laid out our typical process above, but we're flexible and ready to adjust the process to fit your specific requirements. [Contact us](https://neon.com/contact-sales) to get started. --- # Source: https://neon.com/llms/introduction-high-availability.txt # High Availability (HA) in Neon > The "High Availability (HA) in Neon" documentation outlines the architecture and mechanisms Neon employs to ensure continuous database operation and fault tolerance, detailing its use of replication and failover strategies. ## Source - [High Availability (HA) in Neon HTML](https://neon.com/docs/introduction/high-availability): The original HTML version of this documentation At Neon, our serverless architecture takes a different approach to high availability. Instead of maintaining idle standby compute replicas, we achieve multi-AZ resilience through our separation of storage and compute. Based on this separation, we can break our approach into two main parts: - **Storage redundancy** — _Protecting both your long-term and active data_ On the storage side, all data is backed by cloud object storage for long-term safety, while Pageserver and Safekeeper services are distributed across [Availability Zones](https://en.wikipedia.org/wiki/Availability_zone) to provide redundancy for the cached data used by compute. - **Compute resiliency** — _Keeping your application running_ Our architecture scales to handle traffic spikes and restarts or reschedules compute instances when issues occur, with recovery times typically ranging from a few seconds to a few minutes. While this means your application needs to handle brief disconnections, it provides cost efficiency by eliminating the need for continuously running standby compute instances. ## Storage redundancy By distributing storage components across multiple Availability Zones (AZs), Neon ensures both data durability and continuous data access. 
### General storage architecture

Neon handles Safekeeper and Pageserver service recovery across Availability Zones as follows. In this architecture:

- **Safekeepers replicate data across AZs** Safekeepers are distributed across multiple Availability Zones (AZs) to handle **Write-Ahead Log (WAL) replication**. WAL is replicated across these multi-AZ Safekeepers, ensuring your data is safe if any particular Safekeeper fails.
- **Pageservers** Pageservers act as a disk cache, ingesting and indexing data from the WAL stored by Safekeepers and serving that data to your compute. To ensure high availability, Neon employs secondary Pageservers that maintain up-to-date copies of project data. In the event of a Pageserver failure, impacted projects are immediately reassigned to a secondary Pageserver, with minimal downtime. The system continuously monitors Pageserver health using a heartbeat mechanism to ensure timely detection and failover.
- **Object storage** Your data's primary, long-term storage is in **cloud object storage**, with **99.999999999%** durability, protecting against data loss regardless of Pageserver or Safekeeper status.

#### Recap of storage recovery times

Here's a summary of how different storage components handle and recover from failures:

| Component      | Failure impact                                 | Recovery mechanism              | Recovery time |
| -------------- | ---------------------------------------------- | ------------------------------- | ------------- |
| Safekeeper     | WAL writes continue to other Safekeepers       | Redundancy is built-in          | Immediate     |
| Pageserver     | Read requests automatically route to secondary | Automatic failover to secondary | Seconds       |
| Object storage | No impact - 99.999999999% durability           | Multi-AZ redundancy built-in    | Immediate     |

## Compute failover

Our serverless architecture manages compute failures through rapid recovery and automatic traffic redirection, without the need to maintain idle standby replicas. Because compute instances are stateless, failures don't affect your data, and your connection string remains unchanged. The system typically resolves issues within seconds to minutes, depending on the type of failure. However, your application should be configured to handle brief disconnections and reconnect automatically.

### Compute endpoints are ephemeral

Your compute endpoint exists essentially as metadata — with your connection string being the core element. This design means endpoints can be instantly reassigned to new compute resources without changing your application's configuration. When you first connect, Neon assigns your endpoint to an available VM from our ready-to-use pool, eliminating traditional provisioning delays.

### Postgres failure

Postgres runs inside the VM. If Postgres crashes, an internal Neon process detects the issue and automatically restarts Postgres. This recovery process typically completes within a few seconds.

### VM failure

In rarer cases, the VM itself may fail due to issues like a kernel panic or the host's termination. When this happens, Neon recreates the VM and reattaches your compute endpoint. This process may take a little longer than restarting Postgres, but it still typically resolves in seconds.

### Unresponsive endpoints

If a compute endpoint becomes unhealthy or unresponsive, Neon automatically detects the problem and reattaches the endpoint to a new compute after 5 minutes. Your application may experience connectivity issues until the endpoint is restored.
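Across these failure modes, the client-side guidance is the same: treat disconnections as transient and retry with a short backoff. A minimal, non-Neon-specific sketch of that pattern, assuming `DATABASE_URL` holds your connection string:

```bash
# Retry a connection check with linear backoff after a transient failure.
# DATABASE_URL is a placeholder for your Neon connection string.
for attempt in 1 2 3 4 5; do
  if psql "$DATABASE_URL" -c 'SELECT 1;' >/dev/null 2>&1; then
    echo "connected on attempt $attempt"
    break
  fi
  echo "connection failed; retrying in $((attempt * 2))s"
  sleep $((attempt * 2))
done
```

Most drivers and connection pools offer equivalent retry and reconnect settings, which are generally preferable to shell-level loops.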
### Node failures Kubernetes nodes are the underlying infrastructure hosting multiple compute instances. When a node becomes unavailable, Neon automatically reschedules compute instances to other healthy nodes, a process that typically takes 1-2 minutes. While your data remains safe during this process, compute availability will be temporarily affected until rescheduling is complete. ### Availability Zone failures Availability Zones are physically separate data centers within a cloud region. When an AZ becomes unavailable, compute instances in that AZ will be automatically rescheduled to healthy AZs. Recovery time typically takes 1-10 minutes, depending on node availability in the destination AZs. Your connection string remains stable, and new connections will be routed to the recovered instance. Multi-AZ support is available in all regions for recovery purposes. While compute instances run in a single AZ at any given time, storage components are continuously distributed across multiple AZs, and compute can be automatically rescheduled to other AZs if needed. ### Recap of failover times Here's a summary of how different types of compute failures are handled and their expected recovery times: | Failure type | Impact | Recovery mechanism | Recovery time | | ------------------------- | ---------------------------------- | --------------------------------------- | ------------- | | Postgres crash | Brief interruption | Automatic restart | Seconds | | VM failure | Brief interruption | VM recreation and endpoint reattachment | Seconds | | Unresponsive endpoint | Intermittent connectivity | Automatic recovery initiation | 5 minutes | | Node failure | Compute unavailable | Rescheduling to healthy nodes | 1-2 minutes | | Availability Zone failure | Compute unavailable in affected AZ | Rescheduling to healthy AZs | 1-10 minutes | ### Impact on session data after failover? While your application should handle reconnections automatically, session-specific data like temporary tables, prepared statements, and the Local File Cache ([LFC](https://neon.com/docs/reference/glossary#local-file-cache)), which stores frequently accessed data, will not persist across a failover. As a result, queries may initially run more slowly until the Postgres memory buffers and cache are rebuilt. For details on uptime and performance guarantees, refer to our available [SLAs](https://neon.com/docs/introduction/support#slas). ## Limitations _No cross-region replication._ Neon's HA architecture is designed to mitigate failures within a single region by replicating data across multiple AZs. However, we currently do not support real-time replication across different cloud regions. In the event of a region-wide outage, your data is not automatically replicated to another region, and availability depends on the cloud provider restoring service to the affected region. --- # Source: https://neon.com/llms/introduction-ip-allow.txt # IP Allow > The "IP Allow" documentation for Neon outlines the process for configuring IP allowlists to manage access to Neon databases, detailing steps for adding, editing, and removing IP addresses to control database security. 
## Source - [IP Allow HTML](https://neon.com/docs/introduction/ip-allow): The original HTML version of this documentation Neon's IP Allow feature, available with the Neon [Scale](https://neon.com/docs/introduction/plans) plan, ensures that only trusted IP addresses can connect to the project where your database resides, preventing unauthorized access and helping maintain overall data security. You can limit access to individual IP addresses, IP ranges, or IP addresses and ranges defined with [CIDR notation](https://neon.com/docs/reference/glossary#cidr-notation). You can configure **IP Allow** in your Neon project's settings. To get started, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). ## IP Allow together with Protected Branches You can apply IP restrictions more precisely by designating specific branches in your Neon project as protected and enabling the **Restrict IP access to protected branches only** option. This will apply your IP allowlist to protected branches only with no IP restrictions on other branches in your project. Typically, the protected branches feature is used with branches that contain production or sensitive data. For step-by-step instructions, refer to our [Protected Branches guide](https://neon.com/docs/guides/protected-branches). **Tip**: If you are an AWS user, Neon also supports a **Private Networking** feature, which enables connections to your Neon databases via AWS PrivateLink, bypassing the open internet entirely. See [Private Networking](https://neon.com/docs/guides/neon-private-networking). --- # Source: https://neon.com/llms/introduction-legacy-plans.txt # Neon legacy plans > The "Neon legacy plans" documentation outlines the features and limitations of Neon's legacy subscription plans, detailing the differences from current offerings and guiding users on transitioning to updated plans. ## Source - [Neon legacy plans HTML](https://neon.com/docs/introduction/legacy-plans): The original HTML version of this documentation This page describes Neon's **legacy plans**. These plans are no longer offered to new signups. If you're on a legacy paid plan, you can stay on it, but once you [switch to a new plan](https://neon.com/docs/introduction/manage-billing#change-your-plan), you cannot switch back. **Important**: You cannot upgrade or downgrade to a legacy plan. See our [current usage-based plans](https://neon.com/docs/introduction/plans) for options. If you signed up through **Azure Marketplace**, you remain on a [legacy plan](https://neon.com/docs/introduction/legacy-plans) — for both Free and paid. --- ## How to check if you're on a legacy plan To see if you are on a Neon legacy plan, navigate to your **Billing** page in the Neon Console and click **Change Plan**. If you're on a legacy plan, you'll see a **Legacy Plan** badge next to the name of your current plan. --- ## Free plan (Legacy) The legacy Free plan is best suited for hobby projects, prototypes, and learning Neon. Users on this plan will be automatically migrated to the new Free plan. 
### Included allowances | Usage type | Plan allowance | | :------------------------- | :----------------------------------------------------------------------- | | **Projects** | 20 projects | | **Branches** | 10 branches per project | | **Databases** | 500 per branch | | **Storage** | 0.5 GB-month (regular and archive storage combined) | | **Compute** | 191.9 compute hours/month (enough to run a primary 0.25 CU compute 24/7) | | **Data transfer (Egress)** | 5 GB per month | **Tip** What is a compute hour?: - A compute hour is one _active hour_ for a compute with 1 vCPU. - For example, a 0.25 vCPU compute uses 1 compute hour every 4 active hours. - Formula: `compute hours = compute size × active hours`. Idle (suspended) time does not count as active time. ### Features - Autoscaling up to 2 vCPU - Scale to zero - Monitoring (1-day history) - All supported regions - Project collaboration - Read replicas (up to 3 per project) - Advanced Postgres features (logical replication, connection pooling, 60+ extensions) - Neon features like branching, time travel connections, and **Instant Restore (24-hour window)** - [Community support](https://neon.com/docs/introduction/support) --- ## Launch Plan (Legacy) Ideal for early-stage projects and startups preparing for growth. ### Included allowances | Usage type | Plan allowance | | ------------------- | --------------------------------- | | **Projects** | 100 Neon projects | | **Branches** | 5000 per project | | **Databases** | 500 per branch | | **Storage** | 10 GB-month | | **Archive Storage** | 50 GB-month | | **Compute** | 300 compute hours per month total | ### Extra usage | Extra usage type | Cost | | ------------------------- | ---------------------- | | **Extra Storage** | $1.75 per GB-month | | **Extra Archive Storage** | $0.10 per GB-month | | **Extra Compute** | $0.16 per compute hour | ### Features - Autoscaling up to 4 vCPUs / 16 GB RAM - Scale to zero - Monitoring (7-day history) - Branch protection (up to 2 branches) - Same advanced Postgres and Neon features as Free - Instant Restore (up to 7 days) - [Standard support](https://neon.com/docs/introduction/support) --- ## Scale Plan (Legacy) Designed for teams scaling production workloads and needing higher resource limits. ### Included allowances | Usage type | Plan allowance | | ------------------- | --------------------------------- | | **Projects** | 1000 Neon projects | | **Branches** | 5000 per project | | **Databases** | 500 per branch | | **Storage** | 50 GB-month | | **Archive Storage** | 250 GB-month | | **Compute** | 750 compute hours per month total | ### Extra usage | Extra usage type | Cost | | ------------------------- | ---------------------- | | **Extra Storage** | $1.50 per GB-month | | **Extra Archive Storage** | $0.10 per GB-month | | **Extra Compute** | $0.16 per compute hour | | **Extra Projects** | $50 per 1000 projects | ### Features - Autoscaling up to 8 vCPUs / 32 GB RAM - Scale to zero - Monitoring (14-day history) - Branch protection (up to 5 branches) - Customer-provided custom extensions (on AWS only) - Instant Restore (up to 14 days) - [Standard support](https://neon.com/docs/introduction/support) --- ## Business Plan (Legacy) A high-capacity plan for production teams with security and compliance requirements. 
### Included allowances | Usage type | Plan allowance | | ------------------- | ---------------------------------- | | **Projects** | 5000 Neon projects | | **Branches** | 5000 per project | | **Databases** | 500 per branch | | **Storage** | 500 GB-month | | **Archive Storage** | 2500 GB-month | | **Compute** | 1000 compute hours per month total | ### Extra usage | Extra usage type | Cost | | ------------------------- | ---------------------- | | **Extra Storage** | $0.50 per GB-month | | **Extra Archive Storage** | $0.10 per GB-month | | **Extra Compute** | $0.16 per compute hour | | **Extra Projects** | $50 per 5000 projects | ### Features - Autoscaling up to 16 vCPUs / 56 GB RAM - Fixed compute sizes up to 56 vCPUs / 224 GB RAM - Scale to zero - Monitoring (14-day history) - SOC 2 compliance - HIPAA compliance (add-on) - IP allowlists and branch protection - Instant Restore (up to 30 days) - [Priority support](https://neon.com/docs/introduction/support) - [Service SLA – 99.95% uptime](https://neon.com/neon-business-sla) --- ## Enterprise Plan (Legacy) Custom-tailored for large teams, SaaS vendors, and fleet-level deployments. ### Included allowances | Usage type | Plan allowance | | ------------------- | -------------- | | **Projects** | Custom | | **Branches** | Custom | | **Databases** | Custom | | **Storage** | Custom | | **Archive Storage** | Custom | | **Compute** | Custom | ### Enterprise features - Custom pricing and resource limits - 0-downtime migrations - Scale to zero - HIPAA and SOC 2 compliance (add-ons) - Dedicated solution engineer - Custom domain proxy - Security reviews and compliance questionnaires - Invoice billing and annual commitments - [Enterprise support](https://neon.com/docs/introduction/support#enterprise-support) To explore an Enterprise plan, [contact sales](https://neon.tech/contact-sales) or [request a trial](https://neon.tech/enterprise#request-trial). ## Extra usage Neon legacy plans include monthly **allowances** for storage, compute, and projects. If you're on a paid plan and exceed those allowances, you're automatically billed for extra usage—no manual action required. The types of extra usage available vary by plan. If your usage exceeds a plan allowance and that type of extra usage is supported, it's automatically allocated and billed on your monthly invoice. #### Extra storage For example, the Launch plan includes 10 GB of storage. If you use more than that, you're charged $1.75 per additional GB-month. The same logic applies to Scale and Business, with lower rates at higher plan tiers. **Note**: In billing, "allocation" refers to a billable increase in your storage allowance—not physical provisioning of space. #### Extra projects Extra projects are only available on the Scale and Business plans: - **Scale**: Extra projects are allocated in units of **1000** at **$50 per unit** - **Business**: Extra projects are allocated in units of **5000** at **$50 per unit** If you exceed your project limit, you're billed for the next unit of extra projects, prorated from the date the extra usage began. For example, using 1001 projects on Scale results in one extra unit (1000 projects) billed at a prorated amount. **Note** How extra project charges are prorated: Cost = Units × (Unit Price ÷ Days in Month) × Days Left in Month Once a unit is allocated, you're billed for it through the end of the month. If your usage drops back below the limit, the extra charge is removed at the start of the next billing cycle. 
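To make the proration concrete, here is a hypothetical example using the formula above: crossing the 1000-project limit on the Scale plan with 15 days left in a 30-day month yields Cost = 1 × ($50 ÷ 30) × 15 = $25 for the remainder of that month.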
#### Extra compute

Extra compute usage is billed by the **compute hour** at **$0.16/hour** across all paid plans. For example, if you're on the Launch plan and use 100 compute hours beyond your 300-hour allowance, you'll be billed an additional **$16**. Since compute usage is measured hourly, **prorated billing does not apply**.

## Legacy plan metrics

This section describes [Storage](https://neon.com/docs/introduction/legacy-plans#storage), [Archive storage](https://neon.com/docs/introduction/legacy-plans#archive-storage), [Compute](https://neon.com/docs/introduction/legacy-plans#compute), [Data transfer](https://neon.com/docs/introduction/legacy-plans#data-transfer), and [Project](https://neon.com/docs/introduction/legacy-plans#projects) usage metrics for Neon's legacy plans.

### Storage

Neon's storage engine is designed to support a serverless architecture and enable features such as [instant restore](https://neon.com/docs/introduction/branch-restore), [time travel](https://neon.com/docs/guides/time-travel-assist), and [branching](https://neon.com/docs/guides/branching-intro). For this reason, storage in Neon differs somewhat from other database services.

In Neon, storage consists of your total **data size** and **history**.

- **Data size** This component of Neon storage is similar to what you might expect from most database services — it's simply the size of your data across all of your Neon projects and branches. You can think of it as a snapshot of your data.
- **History** This aspect of Neon storage is unique: "History" is a log of changes (inserts, updates, and deletes) to your data over time in the form of Write-Ahead Log (WAL) records. History enables the instant restore, time travel, and branching features mentioned above. The size of your history depends on a couple of factors:
  - **The volume of changes to your data** — the volume of inserts, updates, and deletes. For example, a write-heavy workload will generate more history than a read-heavy workload.
  - **How much history you keep** — referred to as [restore window](https://neon.com/docs/introduction/branching#restore-window), which can be an hour, a day, a week, or even a month. The restore window is configurable for each Neon project. As you might imagine, 1 day of history would generally require less storage than 30 days of history, but less history limits the features that depend on it. For example, 1 day of history means that your maximum instant restore point is only 1 day in the past.

#### How branching affects storage

If you use Neon's branching feature, you should be aware that it can also affect storage. Here are some rules of thumb when it comes to branching:

1. **Creating a branch does not add to storage immediately.** At creation time, a branch is a copy-on-write clone of its parent branch and shares its parent's data. Shared data is not counted more than once.
2. **A branch shares data with its parent if it's within the restore window.** For example, if a Neon project has a 7-day restore window, a child branch shares data with its parent branch for 7 days. However, as soon as the child branch ages out of that window, data is no longer shared — the child branch's data stands on its own.
3. **Making changes to a branch adds to storage.** Data changes on a branch are unique to that branch and counted toward storage. For example, an insert operation on the branch adds a record to the branch's history.
**Branches older than 14 days and not accessed in the past 24 hours are automatically moved to cost-efficient [Archive storage](https://neon.com/docs/introduction/legacy-plans#archive-storage)**. The **Storage** and **Archive storage** amounts you see under **Usage** on the **Billing** page in the Neon Console take all of these factors into account. **Note**: Each Neon plan comes with an allowance of **Storage** and **Archive storage** that's included in your plan's monthly fee. See [Neon plans](https://neon.com/docs/introduction/plans). To learn how extra storage is allocated and billed, see [Extra usage](https://neon.com/docs/introduction/extra-usage). #### Storage FAQs Details: **Do branches add to storage?** When branches are created, they initially do not add to storage since they share data with the parent branch. However, as soon as changes are made to a branch, new WAL records are created, adding to your history. Additionally, when a branch ages out of your project's restore window, its data is no longer shared with its parent and is counted independently, thus adding to storage. To avoid branches consuming storage unnecessarily, [reset](https://neon.com/docs/guides/reset-from-parent) branches to restart the clock or [delete](https://neon.com/docs/manage/branches) them before they age out of the restore window. Details: **Does a delete operation add to storage?** Yes. Any data-modifying operation, such as deleting a row from a table in your database, generates a WAL record, so even deletions temporarily increase your history size until those records age out of your restore window. Details: **What increases the size of history?** Any data-modifying operation increases the size of your history. As WAL records age out of your [restore window](https://neon.com/docs/introduction/branching#restore-window), they are removed, reducing your history and potentially decreasing your total storage size. Details: **What can I do to minimize my storage?** Here are some strategies to consider: - **Optimize your restore window** Your restore window setting controls how much change history your project retains. Decreasing history reduces the window available for things like instant restore or time travel. Retaining no history at all would make branches expensive, as a branch can only share data with its parent if history is retained. Your goal should be a balanced restore window configuration: one that supports the features you need but does not consume too much storage. See [Restore window](https://neon.com/docs/introduction/branching#restore-window) for how to configure your restore window. - **Use branches instead of duplicating data** Use short-lived Neon branches for things like testing, previews, and feature development instead of creating separate standalone databases. As long as your branch remains within the restore window, it shares data with its parent, making branches very storage-efficient. Added to that, branches can be created instantly, and they let you work with data that mirrors production. - **Consider the impact of deletions** It may seem counterintuitive, but deleting rows from a table temporarily increases storage because delete operations are logged as part of your change history. The records for those deletions remain part of your history until they age out of your restore window. For mass deletions, `DROP TABLE` and `TRUNCATE TABLE` operations are more storage-efficient since they log a single operation rather than a record for each deleted row. 
- **Delete or reset branches before they age out** [Delete](https://neon.com/docs/manage/branches) old branches or [reset](https://neon.com/docs/guides/reset-from-parent) them before they age out of the restore window. Deleting branches before they age out avoids potentially large increases in storage. Resetting a branch sets the clock back to zero for that branch. Details: **What happens when I reach my storage limit?** Your storage limit varies depending on your Neon plan. - **Free plan**: If you reach your storage limit on the Free plan (0.5 GB-month), any further database operations that would increase storage (inserts, updates, and deletes) will fail, and you will receive an error message. - **Launch, Scale, and Business plans**: For users on a paid plan (Launch, Scale, or Business), exceeding your storage limit will result in [extra usage](https://neon.com/docs/introduction/extra-usage). Details: **I have a small database. Why is my storage so large?** These factors could be contributing to your high storage consumption: - **Frequent data modifications:** If you are performing a lot of writes (inserts, updates, deletes), each operation generates WAL records, which are added to your history. For instance, rewriting your entire database daily can lead to a storage amount that is a multiple of your database size, depending on the number of days of history your Neon project retains. - **Restore window:** The length of your restore window plays a significant role. If you perform many data modifications daily and your restore window is set to 7 days, you will accumulate a 7-day history of those changes, which can increase your storage significantly. To mitigate this issue, consider adjusting your [restore window](https://neon.com/docs/introduction/branching#restore-window) setting. Perhaps you can make do with a shorter window for instant restore, for example. Retaining less history should reduce your future storage consumption. Also, make sure you don't have old branches lying around. If you created a bunch of branches and let those age out of your restore window, that could also explain why your storage is so large. Details: **How does running `VACUUM` or `VACUUM FULL` affect my storage costs?** If you're looking to control your storage costs, you might start by deleting old data from your tables, which reduces the data size you're billed for going forward. Since, in typical Postgres operations, deleted tuples are not physically removed until a vacuum is performed, you might then run `VACUUM`, expecting to see a further reduction in the `Data size` reported in the Console — but you don't see the expected decrease. **Why no reduction?** In Postgres, [VACUUM](https://www.postgresql.org/docs/current/sql-vacuum.html) doesn't reduce your storage size. Instead, it marks the deleted space in the table for reuse, meaning future data can fill that space without increasing data size. While `VACUUM` by itself won't make the data size smaller, it is good practice to run it periodically, and it does not impact the availability of your data.

```sql
VACUUM your_table_name;
```

**Use VACUUM FULL to reclaim space** Running `VACUUM FULL` _does_ reclaim physical storage space by rewriting the table, removing empty spaces, and shrinking the table size. This can help lower the **Data size** part of your storage costs. It's recommended to use `VACUUM FULL` when a table has accumulated a lot of unused space, which can happen after heavy updates or deletions.
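To check whether a table has actually accumulated enough dead space to be worth a `VACUUM FULL`, you can query the standard Postgres `pg_stat_user_tables` statistics view (this is general Postgres functionality, not Neon-specific):

```sql
-- Tables with the most dead tuples: likely candidates for vacuuming
SELECT relname,
       n_live_tup,
       n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

A high `n_dead_tup` count relative to `n_live_tup` suggests significant reclaimable space. Keep in mind that, as noted below for autovacuum, these activity statistics are lost when your compute suspends, so check them during an active session.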
For smaller tables or less frequent updates, a regular `VACUUM` is usually enough. To reclaim space using `VACUUM FULL`, you can run the following command per table you want to vacuum:

```sql
VACUUM FULL your_table_name;
```

However, there are some trade-offs: - **Table locking** — `VACUUM FULL` locks your table during the operation. If this is your production database, this may not be an option. - **Temporary storage spike** — The process creates a new table, temporarily increasing your storage. If the table is large, this could push you over your plan's storage allowance, triggering extra usage charges. On the Free plan, this might even cause the operation to fail if you hit the storage limit. In short, `VACUUM FULL` can help reduce your data size and future storage costs, but it can also result in temporary extra usage charges for the current billing period. **Recommendations** - **Set a reasonable history window** — We recommend setting your restore window to balance your data recovery needs and storage costs. Longer history means more data recovery options, but it consumes more storage. - **Use VACUUM FULL sparingly** — Because it locks tables and can temporarily increase storage costs, only run `VACUUM FULL` when there is a significant amount of space to be reclaimed and you're prepared for a temporary spike in storage consumption. - **Consider timing** — Running `VACUUM FULL` near the end of the month can help minimize the time that temporary storage spikes impact your bill, since charges are prorated. - **Manual VACUUM for scale to zero users** — In Neon, [autovacuum](https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM) is enabled by default. However, when your compute endpoint suspends due to inactivity, the database activity statistics that autovacuum relies on are lost. If your project uses [scale to zero](https://neon.com/docs/guides/scale-to-zero-guide#considerations), it's safer to run manual `VACUUM` operations regularly on frequently updated tables rather than relying on autovacuum. This helps avoid potential issues caused by the loss of statistics when your compute endpoint is suspended. To clean a single table named `playing_with_neon`, analyze it for the optimizer, and print a detailed vacuum activity report:

```sql
VACUUM (VERBOSE, ANALYZE) playing_with_neon;
```

See [VACUUM and ANALYZE statistics](https://neon.com/docs/postgresql/query-reference#vacuum-and-analyze-statistics) for a query that shows the last time vacuum and analyze were run. Details: **What is the maximum data size that Neon supports?** Each [Neon plan](https://neon.com/docs/introduction/plans) comes with a specific storage allowance. Beyond this allowance on paid plans, extra usage costs apply. Billing-related allowances aside, Neon projects can support data sizes up to 4 TiB. To increase this limit, [contact the Neon Sales team](https://neon.com/contact-sales). ### Archive storage To minimize storage costs, Neon **automatically** archives branches that are **older than 14 days** and **have not been accessed for the past 24 hours**. Both conditions must be true for a branch to be archived. Additionally, these conditions apply: - A branch cannot be archived if it has an unarchived child branch. - A child branch must be archived before a parent branch can be archived. No action is required to unarchive a branch. It happens automatically. Connecting to an archived branch, querying it, or performing some other action that accesses it will trigger the unarchive process. 
It's important to note that when a branch is unarchived, its parent branches, all the way up to the root branch, are also unarchived. **Note**: Each Neon plan comes with an allowance of **Archive storage** that's included in your plan's monthly fee. See [Neon plans](https://neon.com/docs/introduction/plans). Extra archive storage is billed per GB-month. To learn how extra archive storage is allocated and billed, see [Extra usage](https://neon.com/docs/introduction/extra-usage). For more about how Neon automatically archives inactive branches, see [Branch archiving](https://neon.com/docs/guides/branch-archiving). To understand how archive storage is implemented in Neon's architecture, refer to [Archive storage](https://neon.com/docs/introduction/architecture-overview#archive-storage) in our architecture documentation. ### Compute Compute hour usage is calculated by multiplying compute size by _active hours_. **Tip** Compute Hours Formula: ``` compute hours = compute size * active hours ``` - A single **compute hour** is one _active hour_ for a compute with 1 vCPU. For a compute with .25 vCPU, it would take 4 _active hours_ to use 1 compute hour. On the other hand, if your compute has 4 vCPUs, it would only take 15 minutes to use 1 compute hour. - An **active hour** is a measure of the amount of time a compute is active. The time your compute is idle when suspended due to inactivity is not counted. - **Compute size** is measured at regular intervals and averaged to calculate compute hour usage. Compute size in Neon is measured in _Compute Units (CUs)_. One CU has 1 vCPU and 4 GB of RAM. A Neon compute can have anywhere from .25 to 56 CUs, as outlined below: | Compute Unit | vCPU | RAM | | :----------- | :--- | :----- | | .25 | .25 | 1 GB | | .5 | .5 | 2 GB | | 1 | 1 | 4 GB | | 2 | 2 | 8 GB | | 3 | 3 | 12 GB | | 4 | 4 | 16 GB | | 5 | 5 | 20 GB | | 6 | 6 | 24 GB | | 7 | 7 | 28 GB | | 8 | 8 | 32 GB | | 9 | 9 | 36 GB | | 10 | 10 | 40 GB | | 11 | 11 | 44 GB | | 12 | 12 | 48 GB | | 13 | 13 | 52 GB | | 14 | 14 | 56 GB | | 15 | 15 | 60 GB | | 16 | 16 | 64 GB | | 18 | 18 | 72 GB | | 20 | 20 | 80 GB | | 22 | 22 | 88 GB | | 24 | 24 | 96 GB | | 26 | 26 | 104 GB | | 28 | 28 | 112 GB | | 30 | 30 | 120 GB | | 32 | 32 | 128 GB | | 34 | 34 | 136 GB | | 36 | 36 | 144 GB | | 38 | 38 | 152 GB | | 40 | 40 | 160 GB | | 42 | 42 | 168 GB | | 44 | 44 | 176 GB | | 46 | 46 | 184 GB | | 48 | 48 | 192 GB | | 50 | 50 | 200 GB | | 52 | 52 | 208 GB | | 54 | 54 | 216 GB | | 56 | 56 | 224 GB | - A connection from a client or application activates a compute. Activity on the connection keeps the compute in an `Active` state. A defined period of inactivity (5 minutes by default) places the compute into an idle state. #### How Neon compute features affect usage Compute-hour usage in Neon is affected by [scale to zero](https://neon.com/docs/guides/scale-to-zero-guide), [autoscaling](https://neon.com/docs/guides/autoscaling-guide), and your minimum and maximum [compute size](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration) configuration. With these features enabled, you can get a sense of how your compute hour usage might accrue in the following graph. You can see how compute size scales between your minimum and maximum CPU settings, increasing and decreasing compute usage: compute size never rises above your max level, and it never drops below your minimum setting. With scale to zero, no compute time at all accrues during inactive periods. 
For projects with inconsistent demand, this can save significant compute usage. **Note**: Neon uses a small amount of compute time, included in your billed compute hours, to perform periodic checks to ensure that your computes can start and read and write data. See [Availability Checker](https://neon.com/docs/reference/glossary#availability-checker) for more information. Availability checks take a few seconds and are typically performed a few days apart. You can monitor these checks, how long they take, and how often they occur, on the **System operations** tab on the **Monitoring** page in the Neon Console. #### Estimate your compute hour usage To estimate what your compute hour usage might be per month: 1. Determine the compute size you require, in Compute Units (CUs). 1. Estimate the amount of _active hours_ per month for your compute(s). 1. Input the values into the compute hours formula:

```text
compute hours = compute size * active hours
```

For example, this is a calculation for a 2 vCPU compute that is active for all hours in a month (approx. 730 hours):

```text
2 * 730 = 1460 compute hours
```

This calculation is useful when trying to select the right Neon plan or when estimating the extra compute usage you might need. **Note**: If you plan to use Neon's _Autoscaling_ feature, estimating **compute hours** is more challenging. Autoscaling adjusts the compute size based on demand within the defined minimum and maximum compute size thresholds. The best approach is to estimate an average compute size and modify the compute hours formula as follows:

```text
compute hours = average compute size * active hours
```

To estimate an average compute size, start with a minimum compute size that can hold your data or working set (see [How to size your compute](https://neon.com/docs/manage/endpoints#how-to-size-your-compute)). Pick a maximum compute size that can handle your peak loads. Try estimating an average compute size between those thresholds based on your workload profile for a typical day. #### Compute FAQs Details: **What is a compute hour?** It's a metric for tracking compute usage. 1 compute hour is equal to 1 [active hour](https://neon.com/docs/introduction/legacy-plans#active-hours) for a compute with 1 vCPU. If you have a compute with .25 vCPU, as you would on the Neon Free plan, it would require 4 _active hours_ to use 1 compute hour. On the other hand, if you have a compute with 4 vCPUs, it would only take 15 minutes to use 1 compute hour. To calculate compute hour usage, you would use the following formula:

```
compute hours = compute size * active hours
```

Details: **I used a lot of compute hours, but I don't use the compute that often. Where is the usage coming from?** If you're noticing an unexpectedly high number of compute hours, consider the following steps: - **Check your compute size:** Compute sizes range from 0.25 CU to 56 CUs. Larger compute sizes will consume more compute hours for the same active period. The formula for compute hour usage is: `compute hours = compute size * active hours`. If your application can operate effectively with a smaller compute size (less vCPU and RAM), you can reduce compute hours by configuring a smaller compute. See [Edit a compute](https://neon.com/docs/manage/endpoints#edit-a-compute) for instructions. - **Check for active applications or clients**: Some applications or clients might be polling or querying your compute regularly, preventing it from scaling to zero. 
For instance, if you're replicating data from Neon to another service, that service may poll your compute endpoint at regular intervals to detect changes for replication. This behavior is often configurable. To investigate database activity, you can run the following query to check connections:

```sql
SELECT client_addr,
       COUNT(*) AS connection_count,
       MAX(backend_start) AS last_connection_time
FROM pg_stat_activity
GROUP BY client_addr
ORDER BY connection_count DESC;
```

This query displays the IP addresses connected to the database, the number of connections, and the most recent connection time. Details: **How many compute hours do I get with my plan?** Each of [Neon's plans](https://neon.com/docs/introduction/plans) includes a certain number of compute hours per month: - **Free plan**: This plan includes 191.9 compute hours per month, and you can use up to 5 of those compute hours with non-default branches, in case you want to use Neon's branching feature. Why 191.9? This is enough compute hours to provide 24/7 availability on a 0.25 vCPU compute (our smallest compute size) on your default branch. The math works like this: An average month has about 730 hours. A 0.25 vCPU compute uses 1/4 compute hour per hour, which works out to about 182.5 compute hours per month if you run the 0.25 vCPU compute 24/7. The additional 9.4 compute hours per month are a little extra that we've added on top for good measure. You can enable autoscaling on the Free plan to allow your compute to scale up to 2 vCPU, but please be careful not to use up all of your 191.9 compute hours before the end of the month. - **Launch plan**: This plan includes 300 compute hours (1,200 active hours on a 0.25 vCPU compute) total per month for all computes in all projects. Beyond 300 compute hours, you are billed for compute hours at $0.16 per hour. - **Scale plan**: This plan includes 750 compute hours (3,000 active hours on a 0.25 vCPU compute) total per month for all computes in all projects. Beyond 750 compute hours, you are billed an extra $0.16 per additional hour. - **Business plan**: This plan includes 1,000 compute hours (4,000 active hours on a 0.25 vCPU compute) total per month for all computes in all projects. Beyond 1,000 compute hours, you are billed an extra $0.16 per additional hour. Details: **Where can I monitor compute hour usage?** You can monitor compute hour usage for a Neon project on the [Project Dashboard](https://neon.com/docs/introduction/monitor-usage#project-dashboard). To monitor compute usage for your Neon account (all compute usage across all projects), refer to your **Billing** page. See [View usage metrics in the Neon Console](https://neon.com/docs/introduction/monitor-usage#view-usage-metrics-in-the-neon-console). Details: **What happens when I go over my plan's compute hour allowance?** On the Free plan, if you go over the 191.9 compute hour allowance, all computes are suspended until the beginning of the next month. On our paid plans (Launch, Scale, and Business), you are billed automatically for any compute hours over your monthly allowance: 300 compute hours on Launch, 750 on Scale, and 1,000 on Business. The billing rate is $0.16 per compute hour. Details: **Can I purchase more compute hours?** On the Free plan, no. You'll have to upgrade to a paid plan. On the Launch, Scale, and Business plans, you are billed automatically for any compute hours over your monthly allowance: 300 compute hours on Launch, 750 compute hours on Scale, and 1,000 on Business. The billing rate is $0.16 per compute hour.
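To make the billing arithmetic concrete, here's an illustrative overage calculation using the compute hours formula from above (the workload numbers are hypothetical, and SQL is used here only as a calculator):

```sql
-- Hypothetical: a 0.5 CU compute active for 800 hours in a month on the Launch plan
-- compute hours = compute size * active hours; overage is billed at $0.16 per compute hour
SELECT 0.5 * 800 AS compute_hours_used,                     -- 400 compute hours
       GREATEST(0.5 * 800 - 300, 0) * 0.16 AS extra_charge; -- 100 extra hours = $16.00
```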
Details: **How does autoscaling affect my compute hour usage?** The formula for compute hour usage is: `compute hours = compute size * active hours`. You will use more compute hours when your compute scales up in size to meet demand. When you enable autoscaling, you define a max compute size, which acts as a limit on your maximum potential compute usage. See [Configuring autoscaling](https://neon.com/docs/introduction/autoscaling#configuring-autoscaling). Details: **How does compute size affect my compute hour usage?** The formula for compute hour usage is: `compute hours = compute size * active hours`. If you increase your compute size for more vCPU and RAM to improve performance, you will use more compute hours. Details: **How does scale to zero affect my compute hour usage?** Scale to zero places your compute into an idle state when it's not being used, which helps minimize compute hour usage. When enabled, computes are suspended after 5 minutes of inactivity. On Neon's paid plans, you can disable scale to zero. See [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero). ### Data transfer Data transfer refers to the total volume of data transferred out of Neon (egress) during a billing period. Egress also includes data transferred from Neon via Postgres logical replication to any destination, including Neon itself. While Neon doesn’t charge for egress, Free plan projects are limited to 5 GB of data transfer per month. If a project exceeds this limit, its compute is suspended and the following error is shown:

```text
Your project has exceeded the data transfer quota. Upgrade your plan to increase limits.
```

If you hit the data transfer limit on the Free plan, you can upgrade your plan from the **Billing** page in your Neon account. For details, see [Change your plan](https://neon.com/docs/introduction/manage-billing#change-your-plan). For paid plans, Neon applies a reasonable usage policy—there’s no fixed limit, but usage should remain within what’s typical for most workloads. If usage is unusually high, Neon may reach out to discuss your use case and plan options. You can monitor your data transfer usage on the **Project Dashboard** or **Billing** page. ### Projects In Neon, everything starts with a project. A project is a container for your branches, databases, roles, and other resources and settings. A project also defines the region your data and resources reside in. We typically recommend creating a project for each application or each client. In addition to organizing objects, projects are a way to track storage and compute usage by application or client. The following table outlines project allowances for each Neon plan. | Plan | Projects | | :--------- | :-------- | | Free plan | 1 | | Launch | 100 | | Scale | 1000 | | Business | 5000 | | Enterprise | Unlimited | - When you reach your limit on the Free plan or Launch plan, you cannot create additional projects. - Extra projects are available as extra usage on the Scale and Business plans. --- # Source: https://neon.com/llms/introduction-logical-replication.txt # Logical replication > The document outlines the process of setting up logical replication in Neon, detailing how to configure publications and subscriptions to replicate data changes between databases. 
## Source - [Logical replication HTML](https://neon.com/docs/introduction/logical-replication): The original HTML version of this documentation Neon's logical replication feature, available to all Neon users, allows you to replicate data to and from your Neon Postgres database: - Perform live migrations to Neon from external sources such as AWS RDS and Google Cloud SQL — or any platform that runs Postgres. - Stream data from your Neon database to external destinations, enabling Change Data Capture (CDC) and real-time analytics. External destinations might include data warehouses, analytical database services, real-time stream processing systems, messaging and event-streaming platforms, and external Postgres databases, among others. - Replicate data from one Neon project to another to support Neon project, account, Postgres version, or region migrations. Logical replication in Neon works in the same way as logical replication on a standard Postgres installation, using a publish and subscribe model to replicate data from the source database to the destination. To learn more, refer to our [Logical replication guide](https://neon.com/docs/guides/logical-replication-guide). --- # Source: https://neon.com/llms/introduction-manage-billing.txt # Manage billing > The "Manage billing" document outlines the procedures for Neon users to access, review, and update their billing information, including payment methods and billing history, within the Neon platform. ## Source - [Manage billing HTML](https://neon.com/docs/introduction/manage-billing): The original HTML version of this documentation What you will learn: - How to access the Billing page - How to update your billing information - How to download invoices - How to change plans - How to prevent further monthly charges - How to delete your account Related topics: - [Neon plans](https://neon.com/docs/introduction/plans) - [Monitoring billing and usage](https://neon.com/docs/introduction/monitor-usage) ## View the Billing page You can view and manage billing from the **Billing** page in the Neon Console. To access your **Billing** page: 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing** from the menu to view the charges to date. On the **Billing** page, you will find a summary outlining current charges and the details of your plan, your payment information, and your monthly invoices. ## Update your payment method To update your payment method: 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing** from the menu. 1. Navigate to the **Payment info** section of the page. 1. Locate **Payment method** and click **Edit**. If you are unable to update your payment method, please [contact support](https://neon.com/docs/introduction/support). ## Payment issues ### Missed payments If an auto-debit payment transaction fails, Neon sends a request to update your payment method. Late fees and payment policies are described in [Neon's Terms of Service](https://neon.com/terms-of-service). ### Failing payments for Indian customers Neon's billing system uses **Stripe Checkout**, which does not currently support **e-Mandates** — a requirement from the Reserve Bank of India (RBI) for automatic recurring payments. Because of this, customers in India cannot set up automatic monthly payments. 
In the event of a payment failure, please [contact support](https://neon.com/docs/introduction/support) to request a link to your invoice to complete the payment manually. ## Update your billing email To update your billing email: 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing** from the menu. 1. Navigate to the **Payment info** section of the page. 1. Locate **Billing email** and click **Edit**. If you are unable to update your billing email, please [contact support](https://neon.com/docs/introduction/support). ## Invoices A Neon invoice includes the charges and the amount due for the billing period. For an explanation of what you've been billed for, see [Usage metrics](https://neon.com/docs/introduction/plans#usage-metrics). ### Download invoices To download an invoice: 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing** from the menu. 1. Navigate to the **Invoices** section of the page. 1. Find the invoice you want to download and select **Download** from the menu. **Note**: When an invoice is paid, Neon's billing system sends a payment confirmation email to the address associated with the Neon account. ### Request a refund If you find an issue with your invoice, you can request a refund. The request will be reviewed by the Neon billing team. 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing** from the menu. 1. Click the **View past invoices** button. 1. Find the invoice you want to request a refund for, and select **Request credit note** from the menu. Enter a problem description explaining the reason for the request. ## Change your plan **Important**: You cannot upgrade or downgrade to a [legacy plan](https://neon.com/docs/introduction/legacy-plans). If you're currently on a legacy plan, you can only upgrade or downgrade to one of the [current usage-based pricing plans](https://neon.com/docs/introduction/plans). To upgrade or downgrade your plan: 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing** from the menu. 1. Select **Change plan**. Changing your plan to one with lower usage allowances may affect the performance of your applications. To compare plan allowances, see [Neon plans](https://neon.com/docs/introduction/plans#neon-plans). If you are downgrading your plan, you will be required to remove any projects, branches, or data that exceed your new plan allowances. To downgrade from a **legacy Enterprise plan**, please contact [Sales](https://neon.com/contact-sales). Cancellation of a legacy Enterprise plan is handled according to the Master Subscription Agreement (MSA) outlined in the Customer Agreement. ## How to prevent further monthly charges to your account If you're on a Neon paid plan, you need to downgrade to the Free plan to avoid further monthly charges. You can do so from the [Billing](https://console.neon.tech/app/billing#change_plan) page in the Neon Console. Simply removing all Neon projects will **not** stop the monthly fee associated with your plan. You will continue to be invoiced until you downgrade to Free. ## Delete your account If you would like to delete your Neon account entirely, please refer to the steps described here: [Deleting your account](https://neon.com/docs/manage/accounts#delete-account). 
--- # Source: https://neon.com/llms/introduction-monitor-active-queries.txt # Monitor active queries > The document "Monitor active queries" details how Neon users can track and manage active database queries, offering guidance on utilizing Neon's tools for real-time query monitoring and performance assessment. ## Source - [Monitor active queries HTML](https://neon.com/docs/introduction/monitor-active-queries): The original HTML version of this documentation You can monitor active queries for your Neon project from the **Monitoring** page in the Neon Console. 1. In the Neon Console, select a project. 2. Go to **Monitoring**. 3. Select the **Active queries** tab. The **Active queries** view displays up to 100 currently running queries for the selected **Branch**, **Compute**, and **Database**. Use the **Refresh** button to update the list with the latest active queries. The **Active queries** view is powered by the `pg_stat_activity` Postgres system view, which is available in Neon by default. To run custom queries against the data collected by `pg_stat_activity`, you can use the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any SQL client, such as [psql](https://neon.com/docs/connect/query-with-psql-editor). For details on `pg_stat_activity`, see [pg_stat_activity](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW) in the PostgreSQL documentation. **Note** active queries retention: In Neon, the `pg_stat_activity` system view only holds data on currently running queries. Once a query completes, it no longer appears in the **Active queries** view. If your Neon compute scales down to zero due to inactivity, there will be no active queries until a new connection is established and a query is run. --- # Source: https://neon.com/llms/introduction-monitor-pgadmin.txt # Monitor Neon with pgAdmin > The document details the process for monitoring Neon databases using pgAdmin, including setup instructions and configuration steps specific to Neon's environment. ## Source - [Monitor Neon with pgAdmin HTML](https://neon.com/docs/introduction/monitor-pgadmin): The original HTML version of this documentation pgAdmin is a database management tool for Postgres designed to facilitate various database tasks, including monitoring performance metrics. With pgAdmin, you can monitor real-time activity for a variety of metrics including: - Active sessions (Total, Active, and Idle) - Transactions per second (Transactions, Commits, Rollbacks) - Tuples in (Inserts, Updates, Deletes) - Tuples out (Fetched, Returned) - Block I/O for shared buffers (see [Cache your data](https://neon.com/docs/postgresql/query-performance#cache-your-data) for information about Neon's Local File Cache) - Database activity (Sessions, Locks, Prepared Transactions) **Note** Notes: Neon currently does not support the `system_stats` extension required to use the **System Statistics** tab in pgAdmin. It's also important to note that pgAdmin, while active, polls your database for statistics, which does not allow your compute to suspend as it normally would when there is no other database activity. ## How to install pgAdmin Pre-compiled and configured installation packages for pgAdmin 4 are available for different desktop environments. For installation instructions, refer to the [pgAdmin deployment documentation](https://www.pgadmin.org/docs/pgadmin4/latest/deployment.html). Downloads can be found on the [PgAdmin Downloads](https://www.pgadmin.org/download/) page. 
## How to connect to your database from pgAdmin Find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Enter your connection details as shown [here](https://neon.com/docs/connect/connect-postgres-gui#connect-to-the-database). Neon uses the default Postgres port: `5432`. --- # Source: https://neon.com/llms/introduction-monitor-pghero.txt # Monitor Neon with PgHero > The document explains how to use PgHero to monitor Neon databases, detailing the setup process and key features for performance analysis and optimization within the Neon environment. ## Source - [Monitor Neon with PgHero HTML](https://neon.com/docs/introduction/monitor-pghero): The original HTML version of this documentation [PgHero](https://github.com/ankane/pghero) is an open-source performance tool for Postgres that can help you find and fix data issues, using a dashboard interface. A quick look at the interface gives you an idea of what you'll find in PgHero. Among other things, you can use PgHero to: - Identify long-running queries - Identify tables that require vacuuming - Identify duplicate or missing indexes - View connections by database and user - Explain, analyze, and visualize queries **Note**: Neon does not currently support monitoring tools or platforms that require installing an agent on the Postgres host system, but please keep an eye on our [roadmap](https://neon.com/docs/introduction/roadmap) for future integrations that enable these monitoring options. ## How to install PgHero PgHero supports installation with [Docker](https://github.com/ankane/pghero/blob/master/guides/Docker.md), [Linux](https://github.com/ankane/pghero/blob/master/guides/Linux.md), and [Rails](https://github.com/ankane/pghero/blob/master/guides/Rails.md). Here, we'll show how to install PgHero with Docker and connect it to a Neon database. Before you begin: - Ensure that you have the [pg_stat_statements](https://neon.com/docs/extensions/pg_stat_statements) extension installed. PgHero uses it for query stats. - Ensure that you have Docker installed. See [Install Docker Engine](https://docs.docker.com/engine/install/) for instructions. PgHero is available on [DockerHub](https://hub.docker.com/r/ankane/pghero/). To install it, run:

```
docker pull ankane/pghero
```

## How to connect to your database from PgHero Find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Finally, run this command, replacing `$NEON_DB` with your Neon database connection string.

```
docker run -ti -e DATABASE_URL='$NEON_DB' -p 8080:8080 ankane/pghero
```

Then visit http://localhost:8080 in your browser to open the PgHero Dashboard. --- # Source: https://neon.com/llms/introduction-monitor-query-performance.txt # Monitor query performance > The document outlines methods for monitoring query performance in Neon, detailing tools and techniques to analyze and optimize database queries effectively. ## Source - [Monitor query performance HTML](https://neon.com/docs/introduction/monitor-query-performance): The original HTML version of this documentation You can monitor query history for your Neon project from the **Monitoring** page in the Neon Console. 1. In the Neon Console, select a project. 2. Go to **Monitoring**. 3. Select the **Query performance** tab. The **Query performance** view shows the top 100 previously run queries for the selected **Branch**, **Compute**, and **Database**. 
Queries are grouped by their normalized form, with identical queries shown as a single row with a **Frequency** column indicating the number of times that query has been executed. Queries can be sorted by **Frequency** or **Average time**. Use the **Refresh** button to load the latest queries. The **Query performance** view is powered by the `pg_stat_statements` Postgres extension, installed on a system-managed database in your Postgres instance. Query history includes all queries run against your database, regardless of where they were issued from (Neon SQL Editor, external clients, or applications). **Note** query history retention: In Neon, data collected by the `pg_stat_statements` extension is not retained when your Neon compute (where Postgres runs) is suspended or restarted. For example, if your compute scales down to zero due to inactivity, your query history is lost. New data will be gathered once your compute restarts. ## Running your own queries To run your own queries on `pg_stat_statements` data, you can install the `pg_stat_statements` extension in your database and run your queries from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any SQL client, such as [psql](https://neon.com/docs/connect/query-with-psql-editor). For details on `pg_stat_statements`, including how to install it, what data it collects, and queries you can run, refer to our [pg_stat_statements](https://neon.com/docs/extensions/pg_stat_statements) extension guide. --- # Source: https://neon.com/llms/introduction-monitor-usage.txt # Monitor billing and usage > The document outlines how Neon users can monitor their billing and usage through the Neon Console, detailing steps to access usage metrics and billing information for effective account management. ## Source - [Monitor billing and usage HTML](https://neon.com/docs/introduction/monitor-usage): The original HTML version of this documentation Neon exposes usage metrics in the Neon Console and through the Neon API. For an explanation of Neon's usage metrics, see [Usage metrics](https://neon.com/docs/introduction/plans#usage-metrics). ## View usage metrics in the Neon Console Usage metrics in the console can be found on the **Billing** page. ### Billing page You can monitor billing and usage for your Neon account from the **Billing** page in the Neon Console. 1. Navigate to the Neon Console. 1. Select your organization from the breadcrumb menu at the top-left of the console. 1. Select **Billing**. Here you will find the current bill and usage for your Neon account. Usage metrics on the **Billing** page include: - Compute, CU-hour - Extra branches, branch-month - Instant restore storage, GB-month - Private data transfer, GB - Public data transfer, GB - Storage (root branches), GB-month - Storage (child branches), GB-month These are the usage metrics you'll find on your monthly invoice, if they apply to your plan. For an explanation of usage metrics, refer to the [Plan feature](https://neon.com/docs/introduction/plans#plan-features) explanations. **Note** billing metrics for pre-2025 custom contract customers: If you signed a contract with Neon prior to 01/01/2025, different billing metrics apply: - **Storage** is measured in GiBs instead of [GB-month](https://neon.com/docs/reference/glossary#gb-month), and if you exceed your contract's monthly storage allowance, extra storage units are automatically allocated and billed. 
Extra storage charges are applied based on the number of additional storage units needed to cover peak storage usage during the current billing period, prorated from the date the extra storage was allocated. Peak usage resets at the beginning of the next billing period. - **Written data** is the total volume of data written from compute to storage during the monthly billing period, measured in gibibytes (GiB). If you have questions or want to change the billing metrics defined in your contract, please contact your Neon sales representative. ## Retrieve usage metrics with the Neon API You can retrieve a variety of usage metrics using the Neon API. **Tip** monitoring usage for a large number of projects: Enterprise users can use Neon's advanced `consumption` endpoints to monitor account and project usage. These endpoints are recommended when monitoring usage for a large number of projects. See [Querying consumption metrics](https://neon.com/docs/guides/consumption-metrics). Any user can query usage metrics for a branch or a project, as described below. See: - [Get branch details](https://neon.com/docs/introduction/monitor-usage#get-branch-details) - [Get project details](https://neon.com/docs/introduction/monitor-usage#get-project-details) ### Get branch details This example shows how to retrieve branch details using the [Get branch details](https://api-docs.neon.tech/reference/getprojectbranch) API method. Usage data is highlighted. Refer to the response body section of the [Get branch details](https://api-docs.neon.tech/reference/getprojectbranch) documentation for descriptions.

```curl
curl --request GET \
     --url https://console.neon.tech/api/v2/projects/summer-bush-30064139/branches/br-polished-flower-a5tq1sdv \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" | jq
```

**Response body**

```json {7,11-15}
{
  "branch": {
    "id": "br-polished-flower-a5tq1sdv",
    "project_id": "summer-bush-30064139",
    "name": "main",
    "current_state": "ready",
    "logical_size": 427474944,
    "creation_source": "console",
    "default": true,
    "protected": false,
    "cpu_used_sec": 2505,
    "compute_time_seconds": 2505,
    "active_time_seconds": 9924,
    "written_data_bytes": 1566733560,
    "data_transfer_bytes": 40820887,
    "created_at": "2024-04-02T12:54:33Z",
    "updated_at": "2024-04-10T17:43:21Z"
  }
}
```

### Get project details This example shows how to retrieve project details using the [Get project details](https://api-docs.neon.tech/reference/getproject) API method. Usage data is highlighted. Refer to the response body section of the [Get project details](https://api-docs.neon.tech/reference/getproject) documentation for descriptions. 
```curl
curl --request GET \
     --url https://console.neon.tech/api/v2/projects/summer-bush-30064139 \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" | jq
```

**Response body**

```json {3-8,35}
{
  "project": {
    "data_storage_bytes_hour": 113808080168,
    "data_transfer_bytes": 40821459,
    "written_data_bytes": 1566830744,
    "compute_time_seconds": 2785,
    "active_time_seconds": 11024,
    "cpu_used_sec": 2785,
    "id": "summer-bush-30064139",
    "platform_id": "aws",
    "region_id": "aws-us-east-2",
    "name": "summer-bush-30064139",
    "provisioner": "k8s-neonvm",
    "default_endpoint_settings": {
      "autoscaling_limit_min_cu": 0.25,
      "autoscaling_limit_max_cu": 0.25,
      "suspend_timeout_seconds": 0
    },
    "settings": {
      "allowed_ips": {
        "ips": [],
        "protected_branches_only": false
      },
      "enable_logical_replication": false
    },
    "pg_version": 16,
    "proxy_host": "us-east-2.aws.neon.tech",
    "branch_logical_size_limit": 204800,
    "branch_logical_size_limit_bytes": 214748364800,
    "store_passwords": true,
    "creation_source": "console",
    "history_retention_seconds": 86400,
    "created_at": "2024-04-02T12:54:33Z",
    "updated_at": "2024-04-10T17:26:07Z",
    "synthetic_storage_size": 492988552,
    "consumption_period_start": "2024-04-02T12:54:33Z",
    "consumption_period_end": "2024-05-01T00:00:00Z",
    "quota_reset_at": "2024-05-01T00:00:00Z",
    "owner_id": "8d5f604c-d04e-4795-baf7-e87909a5d959",
    "owner": {
      "email": "alex@domain.com",
      "branches_limit": -1,
      "subscription_type": "launch"
    },
    "compute_last_active_at": "2024-04-10T17:26:05Z"
  }
}
```

**Tip** Optimize your costs: For strategies to reduce your Neon costs across compute, storage, branches, and data transfer, see our [Cost optimization](https://neon.com/docs/introduction/cost-optimization) guide. --- # Source: https://neon.com/llms/introduction-monitoring-page.txt # Monitoring dashboard > The Monitoring Dashboard documentation explains how to use Neon's interface to track database performance metrics, including query execution times and resource utilization, enabling users to effectively manage and optimize their database operations. ## Source - [Monitoring dashboard HTML](https://neon.com/docs/introduction/monitoring-page): The original HTML version of this documentation The **Monitoring** dashboard in the Neon console provides several graphs for monitoring system and database metrics. You can access the **Monitoring** dashboard from the sidebar in the Neon Console. Observable metrics include: - [RAM](https://neon.com/docs/introduction/monitoring-page#ram) - [CPU](https://neon.com/docs/introduction/monitoring-page#cpu) - [Connections count](https://neon.com/docs/introduction/monitoring-page#connections-count) - [Database size](https://neon.com/docs/introduction/monitoring-page#database-size) - [Deadlocks](https://neon.com/docs/introduction/monitoring-page#deadlocks) - [Rows](https://neon.com/docs/introduction/monitoring-page#rows) - [Replication delay bytes](https://neon.com/docs/introduction/monitoring-page#replication-delay-bytes) - [Replication delay seconds](https://neon.com/docs/introduction/monitoring-page#replication-delay-seconds) - [Local file cache hit rate](https://neon.com/docs/introduction/monitoring-page#local-file-cache-hit-rate) - [Working set size](https://neon.com/docs/introduction/monitoring-page#working-set-size) Your Neon plan defines the range of data you can view. 
| Neon Plan | Data Access | | ----------------------------------------------- | ------------------------ | | Free | 1 day | | Launch | 3 days | | Scale | 14 days | You can select different periods or a custom period within the permitted range from the menu on the dashboard. The dashboard displays metrics for the selected **Branch** and **Compute**. Use the drop-down menus to view metrics for a different branch or compute. Use the **Refresh** button to update the displayed metrics. If your compute was idle or there has not been much activity, graphs may display this message: `There is no data to display at the moment`. In this case, try selecting a different time period or returning later after more usage data has been collected. All time values displayed in graphs are in [Coordinated Universal Time (UTC)](https://en.wikipedia.org/wiki/Coordinated_Universal_Time). **Note** Endpoint Inactive: What does it mean?: The values and plotted lines in your graphs will drop to `0` when your compute is inactive because a compute must be active to report data. These inactive periods are also shown as a diagonal line pattern in the graph. ### RAM This graph shows allocated RAM and usage over time for the selected compute. **ALLOCATED**: The amount of allocated RAM. RAM is allocated according to the size of your compute or your [autoscaling](https://neon.com/docs/guides/autoscaling-guide) configuration, if applicable. For example, if your compute size is .25 CU (.25 vCPU with 1 GB RAM), your allocated RAM is always 1 (GB). With autoscaling, allocated RAM increases and decreases as your compute size scales up and down in response to load. If [scale to zero](https://neon.com/docs/guides/scale-to-zero-guide) is enabled and your compute transitions to an idle state after a period of inactivity, allocated RAM drops to 0. **USED**: The amount of RAM used. The graph plots a line showing the amount of RAM used. If the line regularly reaches the maximum amount of allocated RAM, consider increasing your compute size to increase the amount of allocated RAM. To see the amount of RAM allocated for each Neon compute size, see [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration). **CACHED**: The amount of data cached in memory. ### CPU This graph shows the amount of allocated CPU and usage over time for the selected compute. **ALLOCATED**: The amount of allocated CPU. CPU is allocated according to the size of your compute or your [autoscaling](https://neon.com/docs/guides/autoscaling-guide) configuration, if applicable. For example, if your compute size is .25 CU (.25 vCPU with 1 GB RAM), your allocated CPU is always 0.25. With autoscaling, allocated CPU increases and decreases as your compute size scales up and down in response to load. If [scale to zero](https://neon.com/docs/guides/scale-to-zero-guide) is enabled and your compute transitions to an idle state after a period of inactivity, allocated CPU drops to 0. **USED**: The amount of CPU used, in [Compute Units (CU)](https://neon.com/docs/reference/glossary#compute-unit-cu). If the plotted line regularly reaches the maximum amount of allocated CPU, consider increasing your compute size. To see the compute sizes available with Neon, see [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration). 
### Connections count The **Connections count** graph shows the number of idle connections, active connections, and the total number of connections over time for the selected compute. **ACTIVE**: The number of active connections for the selected compute. Monitoring active connections can help you understand your database workload at any given time. If the number of active connections is consistently high, it might indicate that your database is under heavy load, which could lead to performance issues such as slow query response times. See [Connections](https://neon.com/docs/postgresql/query-reference#connections) for related SQL queries. **IDLE**: The number of idle connections for the selected compute. Idle connections are those that are open but not currently being used. While a few idle connections are generally harmless, a large number of idle connections can consume unnecessary resources, leaving less room for active connections and potentially affecting performance. Identifying and closing unnecessary idle connections can help free up resources. See [Find long-running or idle connections](https://neon.com/docs/postgresql/query-reference#find-long-running-or-idle-connections). **TOTAL**: The sum of active and idle connections for the selected compute. **MAX**: The maximum number of simultaneous connections allowed for your compute size. The MAX line helps you visualize how close you are to reaching your connection limit. When your TOTAL connections approach the MAX line, you may want to consider: - Increasing your compute size to allow for more connections - Implementing [connection pooling](https://neon.com/docs/connect/connection-pooling), which supports up to 10,000 simultaneous connections - Optimizing your application's connection management The connection limit (defined by the Postgres `max_connections` setting) is set according to your Neon compute size configuration. For the connection limit for each Neon compute size, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute). ### Database size The **Database size** graph shows the logical data size (the size of your actual data) for the named database and the total size for all user-created databases (**All Databases**) on the selected branch. The **All Databases** metric is only shown when there is more than one database on the selected branch. **Important**: Database size metrics are only displayed while your compute is active. When your compute is idle, database size values are not reported, and the **Database size** graph shows zero even though data may be present. ### Deadlocks The **Deadlocks** graph shows a count of deadlocks over time for the named database on the selected branch. The named database is always the oldest database on the selected branch. Deadlocks occur in a database when two or more transactions simultaneously block each other by holding onto resources the other transactions need, creating a cycle of dependencies that prevent any of the transactions from proceeding, potentially leading to performance issues or application errors. For lock-related queries you can use to investigate deadlocks, see [Performance tuning](https://neon.com/docs/postgresql/query-reference#performance-tuning). To learn more about deadlocks in Postgres, see [Deadlocks](https://www.postgresql.org/docs/current/explicit-locking.html). ### Rows The **Rows** graph shows the number of rows deleted, updated, and inserted over time for the named database on the selected branch. 
The named database is always the oldest database on the selected branch. Row metrics are reset to zero whenever your compute restarts. Tracking rows inserted, updated, and deleted over time provides insights into your database's activity patterns. You can use this data to identify trends or irregularities, such as insert spikes or an unusual number of deletions. **Note**: Row metrics only capture row-level changes (`INSERT`, `UPDATE`, `DELETE`, etc.) and exclude table-level operations such as `TRUNCATE`. ### Replication delay bytes The **Replication delay bytes** graph shows the total size, in bytes, of the data that has been sent from the primary compute but has not yet been applied on the replica. A larger value indicates a higher backlog of data waiting to be replicated, which may suggest issues with replication throughput or resource availability on the replica. This graph is only visible when selecting a **Replica** compute from the **Compute** drop-down menu. ### Replication delay seconds The **Replication delay seconds** graph shows the time delay, in seconds, between the last transaction committed on the primary compute and the application of that transaction on the replica. A higher value suggests that the replica is behind the primary, potentially due to network latency, high replication load, or resource constraints on the replica. This graph is only visible when selecting a **Replica** compute from the **Compute** drop-down menu. ### Local file cache hit rate The **Local file cache hit rate** graph shows the percentage of read requests served from Neon's Local File Cache (LFC). Queries not served from either Postgres shared buffers or the Local File Cache retrieve data from storage, which is more costly and can result in slower query performance. To learn more about how Neon caches data and how the LFC works with Postgres shared buffers, see [What is the Local File Cache?](https://neon.com/docs/extensions/neon#what-is-the-local-file-cache) ### Working set size Your working set is the size of the distinct set of Postgres pages (relation data and indexes) accessed in a given time interval. To optimize for performance and consistent latency, it is recommended to size your compute so that the working set fits into Neon's [Local File Cache (LFC)](https://neon.com/docs/extensions/neon#what-is-the-local-file-cache) for quick access. The **Working set size** graph visualizes the amount of data accessed—calculated as unique pages accessed × page size—over a given interval. Here's how to interpret the graph: - **5m** (5 minutes): This line shows the data accessed in the last 5 minutes. - **15m** (15 minutes): Similar to the 5-minute window, this metric tracks the data accessed in the last 15 minutes. - **1h** (1 hour): This line represents the data accessed in the last hour. - **Local file cache size**: This is the size of the LFC, which is determined by the size of your compute. Larger computes have larger caches. For cache sizes, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute). For optimal performance, the local file cache should be larger than your working set size for a given time interval. If your working set size is larger than the LFC size, it is recommended to increase the maximum size of your compute to improve the LFC hit rate and achieve good performance. 
If your workload pattern doesn't change much over time, compare the 1-hour working set size with the LFC size and make sure the working set stays smaller than the LFC.

---

# Source: https://neon.com/llms/introduction-monitoring.txt

# Monitoring in Neon

> The "Monitoring in Neon" documentation outlines the tools and procedures for tracking database performance and health, enabling users to efficiently manage and troubleshoot their Neon database environments.

## Source

- [Monitoring in Neon HTML](https://neon.com/docs/introduction/monitoring): The original HTML version of this documentation

To find out what's going on with your Neon projects and databases, Neon offers several ways to track metrics and monitor usage.

- [Monitoring dashboard](https://neon.com/docs/introduction/monitoring-page): View system and database metrics on the Neon Monitoring dashboard
- [Monitor billing and usage](https://neon.com/docs/introduction/monitor-usage): Monitor billing and usage metrics for your Neon account and projects
- [Autoscaling](https://neon.com/docs/guides/autoscaling-guide#monitor-autoscaling): Monitor Autoscaling vCPU and RAM usage
- [Neon system operations](https://neon.com/docs/manage/operations): Monitor Neon project operations from the Neon Console, API, or CLI
- [Active Queries](https://neon.com/docs/introduction/monitor-active-queries): View and analyze running queries in your database
- [Query performance](https://neon.com/docs/introduction/monitor-query-performance): View and analyze query performance for your Neon database
- [pg_stat_statements](https://neon.com/docs/extensions/pg_stat_statements): Monitor query performance and statistics in Postgres with pg_stat_statements

## Datadog integration

- [Datadog](https://neon.com/docs/guides/datadog): Export Neon Metrics to Datadog with the Neon Datadog Integration

## OpenTelemetry

- [OTel integration](https://neon.com/docs/guides/opentelemetry): Export Neon metrics to any OpenTelemetry-compatible observability platform
- [Grafana Cloud](https://neon.com/docs/guides/grafana-cloud): Export Neon metrics and logs to Grafana Cloud with native OTLP integration
- [Better Stack](https://neon.com/guides/betterstack-otel-neon): Monitor Neon with Better Stack using OpenTelemetry integration
- [New Relic](https://neon.com/guides/newrelic-otel-neon): Monitor Neon with New Relic using OpenTelemetry integration
- [Metrics and logs reference](https://neon.com/docs/reference/metrics-logs): Metrics and logs reference for monitoring

## Other monitoring tools

- [pgAdmin](https://neon.com/docs/introduction/monitor-pgadmin): Monitor your Neon Postgres database with pgAdmin
- [PgHero](https://neon.com/docs/introduction/monitor-pghero): Monitor your Neon Postgres database with PgHero

## Feedback and future improvements

At Neon, we understand that observability and monitoring are critical for running successful applications. If you've got feature requests or feedback about what you'd like to see in Neon monitoring and observability features, let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord.

---

# Source: https://neon.com/llms/introduction-plans.txt

# Neon plans

> The document outlines the various subscription plans available for Neon users, detailing the features and limitations of each plan to assist in selecting the appropriate service level for their needs.
## Source

- [Neon plans HTML](https://neon.com/docs/introduction/plans): The original HTML version of this documentation

Neon offers plans to support you at every stage—from your first prototype to production at scale. Start for free, then **pay only for what you use** as your needs grow.

> The plans described on this page are Neon's new usage-based pricing plans, introduced **August 14, 2025**. If you signed up for a paid plan earlier, you may still be on a [legacy plan](https://neon.com/docs/introduction/legacy-plans). To switch to a new plan, see [Change your plan](https://neon.com/docs/introduction/manage-billing#change-your-plan). Free plan users were automatically moved to the new Free plan described below, unless you signed up through **Azure Marketplace**.

**Important**: If you signed up with Neon through **Azure Marketplace**, you are still on a [Neon legacy plan](https://neon.com/docs/introduction/legacy-plans) — this applies to both Free and paid plans.

---

## Plan overview

Compare Neon's **Free**, **Launch**, and **Scale** plans.

**Coming soon** Building an agent platform?: For AI agent platforms that provision thousands of databases, Neon offers an **Agent Plan** with custom resource limits and credits for **your** free tier. [Learn more](https://neon.com/docs/introduction/agent-plan)

| Plan feature | **Free** | **Launch** | **Scale** |
| --- | --- | --- | --- |
| [Price](https://neon.com/docs/introduction/plans#price) | $0/month | $5/month minimum | $5/month minimum |
| [Who it's for](https://neon.com/docs/introduction/plans#who-its-for) | Prototypes and side projects | Startups and growing teams | Production-grade workloads and larger companies |
| [Projects](https://neon.com/docs/introduction/plans#projects) | 100 | 100 | 1,000 (can be increased on request) |
| [Branches](https://neon.com/docs/introduction/plans#branches) | 10/project | 10/project | 25/project |
| [Extra branches](https://neon.com/docs/introduction/plans#extra-branches) | — | $1.50/branch-month (prorated hourly) | $1.50/branch-month (prorated hourly) |
| [Compute](https://neon.com/docs/introduction/plans#compute) | 100 CU-hours/project | $0.106/CU-hour | $0.222/CU-hour |
| [Autoscaling](https://neon.com/docs/introduction/plans#autoscaling) | Up to 2 CU (2 vCPU / 8 GB RAM) | Up to 16 CU (16 vCPU / 64 GB RAM) | Up to 16 CU (fixed computes up to 56 vCPU / 224 GB RAM) |
| [Scale to zero](https://neon.com/docs/introduction/plans#scale-to-zero) | After 5 min | After 5 min, can be disabled | Configurable (1 minute to always on) |
| [Storage](https://neon.com/docs/introduction/plans#storage) | 0.5 GB/project | $0.35/GB-month | $0.35/GB-month |
| [Public network transfer](https://neon.com/docs/introduction/plans#public-network-transfer) | 5 GB included | 100 GB included, then $0.10/GB | 100 GB included, then $0.10/GB |
| [Monitoring](https://neon.com/docs/introduction/plans#monitoring) | 1 day | 3 days | 14 days |
| [Metrics/logs export](https://neon.com/docs/introduction/plans#metricslogs-export) | — | — | ✅ |
| [Instant restore](https://neon.com/docs/introduction/plans#instant-restore) | — | $0.20/GB-month | $0.20/GB-month |
| [Restore window](https://neon.com/docs/introduction/plans#restore-window) | 6 hours, up to 1 GB-month | Up to 7 days | Up to 30 days |
| [Private network transfer](https://neon.com/docs/introduction/plans#private-network-transfer) | — | — | $0.01/GB |
| [Compliance and security](https://neon.com/docs/introduction/plans#compliance-and-security) | — | Protected branches | SOC 2, ISO, GDPR, [HIPAA](https://neon.com/docs/security/hipaa), Protected branches, IP Allow, Private Networking |
| [Uptime SLA](https://neon.com/docs/introduction/plans#uptime-sla) | — | — | ✅ |
| [Support](https://neon.com/docs/introduction/plans#support) | Community | Standard (billing issues only) | Standard, Business, or Production |

## Plan features

This section describes the features listed in the [Plan overview](https://neon.com/docs/introduction/plans#plan-overview) table.

**Tip** Optimize your costs: Learn how to manage your Neon costs effectively with our [cost optimization guide](https://neon.com/docs/introduction/cost-optimization), which covers strategies for compute, storage, branches, and data transfer.

### ☑ Price

**Price** is the minimum monthly fee for the plan. This is the minimum amount you'll be billed if your usage is less than $5.

> If you sign up for a paid plan partway through the month, the minimum monthly fee is prorated from the sign-up date.

For **Launch** and **Scale**, the minimum monthly fee is $5. Usage for compute, storage, extra branches, and other features is billed at the published rates (see the [Plan overview](https://neon.com/docs/introduction/plans#plan-overview) table).

On the **Free** plan, there is no monthly cost. You get usage allowances for projects, branches, compute, storage, and more — for $0/month.

### ☑ Who it's for

- **Free** — Prototypes, side projects, and quick experiments. Includes 100 projects, 100 CU-hours/project, 0.5 GB storage per project, and 5 GB of egress. Upgrade if you need more resources or features.
- **Launch** — Startups and growing teams needing more resources, features, and flexibility. Usage-based pricing starts at $5/month.
- **Scale** — Production-grade workloads and large teams. Higher limits, advanced features, full support, compliance, additional security, and SLAs. Usage-based pricing starts at $5/month.

### ☑ Projects

A project is a container for your database environment. It includes your database, branches, compute resources, and more. Similar to a Git repository that contains code, artifacts, and branches, a project contains all your database resources. Learn more about [Neon's object hierarchy](https://neon.com/docs/manage/overview).

> For most use cases, create a project for each app or customer to isolate data and manage resources.

Included per plan:

- **Free**: 100 projects
- **Launch**: 100 projects
- **Scale**: 1,000 projects (soft limit — request more if needed via [support](https://neon.com/docs/introduction/support))

### ☑ Branches

Each Neon project is created with a [root branch](https://neon.com/docs/reference/glossary#root-branch), like the `main` branch in Git. Postgres objects — databases, schemas, tables, records, indexes, roles — are created on a branch. You can create [child branches](https://neon.com/docs/reference/glossary#child-branch) for testing, previews, or development.

Included per plan:

- **Free**: 10 branches/project
- **Launch**: 10 branches/project
- **Scale**: 25 branches/project

See [Extra branches](https://neon.com/docs/introduction/plans#extra-branches) for overage costs and [Storage](https://neon.com/docs/introduction/plans#storage) for how branch storage is billed.

> Projects can have multiple root branches, with limits based on your plan.
See [Branch types: Root branch](https://neon.com/docs/manage/branches#root-branch) for details.

### ☑ Extra branches

On paid plans, you can create extra child branches. Extra branches beyond your plan's branch allowance (outlined [above](https://neon.com/docs/introduction/plans#-branches)) are billed in **branch-months**, metered hourly.

1 extra branch × 1 month = 1 branch-month

Cost: **$1.50/branch-month** (~$0.002/hour).

Example: The Launch plan includes 10 branches/project. You create 2 extra branches for 5 hours each → 10 extra branch-hours × $0.002/hour = ~$0.02 total.

> Extra branches are not available on the Free plan. Delete branches or upgrade if you need more.

The maximum number of branches you can have per project:

- **Launch**: 5,000 branches/project
- **Scale**: 5,000 branches/project

If you need more, contact [Sales](https://neon.com/contact-sales).

### ☑ Compute

Compute usage depends on compute size and runtime.

- Measured in **CU-hours** (Compute Unit hours)
- 1 CU = 1 vCPU + 4 GB RAM
- RAM scales at a 4:1 ratio (4 GB RAM per 1 vCPU)
- Compute sizes up to 56 CU (plan-dependent)

| Compute Unit | vCPU | RAM |
| ------------ | ---- | ------ |
| .25 | .25 | 1 GB |
| .5 | .5 | 2 GB |
| 1 | 1 | 4 GB |
| 2 | 2 | 8 GB |
| 3 | 3 | 12 GB |
| ... | ... | ... |
| 56 | 56 | 224 GB |

Formula:

```
compute size × hours running = CU-hours
```

Examples:

- 0.25 CU for 4 hours = 1 CU-hour
- 2 CU for 3 hours = 6 CU-hours
- 8 CU for 2 hours = 16 CU-hours

**Free**: 100 CU-hours/project/month (enough to run a 0.25 CU compute in a project for 400 hours/month).

**Launch**: $0.106/CU-hour

**Scale**: $0.222/CU-hour

> All computes in your project count toward usage. Each branch has a read-write compute by default; [read replicas](https://neon.com/docs/reference/glossary#read-replica) add read-only computes.

#### Compute with autoscaling

Autoscaling changes compute size between a defined min and max. Estimate usage as:

```
average compute size × hours running = CU-hours
```

#### Compute with scale to zero

Scale to zero suspends computes after a period of inactivity, reducing compute usage and cost.

### ☑ Autoscaling

Adjusts compute size between defined limits based on demand.

- **Free**: Up to 2 CU (2 vCPU / 8 GB RAM)
- **Launch**: Up to 16 CU (16 vCPU / 64 GB RAM)
- **Scale**: Up to 16 CU for autoscaling; fixed sizes up to 56 CU (56 vCPU / 224 GB RAM)

> Autoscaling is capped at 16 CU. Scale supports fixed computes above 16 CU.

### ☑ Scale to zero

Suspends computes after inactivity.

- **Free**: 5 min inactivity — cannot disable
- **Launch**: 5 min inactivity — can disable
- **Scale**: Fully configurable — 1 minute to always on

### ☑ Storage

Storage is your data size, billed on actual usage in **GB-months**, measured hourly.

- **Launch**/**Scale plan storage cost**: $0.35/GB-month
- **[Root branches](https://neon.com/docs/reference/glossary#root-branch)**: billed on actual data size (_logical data size_)
- **[Child branches](https://neon.com/docs/reference/glossary#child-branch)**: billed on the minimum of the data changes since creation or the logical data size

When a child branch is created, it adds no storage initially. Once you make writes (inserts, updates, or deletes) to the child branch, the delta grows and counts toward storage. **Child branch storage is capped at your actual data size** — you're billed on accumulated changes or logical data size, whichever is lower. A worked example combining these compute and storage rates follows below.
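To make the formulas above concrete, here's a small sketch that estimates one month of Launch-plan charges from the published rates. All input figures are illustrative, not measured usage:

```bash
#!/usr/bin/env bash
# Illustrative Launch-plan monthly estimate using the published rates above.
CU_HOURS=120         # e.g., a 1 CU compute active for 120 hours
ROOT_GB=20           # root branch logical data size, in GB-months
CHILD_DELTA_GB=5     # accumulated changes on a child branch
CHILD_LOGICAL_GB=3   # child branch logical data size

# Child branch storage bills the lower of accumulated changes vs. logical size.
CHILD_BILLED=$(( CHILD_DELTA_GB < CHILD_LOGICAL_GB ? CHILD_DELTA_GB : CHILD_LOGICAL_GB ))

echo "compute:       \$$(echo "$CU_HOURS * 0.106" | bc)  ($CU_HOURS CU-hours at \$0.106/CU-hour)"
echo "root storage:  \$$(echo "$ROOT_GB * 0.35" | bc)  ($ROOT_GB GB-months at \$0.35/GB-month)"
echo "child storage: \$$(echo "$CHILD_BILLED * 0.35" | bc)  (capped at logical data size)"
```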
**Important** Manage child branches to control storage costs: Even though child branch storage is capped at your logical data size, it's still important to manage branches effectively to minimize storage costs:

- Set a [time to live](https://neon.com/docs/guides/branch-expiration) on development and preview branches
- Delete child branches when they're no longer needed
- For production workloads, use a [root branch](https://neon.com/docs/manage/branches#root-branch) instead — root branches are billed on your actual data size with no delta tracking overhead.

> **Free** plan users get 0.5 GB of storage per project.

### ☑ Public network transfer

Public network transfer (egress) is the total volume of data sent from your databases over the public internet during the monthly billing period.

> Public network transfer includes data sent via [logical replication](https://neon.com/docs/reference/glossary#logical-replication) to any destination, including other Neon databases.

Allowances per plan:

- **Free**: 5 GB/month
- **Launch**: 100 GB/month, then $0.10/GB
- **Scale**: 100 GB/month, then $0.10/GB

### ☑ Monitoring

View metrics such as RAM, CPU, connections, and database size in the **Monitoring** dashboard in the Neon Console. Retention of metrics data differs by plan:

- **Free**: 1 day
- **Launch**: 3 days
- **Scale**: 14 days

See [Monitoring dashboard](https://neon.com/docs/introduction/monitoring-page) for details.

### ☑ Metrics/logs export

Export metrics and Postgres logs to [Datadog](https://neon.com/docs/guides/datadog) or any [OTel-compatible platform](https://neon.com/docs/guides/opentelemetry). Available only on the **Scale** plan.

### ☑ Instant restore

Neon stores a change history to support instant restore.

- **Free**: No charge, 6-hour limit, capped at 1 GB of change history
- **Launch**: Up to 7 days, billed at $0.20/GB-month
- **Scale**: Up to 30 days, billed at $0.20/GB-month

You can change your [restore window](https://neon.com/docs/introduction/plans#restore-window) to control how much change history you retain. See [Instant restore](https://neon.com/docs/introduction/branch-restore) for details.

> The change history is a log of write operations in the form of Postgres [Write-Ahead Logs](https://neon.com/docs/reference/glossary#write-ahead-logging-wal).

### ☑ Restore window

How far back you can restore data. The maximum restore window per plan:

- **Free**: No charge, 6-hour limit, capped at 1 GB-month of changes
- **Launch**: Up to 7 days
- **Scale**: Up to 30 days

> The restore window defaults are 6 hours for Free plan projects and 1 day for paid plan projects.

The restore window is configurable. Shortening it can reduce [instant restore](https://neon.com/docs/introduction/plans#instant-restore) storage costs but limits how far back you can restore. See [Configure your restore window](https://neon.com/docs/manage/projects#configure-your-restore-window), or the API sketch at the end of this section.

### ☑ Private network transfer

Bi-directional data transfer to and from your databases over private networking.

Private networking is available on the **Scale** plan. It uses [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html) to bypass the public internet. Billed at $0.01/GB for network transferred to and from Neon. You'll only see this on your bill if you enable this feature.
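As mentioned under **Restore window** above, the restore window can also be changed programmatically. A minimal sketch using the Neon API, assuming `$NEON_API_KEY` is set, `your-project-id` is a placeholder, and the window is backed by the project's `history_retention_seconds` setting (check the [API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) for the current schema):

```bash
# Sketch: shrink the restore window to 6 hours (21,600 seconds).
curl --request PATCH \
  --url https://console.neon.tech/api/v2/projects/your-project-id \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{ "project": { "history_retention_seconds": 21600 } }'
```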
### ☑ Compliance and security

Compliance certifications available on **Scale**:

- SOC 2
- SOC 3
- ISO 27001
- ISO 27701
- GDPR
- CCPA
- HIPAA ([additional charge](https://neon.com/docs/security/hipaa))

Security features:

- [Protected branches](https://neon.com/docs/guides/protected-branches) — safeguards for critical data (available on **Launch** and **Scale**)
- [IP Allow](https://neon.com/docs/introduction/ip-allow) — restricts access to trusted IPs (available on **Scale**)
- [Private Networking](https://neon.com/docs/guides/neon-private-networking) — secure private connections via AWS PrivateLink (available on **Scale**)

### ☑ Uptime SLA

Guaranteed service availability is offered on the **Scale** plan. Contact [Sales](https://neon.com/contact-sales) for details.

### ☑ Support

Support level by plan:

- **Free**: Community support
- **Launch**: Standard support (billing issues only)
- **Scale**: Standard support, with Business or Production support plans available for an additional fee

See [Support](https://neon.com/docs/introduction/support) for details.

## Usage metrics

The following metrics may appear on your Neon invoice. Each metric represents a specific type of usage that contributes to your monthly bill.

| **Metric** | **Description** |
| --- | --- |
| **Compute (CU-hour)** | Total compute usage in **CU-hours** (Compute Unit hours). [Learn more](https://neon.com/docs/introduction/plans#compute). |
| **Extra branches (branch-month)** | Number of extra branches beyond your plan allowance, metered hourly. [Learn more](https://neon.com/docs/introduction/plans#extra-branches). |
| **Instant restore storage (GB-month)** | Storage used for **instant restore**, billed per GB-month. [Learn more](https://neon.com/docs/introduction/plans#instant-restore). |
| **Storage (root branches, GB-month)** | Data storage for root branches, billed per GB-month. [Learn more](https://neon.com/docs/introduction/plans#storage). |
| **Storage (child branches, GB-month)** | Data storage for child branches (minimum of delta or logical size), billed per GB-month. [Learn more](https://neon.com/docs/introduction/plans#storage). |
| **Public network transfer (GB)** | Outbound data transfer (egress) from your databases to the public internet. [Learn more](https://neon.com/docs/introduction/plans#public-network-transfer). |
| **Private network transfer (GB)** | Bi-directional data transfer to and from your databases over private networking (e.g., AWS PrivateLink). [Learn more](https://neon.com/docs/introduction/plans#private-network-transfer). |
| **Minimum spend** | Minimum monthly fee for the plan before usage-based charges. [Learn more](https://neon.com/docs/introduction/plans#price). |

## Usage-based cost examples

The following examples show what your monthly bill might look like on the **Launch** and **Scale** plans at different levels of usage. Each example includes compute, storage (root and child branches), and instant restore history. Your actual costs will depend on your specific workload, usage patterns, and configuration.

> **Note:** The "billable days" shown below refer to **active compute time** — the total hours your compute is actively running in a month.
> Computes can scale to zero when idle, so you may accumulate these hours in shorter periods of usage throughout the month rather than running continuously.

### Launch plan

- **Example 1 (less than $5 usage)**
  - Compute: ~10 CU-hours = 1 CU × 10 hours — **$1.06** _(10 CU-hours × $0.106/CU-hour)_
  - Root branch storage: 2 GB — **$0.70** _(2 GB × $0.35/GB-month)_
  - Child branch storage: 1 GB — **$0.35** _(1 GB × $0.35/GB-month)_
  - Instant restore history: 1 GB — **$0.20** _(1 GB × $0.20/GB-month)_

  **Subtotal:** **$2.31**
  **Minimum monthly fee:** **$5.00**
  **Amount due:** **$5.00**

- **Example 2**
  - Compute: ~120 CU-hours = 1 CU × 120 hours (about 5 billable days) — **$12.72** _(120 CU-hours × $0.106/CU-hour)_
  - Root branch storage: 20 GB — **$7.00** _(20 GB × $0.35/GB-month)_
  - Child branch storage: 5 GB — **$1.75** _(5 GB × $0.35/GB-month)_
  - Instant restore history: 10 GB — **$2.00** _(10 GB × $0.20/GB-month)_

  **Amount due:** **$23.47**

- **Example 3**
  - Compute: ~250 CU-hours = 2 CU × 125 hours (about 5.2 billable days) — **$26.50** _(250 CU-hours × $0.106/CU-hour)_
  - Root branch storage: 40 GB — **$14.00** _(40 GB × $0.35/GB-month)_
  - Child branch storage: 10 GB — **$3.50** _(10 GB × $0.35/GB-month)_
  - Instant restore history: 20 GB — **$4.00** _(20 GB × $0.20/GB-month)_

  **Amount due:** **$48.00**

---

### Scale plan

- **Example 1**
  - Compute: ~1,700 CU-hours = 4 CU × 425 hours (about 17.7 billable days) — **$377.40** _(1,700 CU-hours × $0.222/CU-hour)_
  - Root branch storage: 100 GB — **$35.00** _(100 GB × $0.35/GB-month)_
  - Child branch storage: 25 GB — **$8.75** _(25 GB × $0.35/GB-month)_
  - Instant restore history: 50 GB — **$10.00** _(50 GB × $0.20/GB-month)_

  **Amount due:** **$431.15**

- **Example 2**
  - Compute: ~2,600 CU-hours = 8 CU × 325 hours (about 13.5 billable days) — **$577.20** _(2,600 CU-hours × $0.222/CU-hour)_
  - Root branch storage: 150 GB — **$52.50** _(150 GB × $0.35/GB-month)_
  - Child branch storage: 40 GB — **$14.00** _(40 GB × $0.35/GB-month)_
  - Instant restore history: 75 GB — **$15.00** _(75 GB × $0.20/GB-month)_

  **Amount due:** **$658.70**

---

# Source: https://neon.com/llms/introduction-read-replicas.txt

# Neon Read Replicas

> The "Neon Read Replicas" documentation explains how to set up and manage read replicas in Neon, enabling users to distribute read queries across multiple database instances for improved performance and scalability.

## Source

- [Neon Read Replicas HTML](https://neon.com/docs/introduction/read-replicas): The original HTML version of this documentation

Neon read replicas are independent computes designed to perform read operations on the same data as your primary read-write compute. Neon's read replicas do not replicate or duplicate data. Instead, read requests are served from the same storage, as shown in the diagram below.

While your read-write queries are directed through your primary compute, read queries can be offloaded to one or more read replicas. You can instantly create read replicas for any branch in your Neon project and configure the amount of vCPU and memory allocated to each. Read replicas also support Neon's [Autoscaling](https://neon.com/docs/introduction/autoscaling) and [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) features, providing you with the same control over compute resources that you have with your primary compute.

## How are Neon read replicas different?
- **No additional storage is required**: With read replicas reading from the same source as your primary read-write compute, no additional storage is required to create a read replica. Data is neither duplicated nor replicated. Creating a read replica involves spinning up a read-only compute instance, which takes a few seconds.
- **You can create them almost instantly**: With no data replication required, you can create read replicas almost instantly.
- **They are cost-efficient**: With no additional storage or transfer of data, costs associated with storage and data transfer are avoided. Neon's read replicas also benefit from Neon's [Autoscaling](https://neon.com/docs/introduction/autoscaling) and [Scale to Zero](https://neon.com/docs/manage/computes#scale-to-zero-configuration) features, which allow you to manage compute usage.
- **They are instantly available**: You can allow read replicas to scale to zero when not in use without introducing lag. When a read replica starts up in response to a query, it is up to date with your primary read-write compute almost instantly.

## How do you create read replicas?

You can create read replicas using the Neon Console, [Neon CLI](https://neon.com/docs/reference/neon-cli), or [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), providing the flexibility required to integrate read replicas into your workflow or CI/CD processes. From the Neon Console, it's a simple **Add Read Replica** action on a branch.

**Note**: You can add read replicas to a branch as needed to accommodate your workload. The Free plan is limited to a maximum of 3 read replica computes per project.

From the CLI or API:

Tab: CLI

```bash
neon branches add-compute mybranch --type read_only
```

Tab: API

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/late-bar-27572981/endpoints \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $NEON_API_KEY" \
     --header 'Content-Type: application/json' \
     --data '
{
  "endpoint": {
    "type": "read_only",
    "branch_id": "br-young-fire-15282225"
  }
}
' | jq
```

For more details and how to connect to a read replica, see [Create and manage Read Replicas](https://neon.com/docs/guides/read-replica-guide).

## Read Replica architecture

The following diagram shows how your primary compute and read replicas send read requests to the same Pageserver, which is the component of the [Neon architecture](https://neon.com/docs/introduction/architecture-overview) that is responsible for serving read requests.

Neon read replicas are asynchronous, which means they are _eventually consistent_. As updates are made by your primary compute, Safekeepers store the data changes durably until they are processed by Pageservers. At the same time, Safekeepers keep read replica computes up to date with the most recent changes to maintain data consistency.

## Cross-region support

Neon only supports creating read replicas **in the same region** as your database. However, a cross-region replica setup can be achieved by creating a Neon project in a different region and replicating data to that project via [logical replication](https://neon.com/docs/guides/logical-replication-guide). For example, you can replicate data from a Neon project in a US region to a Neon project in a European region following our [Neon-to-Neon logical replication guide](https://neon.com/docs/guides/logical-replication-neon-to-neon). Read-only access to the replicated database can be managed at the application level.
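Once you've connected to a read replica's endpoint, two standard Postgres functions let you confirm that you're on a read-only compute and roughly gauge how far it trails the primary. A minimal sketch, where `REPLICA_URL` is an illustrative variable holding the replica endpoint's connection string:

```bash
# pg_is_in_recovery() returns true on a read-only compute.
# now() - pg_last_xact_replay_timestamp() approximates replay lag, i.e.,
# how long ago the most recently replayed transaction was committed.
psql "$REPLICA_URL" -c "
  SELECT pg_is_in_recovery() AS is_replica,
         now() - pg_last_xact_replay_timestamp() AS approx_replay_lag;
"
```

Note that on an idle system the lag figure grows simply because no new transactions are arriving to replay.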
## Use cases

Neon's read replicas have a number of applications:

- **Horizontal scaling**: Distribute read requests across replicas to improve performance and increase throughput.
- **Analytics queries**: Offload resource-intensive analytics and reporting workloads to reduce load on the primary compute.
- **Read-only access**: Grant read-only access to users or applications that don't require write permissions.

## Get started with read replicas

To get started with read replicas, refer to our guides:

- [Create and manage Read Replicas](https://neon.com/docs/guides/read-replica-guide): Learn how to create, connect to, configure, delete, and monitor read replicas
- [Scale your app with Read Replicas](https://neon.com/docs/guides/read-replica-integrations): Scale your app with read replicas using built-in framework support
- [Run analytics queries with Read Replicas](https://neon.com/docs/guides/read-replica-data-analysis): Leverage read replicas for running data-intensive analytics queries
- [Run ad-hoc queries with Read Replicas](https://neon.com/docs/guides/read-replica-adhoc-queries): Leverage read replicas for running ad-hoc queries
- [Provide read-only access with Read Replicas](https://neon.com/docs/guides/read-only-access-read-replicas): Leverage read replicas to provide read-only access to your data

---

# Source: https://neon.com/llms/introduction-regions.txt

# Regions

> The "Regions" documentation for Neon outlines how to configure and manage database regions, enabling users to optimize data locality and performance within the Neon platform.

## Source

- [Regions HTML](https://neon.com/docs/introduction/regions): The original HTML version of this documentation

Neon offers project deployment in multiple AWS and Azure regions. To minimize latency between your Neon database and application, we recommend choosing the region closest to your application server.

## AWS regions

- 🇺🇸 AWS US East (N. Virginia) — `aws-us-east-1`
- 🇺🇸 AWS US East (Ohio) — `aws-us-east-2`
- 🇺🇸 AWS US West (Oregon) — `aws-us-west-2`
- 🇩🇪 AWS Europe (Frankfurt) — `aws-eu-central-1`
- 🇬🇧 AWS Europe (London) — `aws-eu-west-2`
- 🇸🇬 AWS Asia Pacific (Singapore) — `aws-ap-southeast-1`
- 🇦🇺 AWS Asia Pacific (Sydney) — `aws-ap-southeast-2`
- 🇧🇷 AWS South America (São Paulo) — `aws-sa-east-1`

## Azure regions

- 🇺🇸 Azure East US 2 region (Virginia) — `azure-eastus2`
- 🇺🇸 Azure West US 3 region (Arizona) — `azure-westus3`
- 🇩🇪 Azure Germany West Central region (Frankfurt) — `azure-gwc`

**Note** Deployment options on Azure: For information about Neon deployment options on Azure, see [Neon on Azure](https://neon.com/docs/manage/azure).

## Select a region for your Neon project

You can select the region for your Neon project during project creation. See [Create a project](https://neon.com/docs/manage/projects#create-a-project). All branches and databases created in a Neon project are created in the region selected for the project.

**Note**: After you select a region for a Neon project, it cannot be changed for that project. To use a different region, create a new project in your desired region and [move your data to the new project](https://neon.com/docs/introduction/regions#move-project-data-to-a-new-region).

## NAT Gateway IP addresses

A NAT gateway has a public IP address that external systems see when private resources initiate outbound connections.
Neon uses 3 to 6 IP addresses per region for this outbound communication, corresponding to each availability zone in the region. To ensure proper connectivity for setups such as replicating data to Neon, you should allow access to all the NAT gateway IP addresses associated with your Neon project's region. If you are unsure of your project's region, you can find this information in the **Settings** widget on the **Project Dashboard**.

### AWS NAT Gateway IP Addresses

| Region | NAT Gateway IP Addresses |
| :--- | :--- |
| AWS US East (N. Virginia) — aws-us-east-1 | 3.222.32.110, 13.219.161.141, 23.23.0.232, 34.202.217.219, 34.233.170.231, 34.235.208.71, 34.239.66.10, 35.168.244.148, 52.73.235.120, 54.88.155.118, 54.160.39.3, 54.205.208.153 |
| AWS US East (Ohio) — aws-us-east-2 | 3.16.227.37, 3.128.6.252, 3.129.145.179, 3.139.195.115, 18.217.181.229, 52.15.165.218 |
| AWS US West (Oregon) — aws-us-west-2 | 35.83.202.11, 35.164.221.218, 44.235.241.217, 44.236.56.140, 52.32.22.241, 52.37.48.254, 54.213.57.47 |
| AWS Europe (Frankfurt) — aws-eu-central-1 | 3.66.63.165, 3.125.57.42, 3.125.234.79, 18.158.63.175, 18.194.181.241, 52.58.17.95 |
| AWS Europe (London) — aws-eu-west-2 | 3.10.42.8, 18.133.205.39, 52.56.191.86 |
| AWS Asia Pacific (Singapore) — aws-ap-southeast-1 | 54.254.50.26, 54.254.92.70, 54.255.161.23 |
| AWS Asia Pacific (Sydney) — aws-ap-southeast-2 | 13.55.152.144, 13.237.134.148, 54.153.185.87 |
| AWS South America (São Paulo) — aws-sa-east-1 | 18.230.1.215, 52.67.202.176, 54.232.117.41 |

### Azure NAT Gateway IP Addresses

| Region | NAT Gateway IP Addresses |
| :--- | :--- |
| Azure East US 2 (Virginia) — azure-eastus2 | 48.211.218.176, 48.211.218.194, 48.211.218.200 |
| Azure Germany West Central — azure-gwc | 20.52.100.129, 20.52.100.208, 20.52.187.150 |
| Azure West US 3 (Arizona) — azure-westus3 | 20.38.38.171, 20.168.0.32, 20.168.0.77 |

## Move project data to a new region

Moving a project to a different region requires moving your data using one of the following options:

### Option 1: Dump and restore

Using the dump and restore method involves the following steps:

1. Creating a new project in the desired region. For project creation instructions, see [Create a project](https://neon.com/docs/manage/projects#create-a-project).
2. Moving your data from the old project to the new project. For instructions, see [Import data from Postgres](https://neon.com/docs/import/migrate-from-postgres).

Moving data to a new Neon project using this method may take some time, depending on the size of your data. To prevent data loss during the import, consider disabling writes from your applications before initiating the import, and re-enable writes when the import is completed. Neon does not currently support disabling database writes; writes must be disabled at the application level.

### Option 2: Logical replication

As an alternative to the dump and restore method described above, you can use **logical replication** to replicate data from one Neon project to another for a near-zero downtime data migration. For more information, see [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon).
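With either option, the first step is creating a project in the target region, which you can script with the Neon API. A minimal sketch, assuming `$NEON_API_KEY` is set; the project name is illustrative, and `region_id` takes any ID from the lists above:

```bash
# Sketch: create a new project in AWS Europe (London) as a migration target.
curl --request POST \
  --url https://console.neon.tech/api/v2/projects \
  --header "Authorization: Bearer $NEON_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{ "project": { "name": "my-project-eu", "region_id": "aws-eu-west-2" } }'
```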
---

# Source: https://neon.com/llms/introduction-roadmap.txt

# Roadmap

> The "Roadmap" document outlines Neon's planned features and development timeline, detailing upcoming enhancements and strategic goals to guide users on future updates and improvements within the Neon platform.

## Source

- [Roadmap HTML](https://neon.com/docs/introduction/roadmap): The original HTML version of this documentation

Our development teams are focused on helping you ship faster with Postgres. This roadmap describes committed features we're working on right now, what we delivered recently, and a peek at what's on the horizon.

## What we're working on now 🛠️

Here's a snapshot of what we're working on now:

- **Postgres for AI agents**: [Replit partnered with Neon to back Replit Agents](https://neon.com/blog/looking-at-how-replit-agent-handles-databases), creating thousands of Postgres databases. We're continuing to build out our AI platform support capabilities. If you're building an AI agent platform and would like to integrate agent-ready Postgres, [connect with us](https://neon.com/agent-design-partner) — and check out our new [Neon for AI Agents](https://neon.com/use-cases/ai-agents) pricing plan.
- **Monitoring for billing**: Stay tuned for monitoring enhancements for our new usage-based pricing plans.
- **Large object storage**: We're working on adding support for large object storage.
- **Console support for data anonymization**: Neon supports the Postgres `anon` extension for [data anonymization](https://neon.com/docs/workflows/data-anonymization). We're bringing that support to the console.

Other features you would like to see? [Let us know](https://neon.com/docs/introduction/roadmap#share-your-thoughts).

## What's on the horizon 🌅

And here's an overview of what we're looking at next:

### Backups & restore

- Externally exported backups
- Integration with external backup systems
- Cross-region branch snapshots and exported backups
- Cross-cloud branch snapshots and exported backups

### Security

- Custom key support for encryption at rest
- Customer-managed key (CMK) support for application-level encryption
- Kerberos and LDAP authentication support
- Mutual TLS connections

### Clouds & regions

- AWS and Azure region expansion — let us know where you want to see Neon next: [Request a region](https://neon.com/docs/introduction/regions)
- Private Networking on Azure
- Google Cloud Platform (GCP) support (targeting late 2025)

### Storage

- Increased ingestion speeds
- Storage limits up to 200 TB per project

### Compute

- Fixed compute sizes up to 128 CUs
- Autoscaling up to 60 CUs

### Account security

- Role-based access control (RBAC) in the Neon Console
- RBAC roles extended into the database
- Audit logging of all database access

### Compliance

- PCI compliance

### High availability

- Cross-availability zone (AZ) highly available compute
- Cross-AZ, cross-region, and cross-cloud disaster recovery

## What we've shipped recently 🚢

- **Snapshot scheduling**: Automate snapshots with daily, weekly, or monthly schedules with configurable retention periods. Available on paid plans (excluding the Agent plan). [Learn more](https://neon.com/docs/guides/backup-restore).
- **Postgres 18 support (Preview)**: Postgres 18 is now available in preview. Create a new project and select Postgres 18 as your version. [Read the announcement](https://neon.com/blog/postgres-18).
- **AI Agent Plan**: An AI agent pricing plan for platforms that need to provision thousands of databases.
  [Learn more](https://neon.com/use-cases/ai-agents).
- **Usage-based pricing plans**: Our paid plans now start at just **$5/month**. Pay only for what you use. See [Neon's New Pricing, Explained: Usage-Based With a $5 Minimum](https://neon.com/blog/new-usage-based-pricing).
- **Branch expiration management**: Set a time-to-live (TTL) for Neon branches to simplify branch cleanup and management; see our [branch expiration guide](https://neon.com/docs/guides/branch-expiration).
- **Neon Local Connect**: An extension that makes it easy to work with Neon in your local development environment. Available for VS Code, Cursor, Windsurf, and other compatible editors. See [Neon Local Connect Extension](https://marketplace.visualstudio.com/items?itemName=databricks.neon-local-connect).
- **TanStack integration & new open-source tools**: Neon is now the official database partner of TanStack, with new open-source tools including a Vite Plugin for Neon to streamline fullstack development with TanStack, Vite, and Postgres.
- **Data API**: Neon's Data API feature, powered by PostgREST, is open to all Neon users. [Learn more](https://neon.com/docs/data-api/get-started).
- **Monitoring platform support**: Neon supports exporting metrics and Postgres logs to any OpenTelemetry-compatible backend, like New Relic. For details, refer to our [OpenTelemetry docs](https://neon.com/docs/guides/opentelemetry).
- **Claimable Databases & Neon Launchpad**: A new way for SaaS vendors to partner with Neon to offer instant Postgres databases. Let your users create Postgres databases — no registration required. [Learn more about Neon Launchpad](https://neon.com/docs/reference/neon-launchpad), and see our [Claimable database integration guide](https://neon.com/docs/workflows/claimable-database-integration).
- **Neon on Azure GA**: We've announced our general availability release on Azure with deeper Azure integration. [Read the announcement](https://neon.com/blog/azure-native-integration-ga).
- **Import Data Assistant**: The [Import Data Assistant](https://neon.com/docs/import/import-data-assistant) makes data import easier and faster.
- **Data anonymization**: We've added support for the PostgreSQL Anonymizer extension (`anon`). [Learn more](https://neon.com/docs/workflows/data-anonymization).
- **Neon serverless driver GA**: Our JavaScript/TypeScript serverless driver has reached version 1.0.0, bringing stronger SQL injection safeguards and better performance for serverless environments.
- **Neon Snapshots (Beta)**: Create and manage point-in-time snapshots of your database with our new unified Backup & Restore experience.
- **Inbound logical replication GA**: Neon now fully supports Postgres logical replication for inbound data (replicating data to Neon).
- **Postgres logs in Datadog (Beta)**: Stream and analyze your Postgres logs directly in your Datadog dashboard for better observability.
- **Support for [pg_search](https://neon.com/docs/extensions/pg_search)**: We partnered with [ParadeDB](https://www.paradedb.com/) to bring `pg_search` to Neon, delivering up to 1,000x faster full-text search inside Postgres on version 17. [Read the announcement](https://neon.com/blog/pgsearch-on-neon).
- **MACC-eligibility on Azure**: Neon Postgres purchases made through the Azure Marketplace are now counted toward your Microsoft Azure Consumption Commitment (MACC). [Learn more](https://neon.com/docs/introduction/billing-azure-marketplace#microsoft-azure-consumption-commitment-macc).
- **GitHub Secret Scanning**: Neon joined GitHub's Secret Scanning Partner Program to automatically detect and protect against exposed database credentials in public repositories.
- **HIPAA compliance**: You can [enable HIPAA compliance](https://neon.com/docs/security/hipaa) on any Neon project. Learn more about Neon's compliance milestones on our [Compliance page](https://neon.com/docs/security/compliance).
- **Scheduled updates**: You can now check for update notices and choose preferred update windows for Postgres updates, security patches, and Neon feature enhancements.
- **AWS São Paulo region**: Create projects in São Paulo (sa-east-1) for lower-latency access from South America and data residency within Brazil.
- **Vercel preview deployment support**: We added support for preview deployments with our **Vercel-Managed Integration**. See the [Vercel-Managed Integration guide](https://neon.com/docs/guides/vercel-managed-integration).
- **Manage your database from Cursor, Claude Desktop, or Claude Code**: Manage your Neon database directly from [Cursor](https://neon.com/guides/cursor-mcp-neon), [Claude Desktop](https://neon.com/guides/neon-mcp-server), or [Claude Code](https://neon.com/guides/claude-code-mcp-neon) using natural language, made possible by the [Neon Model Context Protocol (MCP) Server](https://github.com/neondatabase/mcp-server-neon).
- **Database Branching for Vercel Preview Environments**: We added support for **database branching for preview environments** to the **Vercel-Managed Integration**, available from the [Vercel Marketplace](https://vercel.com/marketplace).
- **AWS London region**: Create projects in London (eu-west-2) for lower-latency access from the UK and data residency within the United Kingdom.
- **Datadog integration GA**: Monitor your Neon database performance, resource utilization, and system health directly from Datadog's observability platform.
- **Save your connection details to [1Password](https://1password.com/)**: See [Save your connection details to 1Password](https://neon.com/docs/connect/connect-from-any-app#save-your-connection-details-to-1password).
- **Query monitoring in the console**: Monitor your [active queries](https://neon.com/docs/introduction/monitor-active-queries) and [query performance](https://neon.com/docs/introduction/monitor-query-performance) in the Neon Console.
- **Schema-only branches**: Create branches that include only your database schema—ideal for workflows involving sensitive data. This feature is now available in Early Access. [Learn more](https://neon.com/docs/guides/branching-schema-only).
- Support for the [postgres_fdw](https://neon.com/docs/extensions/postgres_fdw), [dblink](https://neon.com/docs/extensions/dblink), and [pg_repack](https://neon.com/docs/extensions/pg_repack) Postgres extensions.
- **"Instagres": No signup, instant Postgres**: An app that lets you generate a Postgres database URL almost instantly — no sign-up required. Give it a try at [https://www.instagres.com/](https://www.instagres.com/) or by running `npx instagres` in your terminal. See how fast Neon can spin up a Postgres database (AI agents love this, btw).
- **Neon Chat for Visual Studio Code**: This AI-powered assistant lets you chat with the latest Neon documentation without leaving your IDE. You can find it here: [Neon Postgres VS Code Extension](https://marketplace.visualstudio.com/items?itemName=buildwithlayer.neon-integration-expert-15j6N).
- **A GitHub Copilot extension**: This extension provides chat-based access to the latest Neon documentation directly from your repository. You can find it here: [Neon Postgres Copilot Extension](https://github.com/marketplace/neon-database)
- **Schema Diff API**: Neon now supports schema checks in agentic systems and deployment pipelines with the new schema diff API endpoint. Learn more about [Schema Diff](https://neon.com/docs/guides/schema-diff), which is also available via the console and CLI.
- **Neon Auth**: Sync user profiles from your auth provider to your database automatically. Includes OAuth provider management: enable or disable providers (Google, GitHub, Microsoft) and choose between shared Neon Auth credentials or custom client credentials. See [Neon Auth](https://neon.com/docs/guides/neon-auth) for details.
- **Postgres 17**: Now the default version for all newly created projects.
- **Support for [pg_cron](https://neon.com/docs/extensions/pg_cron)**: Schedule and manage periodic jobs directly in your Postgres database with this extension.
- **Neon on AgentStack**: Integrate Neon with AgentStack to enable AI agents to create ephemeral or long-lived Postgres instances for structured data storage. Explore the [Neon tool](https://github.com/AgentOps-AI/AgentStack/blob/main/agentstack/_tools/neon/__init__.py) in AgentStack's repo.
- **Neon on Composio**: Integrate Neon's API with LLMs and AI agents via Composio. Check out the [Composio integration](https://composio.dev/tools?search=neon).
- **Higher connection limits for autoscaling configurations**: The Postgres `max_connections` limit is now much higher. [Learn more](https://neon.com/docs/connect/connection-pooling#connection-limits-without-connection-pooling).
- **PgBouncer `default_pool_size` scaling**: The `default_pool_size` is now set according to your compute's `max_connections` setting. Previously, it was fixed at `64`. [Learn more](https://neon.com/docs/connect/connection-pooling#neon-pgbouncer-configuration-settings).
- **Neon Auth.js Adapter**: Simplify authentication with the new [Auth.js Neon Adapter](https://authjs.dev/getting-started/adapters/neon).

The following features shipped in 2024:

- **Larger computes**: Autoscaling now supports up to 16 vCPUs, and fixed compute sizes up to 56 vCPUs are available in Beta.
- **A Model Context Protocol (MCP) server for Neon**: We released an open-source MCP server, enabling AI agents to interact with Neon's API using natural language for tasks like database creation, SQL queries, and migrations. Read the blog post: [Let Claude Manage Your Neon Databases: Our MCP Server is Here](https://neon.com/blog/let-claude-manage-your-neon-databases-our-mcp-server-is-here).
- **Neon in the Azure Marketplace**: Neon is now available as an [Azure Native Integration](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/neon1722366567200.neon_serverless_postgres_azure_prod?tab=Overview), enabling developers to deploy Neon Postgres databases directly from the Azure portal. [Read the announcement](https://neon.com/blog/neon-is-now-available-as-an-azure-native-integration).
- **Archive storage**: To minimize storage costs, we now support automatic archiving of inactive branches (snapshots of your data) in cost-efficient object storage. For more about this feature, see [Branch archiving](https://neon.com/docs/guides/branch-archiving).
- **Organizations GA**: Organization Accounts are now generally available. Create a new organization, transfer over your projects, invite your team, and get started collaborating.
  Refer to our [Organizations docs](https://neon.com/docs/manage/organizations) to learn more.
- **Private Networking**: Private and secure network access to your compute resources without traversing public networks. Support for AWS PrivateLink is available in [Public Beta](https://neon.com/docs/guides/neon-private-networking).
- **Schema Diff GitHub Action**: This action leverages our [Schema Diff](https://neon.com/docs/guides/schema-diff) feature to compare database schemas across branches and post the differences as a comment on your pull request, streamlining the review process. It's also supported with our [Neon GitHub integration](https://neon.com/docs/guides/neon-github-integration).
- **Import Data Assistant**: Helps you migrate data to Neon from other Postgres databases. All you need to get started is a connection string for your existing database. See [Import Data Assistant](https://neon.com/docs/import/import-data-assistant) for instructions.
- **Python SDK**: Our new [Python SDK](https://pypi.org/project/neon-api/) wraps the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), allowing you to manage the Neon platform directly from your Python applications.
- **Neon in the Vercel Marketplace**: Neon is now a first-party native integration in the Vercel Marketplace. This integration lets Vercel users add Postgres to their projects and manage billing directly through Vercel. For details, see the [Vercel-Managed Integration guide](https://neon.com/docs/guides/vercel-managed-integration).
- **Archive storage on the Free plan**: Archive storage is now available on the Free plan for automatically archiving inactive branches. This feature helps minimize storage costs, allowing us to expand the Free plan even further. Learn more in [Branch Archiving](https://neon.com/docs/guides/branch-archiving).
- **Neon RLS**: This feature integrates with third-party **authentication providers** like Auth0, Clerk, and Stack Auth to bring authorization to your code base by leveraging Postgres [Row-Level Security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html). [Read the announcement](https://neon.com/blog/introducing-neon-authorize) and [check out the docs](https://neon.com/docs/guides/neon-rls).
- **Neon on Azure**: You can deploy Neon databases on Azure, starting with the East US 2 region. This marks the first milestone on our Azure roadmap—many more exciting updates are on the way, including deeper integrations with the Azure ecosystem. [Read the announcement](https://neon.com/blog/first-azure-region-available-in-neon).
- **End-to-end RAG pipelines in Postgres**: Our new and open source [pgrag](https://neon.com/docs/extensions/pgrag) extension lets you create end-to-end Retrieval-Augmented Generation (RAG) pipelines in Postgres. There's no need for additional programming languages or libraries. With the functions provided by `pgrag`, you can build a complete RAG pipeline directly within your SQL client.
- **Support for Analytics with pg_mooncake**: This new extension, brought to the community by [mooncake.dev](https://mooncake.dev/), introduces native columnstore tables with DuckDB execution for _fast_ analytics directly in Postgres. [Read the announcement](https://www.mooncake.dev/blog/pgmooncake-neon).
- **Datadog integration**: Scale users can now export Neon metrics to Datadog.
- **Deletion of backup branches created by restore operations**: To help minimize storage and keep your Neon project organized, we added support for deleting obsolete backup branches created by [restore](https://neon.com/docs/guides/branch-restore) operations. Previously, these backup branches could not be removed. [Learn more](https://neon.com/docs/guides/branch-restore#deleting-backup-branches).
- **Read Replicas on the Free plan**: Read Replicas are now available to all Neon users. [Read the announcement](https://neon.com/blog/create-read-replicas-in-the-free-plan)
- **ISO 27001 & ISO 27701 compliance**: These new certifications add to our growing list of compliance achievements. For more about Neon's compliance milestones, see [Compliance](https://neon.com/docs/security/compliance).
- **Increased limits for Neon projects**: We increased the number of projects included in all our paid plans: Launch (100 projects), Scale (1,000 projects), and Business (5,000 projects). More projects support use cases such as database-per-tenant and AI agents. [Read the announcement](https://neon.com/blog/thousands-of-neon-projects-now-included-in-your-pricing-plan).
- **A new Postgres toolkit for AI agents and test environments**: We recently announced an experimental release of the [@neondatabase/toolkit](https://github.com/neondatabase/toolkit). This toolkit lets you spin up a Postgres database in seconds and run SQL queries. It includes both the [Neon API Client](https://www.npmjs.com/package/@neondatabase/api-client) and the [Neon Serverless Driver](https://github.com/neondatabase/serverless), making it an excellent choice for AI agents that need to quickly set up an SQL database, or for test environments where manually deploying a new database isn't practical. To learn more, see [Why we built @neondatabase/toolkit](https://neon.com/blog/why-neondatabase-toolkit).
- **Postgres 17**: You can now run the very latest version of Postgres on Neon. [Read the announcement](https://neon.com/blog/postgres-17).
- **SQL Editor AI features**: We added AI features to the Neon SQL Editor, including SQL generation, AI-generated query names, and an AI assistant that will fix your queries. [Learn more](https://neon.com/docs/get-started/query-with-neon-sql-editor#ai-features).
- **Data migration support with inbound logical replication**: We've introduced inbound logical replication as the first step toward enabling seamless, low-downtime migrations from your current database provider to Neon. This feature allows you to use Neon as your development environment, taking advantage of developer-friendly tools like branching and our [GitHub integration](https://neon.com/docs/guides/neon-github-integration), even if you keep production with your existing provider. To get started, explore our guides for replicating data from AlloyDB, CloudSQL, and RDS. See [Replicate data to Neon](https://neon.com/docs/guides/logical-replication-guide#replicate-data-to-neon). Inbound logical replication also supports migrating data between Neon projects, useful for version, region, or account migrations. See [Replicate data from one Neon project to another](https://neon.com/docs/guides/logical-replication-neon-to-neon).

For more of the latest features and fixes, check our [Changelog](https://neon.com/docs/changelog), published weekly. Or watch for our Changelog email, also sent out weekly. You can also subscribe to updates using our [RSS feed](https://neon.com/docs/changelog/rss.xml).
## Join the Neon Early Access Program Want to try upcoming Neon features before they go live? Join our Early Access Program to preview new features, connect with the Neon team, and help shape the platform's future. Learn more and sign up on the [Early Access Program page](https://neon.com/docs/introduction/early-access). ## A note about timing We are as excited as you are to see new features in Neon, but their development, release, and timing are at our discretion. ## Share your thoughts As always, we are listening. If you see something you like, something you disagree with, or something you'd love for us to add, let us know in our Discord feedback channel. --- # Source: https://neon.com/llms/introduction-scale-to-zero.txt # Scale to Zero > The "Scale to Zero" documentation explains how Neon automatically scales down compute resources to zero during inactivity, optimizing resource usage and cost efficiency for database operations. ## Source - [Scale to Zero HTML](https://neon.com/docs/introduction/scale-to-zero): The original HTML version of this documentation Neon's _Scale to Zero_ feature suspends the Neon compute that runs your Postgres database after a period of inactivity, which minimizes costs for databases that aren't always active, such as development or test environment databases — and even production databases that aren't used 24/7. - When your database is inactive, it automatically scales to zero after 5 minutes. This means you pay only for active time instead of 24/7 compute usage. No manual intervention is required. - Once you query the database again, it reactivates automatically within a few hundred milliseconds. The diagram below illustrates the _Scale to Zero_ behavior alongside Neon's _Autoscaling_ feature. The compute usage line highlights an _inactive_ period, followed by a period where the compute is automatically suspended until it's accessed again. Neon compute scales to zero after an _inactive_ period of 5 minutes. For Neon Free plan users, this setting is fixed. Paid plan users can disable the scale-to-zero setting to maintain an always-active compute. You can enable or disable the scale-to-zero setting by editing your compute settings. For detailed instructions, see [Configuring scale to zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide). --- # Source: https://neon.com/llms/introduction-serverless.txt # Serverless > The "Serverless" documentation for Neon explains the architecture and functionality of Neon's serverless PostgreSQL database, detailing how it automatically scales resources based on demand without requiring manual intervention. ## Source - [Serverless HTML](https://neon.com/docs/introduction/serverless): The original HTML version of this documentation Neon takes the world's most loved database — Postgres — and delivers it as a serverless platform, enabling teams to ship reliable and scalable applications faster. Enabling serverless Postgres begins with Neon's [native decoupling of storage and compute](https://neon.com/blog/architecture-decisions-in-neon). By separating these components, Neon can dynamically scale up during periods of high activity and down to zero when idle. Developers can be hands-off instead of sizing infrastructure manually. 
This serverless nature also makes Neon databases highly agile and well-suited for use cases that require automatic creation, management, and deletion of a high number of Postgres databases, like [database-per-user architectures with thousands of tenants](https://neon.com/use-cases/database-per-tenant), as well as [database branching workflows](https://neon.com/branching) that accelerate development by enabling the management of dev/testing databases via CI/CD.

Read our [Architecture](https://neon.com/docs/introduction/architecture-overview) section for more information on how Neon is built.

## What "serverless" means to us

At Neon, we interpret "serverless" not only as the absence of servers to manage but as a set of principles and features designed to streamline your development process and optimize operational efficiency for your database. To us, serverless means:

- **Instant provisioning**: Neon allows you to spin up Postgres databases in seconds, eliminating the long setup times traditionally associated with database provisioning.
- **No server management**: You don't have to deal with the complexities of provisioning, maintaining, and administering servers. Neon handles it all, so you can focus on your application.
- **Autoscaling**: Compute resources automatically scale up or down based on real-time demand, ensuring optimal performance without manual intervention. No restarts are required.
- **Usage-based pricing**: Your costs are directly tied to the resources your workload consumes—both compute and storage. There's no need to over-provision or pay for idle capacity.
- **Built-in availability and fault tolerance**: We've designed our architecture for high availability and resilience, ensuring your data is safe and your applications are always accessible.
- **Focus on business logic**: With the heavy lifting of infrastructure management handled by Neon, you can dedicate your time and effort to writing code and delivering value to your users.

## To us, serverless does not mean…

_That Neon only works with serverless architectures_. Neon is fully compatible with the entire PostgreSQL ecosystem. Whether you're using [Django](https://neon.com/docs/guides/django), [Rails](https://neon.com/docs/guides/ruby-on-rails), or even a bash script in your basement, if it works with Postgres, it works with Neon (see the short example at the end of this section).

_That you have to pay per query_. Your charges are based on compute and storage usage, not the number of queries. For example, you could run billions of queries for as little as $19 per month if they fit within the resources allotted in the [Launch plan](https://neon.com/docs/introduction/plans#launch). The CPU allowance is ample for running sites 24/7 with low CPU requirements.

_That you'll get unpredictable costs due to traffic spikes_. We provide transparency into your potential costs. You always set a maximum autoscaling limit to avoid unpredictable bills, and you can always [check your consumption](https://neon.com/docs/introduction/monitor-usage). We send you notifications if your storage usage grows quickly.
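To make the Postgres-compatibility point concrete, here is a minimal sketch: connecting with stock `psql` using an ordinary Postgres connection string. The role, password, host, and database name below are placeholders; copy the real connection string from your Neon Console.

```bash
# Any standard Postgres client works; no Neon-specific driver is required.
# The connection string below is a placeholder example.
psql "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require"
```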
## Learn more

- [Autoscaling](https://neon.com/docs/introduction/autoscaling)
- [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero)
- [Plans and billing](https://neon.com/docs/introduction/about-billing)
- [Database-per-tenant use cases](https://neon.com/use-cases/database-per-tenant)
- [Variable workload use cases](https://neon.com/variable-load)
- [Postgres for SaaS use cases](https://neon.com/use-cases/postgres-for-saas)

---

# Source: https://neon.com/llms/introduction-status.txt

# Neon status

> The "Neon status" document outlines the current operational status, limitations, and known issues of the Neon database platform, helping users understand its capabilities and any potential constraints they might encounter.

## Source

- [Neon status HTML](https://neon.com/docs/introduction/status): The original HTML version of this documentation

To stay informed about Neon's status, we provide a dedicated status page for each region that Neon supports. To view the Neon Status page, navigate to [https://neonstatus.com/](https://neonstatus.com/). Remember to bookmark the Neon Status page for easy access.

For status information applicable to your Neon project, monitor the status page for the region where your Neon project resides. If you don't know the region, you can find it on the **Project Dashboard** in the Neon Console.

Status pages provide status for:

- Database Connectivity
- Database Operations
- Console and API Requests

**Note**: For platform maintenance notices, you can monitor or subscribe to your region's [status page](https://neon.com/docs/manage/platform-maintenance) to stay informed about upcoming platform maintenance. See [Subscribing to Neon status pages](https://neon.com/docs/introduction/status#subscribing-to-neon-status-pages) below. Neon also applies regular updates to your project's computes, but these updates are not posted to regional status pages since they are specific to your Neon project. To stay informed about these updates, watch for update notices in your project's settings in the Neon Console. See [Updates](https://neon.com/docs/manage/updates) for details.

## Subscribing to Neon status pages

Follow the instructions from the **Subscribe to updates** link on a regional status page to subscribe to updates via email, RSS, or Slack.

## Access Neon status via API

The [Neon status page](https://neonstatus.com), powered by [incident.io](https://incident.io/), is also accessible via API. You can use these endpoints to check the status of the Neon Console and Neon-supported regions.

> For more about the incident.io API that supports the Neon status API, including incident.io API rate limits, refer to the [incident.io API docs](https://api-docs.incident.io/).
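For example, you can fetch the summary for the Neon Console and all regions with a single unauthenticated request (these are public status pages, so no API key should be needed; `jq` is optional and only formats the output):

```bash
# Fetch the status summary for the Neon Console and all regions.
curl -s https://neonstatus.com/api/v1/summary | jq
```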
Endpoint responses include the following attributes:

```json
{
  "page_title": "AWS - Europe (Frankfurt) - eu-central-1",
  "page_url": "https://neonstatus.com/aws-europe-frankfurt",
  "ongoing_incidents": [],
  "in_progress_maintenances": [],
  "scheduled_maintenances": []
}
```

### Neon status endpoints

| Region | Endpoint |
| --- | --- |
| Neon Console | https://neonstatus.com/console/api/v1/summary |
| Neon Console and all regions | https://neonstatus.com/api/v1/summary |
| AWS - Asia Pacific (Singapore) | https://neonstatus.com/aws-asia-pacific-singapore/api/v1/summary |
| AWS - Asia Pacific (Sydney) | https://neonstatus.com/aws-asia-pacific-sydney/api/v1/summary |
| AWS - Europe (Frankfurt) | https://neonstatus.com/aws-europe-frankfurt/api/v1/summary |
| AWS - Europe (London) | https://neonstatus.com/aws-europe-london/api/v1/summary |
| AWS - South America (São Paulo) | https://neonstatus.com/aws-south-america-sao-paulo/api/v1/summary |
| AWS - US East (N. Virginia) | https://neonstatus.com/aws-us-east-n-virginia/api/v1/summary |
| AWS - US East (Ohio) | https://neonstatus.com/aws-us-east-ohio/api/v1/summary |
| AWS - US West (Oregon) | https://neonstatus.com/aws-us-west-oregon/api/v1/summary |
| Azure - Germany West Central (Frankfurt) | https://neonstatus.com/azure-germanywestcentral-frankfurt/api/v1/summary |
| Azure - East US 2 (Virginia) | https://neonstatus.com/azure-east-us-2-virginia-eastus-2/api/v1/summary |
| Azure - West US 3 (Arizona) | https://neonstatus.com/azure-westus3-arizona/api/v1/summary |

---

# Source: https://neon.com/llms/introduction-support.txt

# Support

> The "Support" document outlines the available support channels and resources for Neon users, detailing how to access assistance and resolve issues related to Neon's database services.

## Source

- [Support HTML](https://neon.com/docs/introduction/support): The original HTML version of this documentation

This page outlines Neon's support plans, available channels, and policies. To learn how to access support, please refer to the [Support channels](https://neon.com/docs/introduction/support#support-channels) section. Identify the channels available to you based on your plan and follow the links to navigate to the relevant information.

## Support plans

Neon's support plans are mapped to [Neon Pricing Plans](https://neon.com/docs/introduction/plans), as outlined in the following table.

| Neon pricing plan | Support plan options |
| :---------------- | :------------------- |
| Free plan | Community support |
| Launch plan | Standard support (billing issues only) |
| Scale plan | • Standard support • Business support (additional fee) • Production support (additional fee) |

Scale plan customers can upgrade to **Business** or **Production** support plans for an additional fee. In addition to all Standard support plan options, these plans offer a [response time SLA](https://neon.com/docs/introduction/support#response-time-sla).

## Support channels

The support channels you can access differ according to your [Support Plan](https://neon.com/docs/introduction/support#support-plans).
| Support channels | Community support | Standard support (billing issues only) | Standard support | Business support | Production support |
| :--- | :---: | :---: | :---: | :---: | :---: |
| [Neon Discord Server](https://neon.com/docs/introduction/support#neon-discord-server) (not an official channel) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Neon AI Chat Assistance](https://neon.com/docs/introduction/support#neon-ai-chat-assistance) (not an official channel) | ✓ | ✓ | ✓ | ✓ | ✓ |
| [Support tickets](https://neon.com/docs/introduction/support#support-tickets) | - | \* | ✓ | ✓ | ✓ |
| [Slack channel](https://neon.com/docs/introduction/support#slack-channel) | - | - | \*\* | \*\* | \*\* |
| [Dedicated Support Engineer](https://neon.com/docs/introduction/support#dedicated-support-engineer) | - | - | \*\* | \*\* | \*\* |
| [SLA](https://neon.com/docs/introduction/support#response-time-sla) | - | - | - | ✓ | ✓ |
\* [Support tickets](https://neon.com/docs/introduction/support#support-tickets) are limited to billing-related issues under this support plan; Neon Launch plan users can only create support tickets for billing issues.
    \*\* [Slack channels](https://neon.com/docs/introduction/support#slack-channel) and [Dedicated Support Engineers](https://neon.com/docs/introduction/support#dedicated-support-engineer) are available for an additional fee for Standard, Business, and Production support plans.
### Neon Discord Server

All Neon users have access to the [Neon Discord Server](https://discord.gg/92vNTzKDGp), where you can ask questions or see what others are doing with Neon. You will find Neon users and members of the Neon team actively engaged.

**Important**: The [Neon Discord Server](https://discord.gg/92vNTzKDGp) is not an official Neon Support channel.

### Neon AI chat assistance

Neon AI chat assistance is available to all Neon users. You can access it from these locations:

- **Neon Console**: Select the **Get help** option from the help menu (`?`) in the Neon Console.
- **Neon documentation**: Toggle **Ask Neon AI** on the [Neon documentation](https://neon.com/docs/introduction) site
- **Discord**: Join the **#gpt-help** channel on the [Neon Discord server](https://discord.gg/92vNTzKDGp)

Neon AI chat assistants are updated regularly and built on various sources, including the Neon documentation, the Neon website, the Neon API, and Neon GitHub repositories.

**Important**: Neon AI chat is not an official Neon Support channel.

### Support tickets

Paying users can raise a support ticket in the Neon Console via the Neon AI chat assistant, by asking it to create a support ticket.

- **Launch** plan users can open support tickets for **billing-related issues only**
- **Scale** plan users can open a support ticket for any Neon issue
- **Business** and **Production** plan users can open a support ticket for any Neon issue with SLA response times

Select **Get help** from the **?** menu at the top of the Neon Console to open the AI chat assistant. Ask your question or describe your issue. If the assistant is unable to resolve the problem, ask it to create a support ticket.

### Slack channel

[Slack Connect](https://slack.com/intl/en-ie/connect) channels are available for an additional fee for Standard, Business, and Production support plan customers. To learn more, [contact our sales team](https://neon.com/contact-sales).

### Dedicated Support Engineer

A dedicated engineer can develop in-depth knowledge of your systems, leading to more efficient issue resolution. This service is available for an additional fee for Standard, Business, and Production support plan customers. To learn more, [contact our sales team](https://neon.com/contact-sales).

### Response time SLA

A response time SLA is available to Neon [Scale plan](https://neon.com/docs/introduction/plans) customers who have purchased a **Business** or **Production** support plan. If you are interested in purchasing one of these plans, [please reach out to our sales team](https://neon.com/contact-sales).

#### Response times

Neon aims to respond to all **Business** and **Production** support plan requests in a timely manner and as soon as practically possible. Customers are prioritized based on their support plan and the [Severity](https://neon.com/docs/introduction/support#severity-levels) of their issue. The table below outlines Neon's response time guidelines for Business and Production support plans. These times relate to the time it takes Neon to respond to the Customer's initial request. This guideline only applies when submitting a support ticket through the Neon Console.
| Severity Level | Business support plan | Production support plan |
| -------------- | --------------------- | ----------------------- |
| Severity 1 | Within 4 hours | Within 1 hour |
| Severity 2 | Within 1 business day | Within 4 hours |
| Severity 3 | Within 1 business day | Within 1 business day |
| Severity 4 | Within 1 business day | Within 1 business day |

#### Severity levels

When the Customer submits an issue (with or without specifying a starting severity), Neon will reasonably assess its severity according to the appropriate severity levels defined below. Neon reserves the right to set, upgrade, and downgrade severities of support tickets, on a case-by-case basis, considering any available mitigations, workarounds, and timely cooperation from Customers. Neon will explain the reasoning to the Customer and will resolve any disagreement regarding the severity as soon as is reasonably practicable.

**High severity levels should not be used for low-impact issues or general questions!**

An explanation of each severity level is provided below.

- **Severity 1**: Production system is down or severely impacted such that routine operation is impossible
- **Severity 2**: Production issue where the system is functioning but in degraded or restricted capacity
- **Severity 3**: Issue where minor functionality is impacted or a development issue occurs
- **Severity 4**: Request for information or feature request with no impact on business operations

## General support policy

Neon provides Support for eligible plans under the terms of this Support Policy as long as the Customer maintains a current subscription to one of the following Neon plans: Launch or Scale. For more information, see [plans](https://neon.com/docs/introduction/plans).

"Support" means the services described in this Support Policy and does not include one-time services or other services not specified in this Support Policy, such as training, consulting, or custom development. Support for Free plan users is provided through [Discord](https://neon.com/discord). See Neon [plans](https://neon.com/docs/introduction/plans) and [pricing](https://neon.com/pricing) for more information about our plans.

Unless described otherwise, defined terms mentioned in this policy shall have the same meaning as defined in our [terms of service](https://neon.com/terms-of-service). We provide updates regarding any disruption in our Services on our [status page](https://neonstatus.com/). Please check this source first before seeking support.

### Issue resolution

Neon will make commercially reasonable efforts to resolve any Issues submitted by customers on eligible plans. Such efforts may (at our discretion) include helping with diagnosis, suggesting workarounds, or changing the Product in a new release. An "Issue" is a material and verifiable failure of the Product to conform to its Documentation. Support will not be provided for the following: (1) use of the Products in a manner inconsistent with the applicable Documentation, (2) modifications to the Products not provided by or approved in writing by Neon, (3) use of the Products with third-party software not provided or approved by Neon. The Customer shall not submit Issues arising from any products other than the Products or otherwise use Support for unsupported products; this includes issues caused by third-party integrations.
### Billing issues

If you, the Customer, believe that your invoice or billing receipt is incorrect, we strongly encourage you to contact our Support team rather than filing a dispute with your card provider. Should a payment dispute be filed before getting in touch with us, we are limited in terms of the action we can take to resolve the matter. Once a dispute has been made with the card provider, the account associated with it and all deployments under it may be suspended until it has been resolved.

### Etiquette

Regardless of the method or location through which Neon provides Support, communication should be professional and respectful. Any communication that is deemed objectionable by Neon staff is not tolerated. This includes, but is not limited to, any communication that is abusive or contains profane language. Neon reserves the right to terminate Support Services in the event of any such objectionable communication.

### Customer responsibilities

To ensure efficient resolution of issues, customers are expected to (1) provide detailed information about the issue, (2) cooperate with the Support team during troubleshooting, and (3) utilize available self-service resources for basic inquiries.

### Changes to the support policy

We reserve the right to modify, amend, or update this Support Policy, including the types of support offered, support hours, response times, and support plans, at any time and at our sole discretion. Any changes to the Support Policy will be effective immediately upon posting a revised version of this Support Policy. Continued use of our services after such modifications will constitute acknowledgment and acceptance of the changes.

## Legacy enterprise support

Customers on a legacy [Enterprise plan](https://neon.com/docs/introduction/legacy-plans#enterprise-plan-legacy) can view support plan details in the following dropdown.

Details: **Enterprise support (legacy)**

### General support policy

Neon provides Support for Enterprise plans under the terms of this Support Policy as long as the Customer maintains a current subscription to an Enterprise plan. For more information, see [legacy plans](https://neon.com/docs/introduction/legacy-plans#enterprise-plan-legacy). "Support" means the services described in this Support Policy and does not include one-time services or other services not specified in this Support Policy, such as training, consulting, or custom development.

Unless described otherwise, defined terms mentioned in this policy shall have the same meaning as defined in our [terms of service](https://neon.com/terms-of-service). We provide updates regarding any disruption in our Services on our [status page](https://neonstatus.com/). Please check this source first before seeking support.

### Issue resolution

Neon will make commercially reasonable efforts to resolve any Issues submitted by Enterprise customers. Such efforts may (at our discretion) include helping with diagnosis, suggesting workarounds, or changing the Product in a new release. An "Issue" is a material and verifiable failure of the Product to conform to its Documentation. Support will not be provided for the following: (1) use of the Products in a manner inconsistent with the applicable Documentation, (2) modifications to the Products not provided by or approved in writing by Neon, (3) use of the Products with third-party software not provided or approved by Neon.
The Customer shall not submit Issues arising from any products other than the Products or otherwise use Support for unsupported products; this includes issues caused by third-party integrations.

### Billing issues

If you, the Customer, believe that your invoice or billing receipt is incorrect, we strongly encourage you to contact our Support team rather than filing a dispute with your card provider. Should a payment dispute be filed before getting in touch with us, we are limited in terms of the action we can take to resolve the matter. Once a dispute has been made with the card provider, the account associated with it and all deployments under it may be suspended until it has been resolved.

### Response times

Neon aims to respond to all Enterprise subscription requests in a timely manner and as soon as practically possible. Enterprise customers are prioritized based on the Severity of their issue and their specific Enterprise support tier (Standard, Gold, or Platinum). Response times are outlined in the target response time guidelines below.

#### Enterprise target response times

The table below outlines Neon's guidelines for the various support tiers of our Enterprise support plan. These times relate to the time it takes Neon to respond to the Customer's initial request. This guideline only applies when submitting a support ticket through the Neon Console.

| Severity Level | Enterprise Standard | Enterprise Gold | Enterprise Platinum |
| --------------------- | ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
| Severity 1 (Critical) | < 2 hours (during Normal Business Hours) | < 1 hour | < 1 hour |
| Severity 2 (High) | < 2 days (during Normal Business Hours) | < 1 day | < 4 hours |
| Severity 3 (Normal) | < 3 days (during Normal Business Hours) | < 3 days (during Normal Business Hours) | < 3 days (during Normal Business Hours) |
| Severity 4 (Low) | < 3 days (during Normal Business Hours) | < 3 days (during Normal Business Hours) | < 3 days (during Normal Business Hours) |

#### Severity levels

When the Customer submits an issue (with or without specifying a starting severity), Neon will reasonably assess its severity according to the appropriate severity levels defined below. Neon reserves the right to set, upgrade, and downgrade severities of support tickets, on a case-by-case basis, considering any available mitigations, workarounds, and timely cooperation from Customers. Neon will explain the reasoning to the Customer and will resolve any disagreement regarding the severity as soon as is reasonably practicable.

**Critical and High-priority levels should not be used for low-impact issues or general questions!**

A detailed explanation of each severity level, including several examples, is provided below.

##### Severity 1 (Critical)

Catastrophic problems in the Customer's production system leading to loss of service or impact on the Customer's business:

- Unavailability of the service
- Security breaches that compromise the confidentiality, integrity, or availability of the database or its data

**Note**: If Critical is selected during case creation, the customer will be asked to provide in-depth details on the business impact the issue has caused.
Examples:

- A complete outage of the service provided by Neon
- Security breaches
- Error impacting the project as a whole (all endpoints/databases affected)
- Error impacting multiple projects
- Endpoint/branch/database unreachable
- Data corruption/data loss

##### Severity 2 (High)

A high-impact problem in a customer's production systems. Essential operations are seriously disrupted, but a workaround exists that allows for continued essential operations:

- Non-essential modifications to configuration, like adjusting database parameters or table schema
- Minor performance concerns that have minimal impact on database usability
- Minor issues related to application integrations, such as minor API connectivity problems
- Small-scale challenges with data import/export, data transformation, or data loading processes

Examples:

- Partial outage of the service provided by Neon: service usable, but key feature unusable, e.g.:
  - Cannot create a new branch
  - Cannot execute a branch restore
  - Cannot perform point-in-time recovery (PITR)
- Any use case that would require a high load of manual work on the customer side to mitigate an issue on our end
- Any use case which massively and negatively affects the customer's business

##### Severity 3 (Normal)

A medium-impact problem on a production or non-production system that involves:

- Partial or limited loss of non-critical functionality
- A usage problem that involves no loss in functionality
- Customers can continue essential operations

Normal problems also include issues with non-production systems, such as test and development systems.

Examples:

- RCA for past outages or incidents (no disruption of the service at the moment)
- Sporadic connection failures/timeouts/retries
- Cannot connect with a particular third-party framework or tool (but can connect generally speaking)
- Any use case which has a minor impact on the customer's business
- Poorly performing queries/ingestion
- Billing issues

##### Severity 4 (Low)

A general usage question; there is no impact on the product's quality, performance, or functionality in a production or non-production system:

- Any request for information, enhancement, or documentation clarification regarding the platform

Examples:

- Feature requests/feature enablement
- General questions ("active time," "how to back up a DB," "how to ingest data") and feedback
- Any use case that has no impact on the customer's business at all

---

# Source: https://neon.com/llms/introduction.txt

# Neon documentation

> The Neon Docs offer comprehensive guidance on utilizing Neon's cloud-native, serverless PostgreSQL database, detailing setup, configuration, and management processes to optimize database operations.

## Source

- [Neon documentation HTML](https://neon.com/docs/introduction): The original HTML version of this documentation

Neon is a serverless Postgres platform designed to help you build reliable and scalable applications faster. We separate compute and storage to offer modern developer features such as **autoscaling**, **branching**, **instant restore**, and more.
Get started today with our [Free plan](https://console.neon.tech).

## Get started

- [Learn the basics](https://neon.com/docs/get-started/signing-up): Sign up for free and learn the basics of database branching with Neon
- [Connect Neon to your stack](https://neon.com/docs/get-started/connect-neon): Connect Neon to the platform, language, ORM and other tools in your tech stack
- [Branching workflows](https://neon.com/docs/get-started/workflow-primer): Add branching to your CI/CD automation
- [Get ready for production](https://neon.com/docs/get-started/production-checklist): Key features to get you production ready

## Quickstarts

- [Drizzle](https://neon.com/docs/guides/drizzle): Learn how to use Drizzle ORM with your Neon Postgres database (Drizzle docs)
- [React](https://neon.com/docs/guides/react): Build powerful and interactive user interfaces with React using Neon as your database
- [Node.js](https://neon.com/docs/guides/node): Connect a Node.js application to Neon
- [Neon serverless driver](https://neon.com/docs/serverless/serverless-driver): Connect with the Neon serverless driver
- [.NET](https://neon.com/docs/guides/dotnet-npgsql): Connect a .NET (C#) application to Neon
- [Next.js](https://neon.com/docs/guides/nextjs): Connect a Next.js application to Neon
- [Nuxt](https://neon.com/docs/guides/nuxt): Connect a Nuxt application to Neon
- [Astro](https://neon.com/docs/guides/astro): Connect an Astro site or app to Neon
- [Django](https://neon.com/docs/guides/django): Connect a Django application to Neon
- [Entity Framework](https://neon.com/docs/guides/dotnet-entity-framework): Connect a .NET Entity Framework application to Neon
- [Elixir](https://neon.com/docs/guides/elixir-ecto): Connect from Elixir with Ecto to Neon
- [Go](https://neon.com/docs/guides/go): Connect a Go application to Neon
- [Java](https://neon.com/docs/guides/java): Connect a Java application to Neon
- [Laravel](https://neon.com/docs/guides/laravel): Connect from Laravel to Neon
- [Python](https://neon.com/docs/guides/python): Connect a Python application to Neon
- [Quarkus](https://neon.com/docs/guides/quarkus-jdbc): Connect Quarkus (JDBC) to Neon
- [Quarkus](https://neon.com/docs/guides/quarkus-reactive): Connect Quarkus (Reactive) to Neon
- [Rails](https://neon.com/docs/guides/ruby-on-rails): Connect a Rails application to Neon
- [Remix](https://neon.com/docs/guides/remix): Connect a Remix application to Neon
- [Rust](https://neon.com/docs/guides/rust): Connect a Rust application to Neon
- [SQLAlchemy](https://neon.com/docs/guides/sqlalchemy): Connect an SQLAlchemy application to Neon
- [Svelte](https://neon.com/docs/guides/sveltekit): Connect a SvelteKit application to Neon
- [Symfony](https://neon.com/docs/guides/symfony): Connect from Symfony with Doctrine to Neon

## Explore the Neon Docs

- [Connect](https://neon.com/docs/connect/connect-intro): Learn how to connect to a serverless Postgres database from any application
- [Import data](https://neon.com/docs/import/import-intro): Load your data into a Postgres database hosted by Neon
- [AI & embeddings](https://neon.com/docs/ai/ai-intro): Build and scale transformative LLM applications with vector storage and similarity search
- [Branching](https://neon.com/docs/guides/branching-intro): Learn to optimize development workflows with database branching
- [Postgres extensions](https://neon.com/docs/extensions/pg-extensions): Level up your database with our many supported Postgres extensions
- [Neon CLI Reference](https://neon.com/docs/reference/neon-cli): Manage Neon directly from the terminal with the Neon CLI

## Join the community

If you have questions about Neon or Postgres, reach out to Neon community members and developers on our [Discord Server](https://discord.com/invite/92vNTzKDGp).

---

# Source: https://neon.com/llms/local-neon-local-connect.txt

# Neon Local Connect Extension

> The document details the setup and usage of the Neon Local Connect extension, enabling users to connect and manage Neon databases directly within VS Code, Cursor, Windsurf, and other compatible editors.

## Source

- [Neon Local Connect Extension HTML](https://neon.com/docs/local/neon-local-connect): The original HTML version of this documentation

The Neon Local Connect extension lets you connect to any Neon branch using a familiar localhost connection string. Available for VS Code, Cursor, Windsurf, and other VS Code-compatible editors, the underlying Neon Local service handles the routing, authentication, and branch management behind the scenes. Your app connects to `localhost:5432` like a local Postgres instance, but Neon Local routes traffic to your actual Neon branch in the cloud.

You can use this connection string in your app:

```env
DATABASE_URL="postgres://neon:npg@localhost:5432/"
```

Switch branches, and your app keeps using the same connection string.

## What you can do

With the Neon Local Connect extension, you can:

- Instantly connect to any Neon branch using a single, static localhost connection string
- Create, switch, or reset branches directly from the extension panel
- Automate ephemeral branch creation and cleanup, no scripts required
- Browse your database schema with an intuitive tree view showing databases, schemas, tables, columns, and relationships
- Write and execute SQL queries with syntax highlighting, results display, and export capabilities
- View, edit, insert, and delete table data with a spreadsheet-like interface without leaving your IDE
- Launch a psql shell in your integrated terminal for direct SQL access

All without leaving your editor. Learn more about [branching in Neon](https://neon.com/docs/guides/branching-intro) and [Neon Local](https://neon.com/docs/local/neon-local).

## Requirements

- [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed and running
- [VS Code 1.85.0+](https://code.visualstudio.com/), [Cursor](https://cursor.sh/), [Windsurf](https://codeium.com/windsurf), or another VS Code-compatible editor
- A [Neon account](https://neon.tech) and [API key](https://neon.com/docs/manage/api-keys) (for ephemeral branches only; you can also create new keys from the extension)

## Install the extension

The Neon Local Connect extension is available on both marketplaces:

**For VS Code:**

- Open the [Neon Local Connect extension page](https://marketplace.visualstudio.com/items?itemName=databricks.neon-local-connect) in the VS Code Marketplace.
- Click **Install**.

**For Cursor, Windsurf, and other VS Code-compatible editors:**

- Open the [Neon Local Connect extension page](https://open-vsx.org/extension/databricks/neon-local-connect) in the OpenVSX Registry.
- Click **Install** or follow your editor's extension installation process.
## Sign in to Neon

- Open the Neon Local Connect panel in the VS Code sidebar and click **Sign in**.
- Authenticate with Neon in your browser when prompted.

## Connect to a branch

You'll need to make a few selections — organization, project, and then branch — before connecting. If you're new to Neon, this reflects our object hierarchy: organizations contain projects, and projects contain branches. [Learn more about how Neon organizes your data.](https://neon.com/docs/manage/overview)

You can connect to two types of branches:

- **Existing branch:** For ongoing development, features, or team collaboration. The branch remains available until you delete it. Use this when you want to keep your changes and collaborate with others.
- **Ephemeral branch:** For temporary, disposable environments (tests, CI, experiments). The extension creates the branch when you connect and deletes it automatically when you disconnect—no manual cleanup required. In CI or CLI workflows, you'd have to script this yourself. The extension does it for you.

As part of choosing your connection, you'll also be asked to choose a driver type: **PostgreSQL** for most Postgres connections, or **Neon serverless** for edge/HTTP. [Read more about connection types](https://neon.com/docs/connect/choose-connection).

Tab: Existing branch

Connect to an existing branch (e.g., `main`, `development`, or a feature branch):

Tab: Ephemeral branch

Connect to an ephemeral branch (created just for your session):

**Note**: Selecting an ephemeral branch will prompt you to create and import an API key for authentication.

## Create a new branch

Or you can create a new persistent branch for feature development, bug fixes, or collaborative work:

1. Select your organization and project
2. Click **Create new branch...** in the branch dropdown
3. Enter a descriptive branch name (e.g., `feature/user-authentication`, `bugfix/login-validation`)
4. Choose the parent branch you want to branch from (e.g., `production`, `development`)

The extension creates the new branch and connects you immediately. This branch persists until you manually delete it.

## Use the static connection string

After connecting, find your local connection string in the extension panel. Copy it, update it with your database name, and add it to your app's `.env` or config.

```env
DATABASE_URL="postgres://neon:npg@localhost:5432/"
```

Your app connects to `localhost:5432`, while the Neon Local service routes the traffic to your actual Neon branch in the cloud.

> You only need to set this connection string once, no matter how many times you create, switch, or reset branches. Neon Local handles all the routing behind the scenes, so you never have to update your app config again.

## Start developing

Your application now connects to `localhost:5432` using the driver you selected in the extension (Postgres or Neon serverless). See the quickstarts for your language or framework for more details.
- [Framework quickstarts](https://neon.com/docs/get-started/frameworks)
- [Language quickstarts](https://neon.com/docs/get-started/languages)

## Database schema view

Once connected, the extension provides a comprehensive **Database Schema** view in the sidebar that lets you explore your database structure visually:

### What you can see:

- **Databases**: All available databases in your connected branch
- **Schemas**: Database schemas organized in a tree structure
- **Tables & Views**: All tables and views with their column definitions
- **Data Types**: Column data types, constraints, and relationships
- **Primary Keys**: Clearly marked primary key columns
- **Foreign Keys**: Visual indicators for foreign key relationships

### What you can do in the schema view

- **Right-click any table** to access quick actions:
  - **Query Table**: Opens a pre-filled `SELECT *` query in the SQL Editor
  - **View Table Data**: Opens the table data in an editable spreadsheet view
  - **Truncate Table**: Remove all rows from a table
  - **Drop Table**: Delete the table entirely
- **Right-click databases** to launch a psql shell for that specific database
- **Refresh** the schema view to see the latest structural changes
- **Expand/collapse** database objects to focus on what you need

The schema view automatically updates when you switch between branches, so you always see the current state of your connected database.

## Built-in SQL Editor

Execute SQL queries directly in your IDE with the integrated SQL Editor:

### Features:

- **Query Execution**: Run queries with `Ctrl+Enter` or the Execute button
- **Results Display**: View query results in a tabular format with:
  - Column sorting and filtering
  - Export to CSV/JSON formats
  - Performance statistics (execution time, rows affected, etc.)
  - Error highlighting with detailed messages
- **Database Context**: Automatically connects to the selected database

### How to use:

1. **From Schema View**: Right-click any table and select "Query Table" for a pre-filled SELECT query
2. **From Actions Panel**: Click "Open SQL Editor" to start with a blank query
3. **From Command Palette**: Use `Ctrl+Shift+P` and search for "Neon: Open SQL Editor"

The SQL Editor integrates seamlessly with your database connection, so you can query any database in your current branch without additional setup.

## Table data management

View and edit your table data with a powerful, spreadsheet-like interface:

### Viewing data:

- **Paginated Display**: Navigate through large datasets with page controls
- **Column Management**: Show/hide columns, sort by any column
- **Data Types**: Visual indicators for different data types (primary keys, foreign keys, etc.)
- **Null Handling**: Clear visualization of NULL values

### Editing capabilities:

- **Row Editing**: Click the pen (edit) icon next to any row to edit all fields inline (requires a primary key)
- **Insert New Rows**: Add new records with the "Add Row" button
- **Delete Rows**: Remove records with confirmation dialogs (requires a primary key)
- **Batch Operations**: Edit multiple fields before saving changes
- **Data Validation**: Real-time validation based on column types and constraints

> **Note**: Row editing and deletion require tables to have a primary key defined. This ensures data integrity by uniquely identifying rows for safe updates.

### How to access:

1. **From Schema View**: Right-click any table and select "View Table Data"
2. The data opens in a new tab with full editing capabilities
3. Changes are immediately applied to your database
4. Use the refresh button to see updates from other sources

Perfect for quick data inspection, testing, and small data modifications without writing SQL.

## Available commands

You can run any command by opening the Command Palette (`Cmd+Shift+P` or `Ctrl+Shift+P`) and typing "Neon Local Connect: ...".

_All commands below are available under the "Neon Local Connect:" prefix in the Command Palette._

| Command | Description |
| ------------------------ | ------------------------------------------------------------------------------------ |
| **Import API Key** | Import your Neon API key for authentication. |
| **Launch PSQL** | Open a psql shell in your integrated terminal for direct SQL access. |
| **Open SQL Editor** | Launch the Neon SQL Editor in your browser for advanced queries and data inspection. |
| **Open Table View** | Browse your database schema and data in the Neon Console. |
| **Disconnect** | Stop the local proxy connection. |
| **Clear Authentication** | Remove stored authentication tokens. |

## Panel actions

Once connected, the Neon Local Connect panel provides quick access to common database operations:

### Branch management:

- **Reset from Parent Branch:** Instantly revert your branch to match the current state of its parent. Learn more about branch reset in [Docs: Branch Reset](https://neon.com/docs/guides/reset-from-parent). To reset a branch, right-click the branch in the **Database Schema** view and select **Reset from Parent Branch** from the context menu.

### Database tools (available in the main panel):

- **Open SQL Editor:** Launch the Neon SQL Editor in your browser for advanced queries
- **Open Table View:** Browse your database schema and data in the Neon Console
- **Launch PSQL:** Open a psql shell in the integrated terminal for direct SQL access

### Built-in database tools (new in your IDE):

- **Database Schema View:** Explore your database structure in the sidebar with an expandable tree view
- **Built-in SQL Editor:** Write and execute queries directly in your IDE with results display
- **Table Data Editor:** View and edit table data with a spreadsheet-like interface
- **Context Menus:** Right-click databases, tables, and views for quick actions like querying and data management

## Next steps & resources

- [Neon Local documentation](https://neon.com/docs/local/neon-local)
- [Branching in Neon](https://neon.com/docs/guides/branching-intro)
- [Serverless driver](https://neon.com/docs/serverless/serverless-driver)
- [API keys](https://neon.com/docs/manage/api-keys)

---

# Source: https://neon.com/llms/local-neon-local.txt

# Neon Local

> The Neon Local documentation outlines the setup and usage of Neon Local, a proxy service that connects your local development environment to your Neon cloud database, facilitating development and testing without changing connection strings.

## Source

- [Neon Local HTML](https://neon.com/docs/local/neon-local): The original HTML version of this documentation

[Neon Local](https://github.com/neondatabase-labs/neon_local) is a proxy service that creates a local interface to your Neon cloud database. It supports two main use cases:

1. **Connecting to existing Neon branches** - Connect your app to any existing branch in your Neon project
2. **Connecting to ephemeral Neon branches** - Connect your app to a new ephemeral database branch that is instantly created when the Neon Local container starts and deleted when the container stops

Your application connects to a local Postgres endpoint, while Neon Local handles routing and authentication to the correct project and branch. This removes the need to update connection strings when working across database branches.

## Connect to existing Neon branch

To connect to an existing Neon branch, provide the `BRANCH_ID` environment variable to the container. This allows you to work with a specific branch without creating a new one.

### Docker run

```shell
docker run \
  --name db \
  -p 5432:5432 \
  -e NEON_API_KEY= \
  -e NEON_PROJECT_ID= \
  -e BRANCH_ID= \
  neondatabase/neon_local:latest
```

### Docker Compose

```yaml
db:
  image: neondatabase/neon_local:latest
  ports:
    - '5432:5432'
  environment:
    NEON_API_KEY: ${NEON_API_KEY}
    NEON_PROJECT_ID: ${NEON_PROJECT_ID}
    BRANCH_ID: ${BRANCH_ID}
```

## Ephemeral database branches for development and testing

To create ephemeral branches (the default behavior), provide the `PARENT_BRANCH_ID` environment variable instead of `BRANCH_ID`. The Neon Local container automatically creates a new ephemeral branch of your database when the container starts, and deletes it when the container stops. This ensures that each time you deploy your app via Docker Compose, you have a fresh copy of your database — without needing manual cleanup or orchestration scripts. Your database branch lifecycle is tied directly to your Docker environment.

### Docker run

```shell
docker run \
  --name db \
  -p 5432:5432 \
  -e NEON_API_KEY= \
  -e NEON_PROJECT_ID= \
  -e PARENT_BRANCH_ID= \
  neondatabase/neon_local:latest
```

### Docker Compose

```yaml
db:
  image: neondatabase/neon_local:latest
  ports:
    - '5432:5432'
  environment:
    NEON_API_KEY: ${NEON_API_KEY}
    NEON_PROJECT_ID: ${NEON_PROJECT_ID}
    PARENT_BRANCH_ID: ${PARENT_BRANCH_ID}
```

## Docker run instructions

Run the Neon Local container using the following `docker run` command:

```shell
docker run \
  --name db \
  -p 5432:5432 \
  -e NEON_API_KEY= \
  -e NEON_PROJECT_ID= \
  neondatabase/neon_local:latest
```

## Docker Compose instructions

Add Neon Local to your `docker-compose.yml`:

```yaml
db:
  image: neondatabase/neon_local:latest
  ports:
    - '5432:5432'
  environment:
    NEON_API_KEY: ${NEON_API_KEY}
    NEON_PROJECT_ID: ${NEON_PROJECT_ID}
```

## Multi-driver support

The Neon Local container now supports both the `postgres` and Neon `serverless` drivers simultaneously through a single connection string. You no longer need to specify a driver or configure different connection strings for different drivers.

## Connecting your app (Postgres driver)

Connect to Neon Local using a standard Postgres connection string.

### Docker run

```shell
postgres://neon:npg@localhost:5432/?sslmode=require
```

### Docker Compose

```shell
postgres://neon:npg@{db}:5432/?sslmode=require
# where {db} is the name of the Neon Local service in your compose file
```

**Note**: For JavaScript applications, the Neon Local container uses an automatically generated self-signed certificate to secure communication between your app and the container. JavaScript applications using the `pg` or `postgres` libraries to connect to the Neon Local proxy will also need to add the following configuration to allow your app to connect using the self-signed certificate.
```javascript
ssl: { rejectUnauthorized: false }
```

## Connecting your app (Neon serverless driver)

Connect using the Neon [serverless driver](https://neon.com/docs/serverless/serverless-driver).

**Note**: The Neon Local container only supports HTTP-based communication using the Neon serverless driver, not WebSockets. The following configurations will enable your app to communicate using only HTTP traffic with your Neon database.

### Docker run

```javascript
import { neon, neonConfig } from '@neondatabase/serverless';

neonConfig.fetchEndpoint = 'http://localhost:5432/sql';
neonConfig.useSecureWebSocket = false;
neonConfig.poolQueryViaFetch = true;

const sql = neon('postgres://neon:npg@localhost:5432/');
```

### Docker Compose

```javascript
import { neon, neonConfig } from '@neondatabase/serverless';

// where {db} is the name of the Neon Local service in your compose file
neonConfig.fetchEndpoint = 'http://{db}:5432/sql';
neonConfig.useSecureWebSocket = false;
neonConfig.poolQueryViaFetch = true;

const sql = neon('postgres://neon:npg@{db}:5432/');
```

No additional environment variables are needed - the same Docker configuration works for both drivers:

```shell
docker run \
  --name db \
  -p 5432:5432 \
  -e NEON_API_KEY= \
  -e NEON_PROJECT_ID= \
  neondatabase/neon_local:latest
```

## Environment variables and configuration options

| Variable | Description | Required | Default |
| --- | --- | --- | --- |
| `NEON_API_KEY` | Your Neon API key. [Manage API Keys](https://neon.com/docs/manage/api-keys) | Yes | N/A |
| `NEON_PROJECT_ID` | Your Neon project ID. Found under Project Settings → General in the Neon Console. | Yes | N/A |
| `BRANCH_ID` | Connect to an existing Neon branch. Mutually exclusive with `PARENT_BRANCH_ID`. | No | N/A |
| `PARENT_BRANCH_ID` | Create an ephemeral branch from the specified parent. Mutually exclusive with `BRANCH_ID`. | No | your project's default branch |
| `DRIVER` | **Deprecated** - Both drivers are now supported simultaneously. | No | N/A |
| `DELETE_BRANCH` | Set to `false` to persist branches after container shutdown. | No | `true` |

## Persistent Neon branch per Git branch

To persist a branch per Git branch, add the following volume mounts:

```yaml
db:
  image: neondatabase/neon_local:latest
  ports:
    - '5432:5432'
  environment:
    NEON_API_KEY: ${NEON_API_KEY}
    NEON_PROJECT_ID: ${NEON_PROJECT_ID}
    DELETE_BRANCH: false
  volumes:
    - ./.neon_local/:/tmp/.neon_local
    - ./.git/HEAD:/tmp/.git/HEAD:ro,consistent
```

**Note**: This will create a `.neon_local` directory in your project to store metadata. Be sure to add `.neon_local/` to your `.gitignore` to avoid committing database information.

## Git integration using Docker on Mac

If you are using Docker Desktop for Mac, ensure that your VM settings use **gRPC FUSE** instead of **VirtioFS**. There is currently a known bug with VirtioFS that prevents proper branch detection and live updates inside containers.

---

# Source: https://neon.com/llms/manage-account-recovery.txt

# Account recovery

> The "Account Recovery" documentation outlines the procedures for Neon users to regain access to their accounts, detailing steps for resetting passwords and recovering accounts through email verification.
## Source

- [Account recovery HTML](https://neon.com/docs/manage/account-recovery): The original HTML version of this documentation

If a former employee owned a Neon account and didn't shut it down or transfer access before leaving, you can follow the steps outlined below to recover the account.

## Regain access through the original login method

First, determine how the account was accessed.

### A. If the account used a third-party login

If the former employee signed up with a third-party identity provider (e.g., Google, GitHub, Microsoft, Hasura), you must recover access to that account through your organization's identity provider. Neon cannot bypass third-party authentication.

### B. If the account used email and password

If you have access to the former employee's company email account:

1. Go to the [Neon login page](https://console.neon.tech/login)
2. Click **Forgot Password**
3. Enter the former employee's email address
4. Access the password reset link from their inbox
5. Set a new password and sign in

Once signed in, you can:

- [Update the email address](https://neon.com/docs/manage/accounts#update-personal-information)
- [Transfer project ownership](https://neon.com/docs/manage/orgs-project-transfer)
- [Add an admin](https://neon.com/docs/manage/orgs-manage#set-permissions) to your projects or organization
- [Update billing details](https://neon.com/docs/introduction/manage-billing)

**Note**: For security reasons, we recommend immediately revoking access to company email accounts when employees leave your organization.

## If you cannot access the email or login method

If the original login method is inaccessible, we can assist through a manual identity verification process. To begin:

1. Open a [Neon Support ticket](https://console.neon.tech/app/projects?modal=support) from your Neon account.
2. Provide:
   - A signed statement on company letterhead explaining the situation
   - Contact details for another employee at your company

Neon will:

- Notify the email address associated with the account and wait 24 business hours (Mon–Fri) for a response
- Send you a document to sign electronically
- Schedule a short video call to verify your identity (please have a government-issued ID ready)

**Info**: Neon will not store or copy your ID. It's used only to confirm that the person on the call is who they say they are.

Once all steps are complete, we'll grant access to the account or transfer project ownership as needed.

**Important**: Neon Support may request additional information during or after the verification process. Manual account recovery is a sensitive procedure designed to protect your organization's data and prevent unauthorized access.

---

# Source: https://neon.com/llms/manage-accounts.txt

# Accounts

> The "Accounts" documentation for Neon details the procedures for managing user accounts, including account creation, access control, and permissions within the Neon platform.

## Source

- [Accounts HTML](https://neon.com/docs/manage/accounts): The original HTML version of this documentation

Your **Neon account** is your personal identity for logging in, managing your profile, and authenticating actions across all organizations you belong to.

## Account settings

You can access your Neon account settings from anywhere in the Console. Just click your profile avatar and select **Account settings** from the menu. Here's what you can do from **Account settings**.

## Update personal information

Change your name or email address.
**If you signed up with email**

By default, your email will be used as your first name. You may want to add your first and last name here to complete your profile. Your email is your login and where we'll send all account communications.

**If you signed up with Google, GitHub, or another provider**

Your name and email come from your social account. Feel free to change your name to whatever works for you. If you change your email, we'll unlink your social account and switch you to email sign-in. After that, you'll use your new email and password to log in.

**Changing your email**

If you change your email (whether you started with email or social login), you'll get a verification email to confirm. Once confirmed, your new address becomes your login. If you're using a social login and change your email, we'll unlink your social account and switch you to email sign-in.

[Change your email in the Neon Console](https://console.neon.tech/app/settings?modal=change_email)

### Need to switch your login method?

**If you signed up with Google or GitHub**, you can switch to email login by changing your email and setting a password.

**If you signed up with email**, it's not currently possible to switch to a social login. [Contact Support](https://neon.com/docs/introduction/support) and we'll help you out.

## Change password

No surprises here — just enter your current password, then your new one (twice). We'll enforce our current password rules for your security.

## Create personal API keys

Personal API keys let you securely interact with the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), including through command-line tools, scripts, or third-party integrations that use the API. Your personal API key works for any organization you belong to — so you can manage projects, automate tasks, or use integrations across all your orgs with a single key. The actions you can perform with your key depend on your role in each org (admin, member, or collaborator).

You can create, view, and revoke your personal API keys here in your account settings. [Learn more about API keys](https://neon.com/docs/manage/api-keys)

## Delete account

Delete your Neon account after leaving or deleting all orgs and projects.

**Leaving an org**

If you're the only admin, promote another member to admin first. You can then leave the org.

**Deleting an org**

Remove all members (so you're the only one left), delete all projects, and you can then delete the org. Once you have no orgs left, you can then click **Delete**.

### What happens after you delete your account

- You'll receive a confirmation email.
- If you change your mind, you can reactivate your account by logging in again within 30 days. Your personal info will be restored, though not your API keys.
- After 30 days, your account and all related data will be permanently deleted.

## Need to recover access to an account?

See [Account Recovery](https://neon.com/docs/manage/account-recovery) for step-by-step instructions if you need to regain access to a Neon account.

---

# Source: https://neon.com/llms/manage-api-keys.txt

# Manage API Keys

> The "Manage API Keys" document outlines procedures for creating, managing, and securing API keys within the Neon platform, facilitating controlled access to Neon's services and resources.
## Source - [Manage API Keys HTML](https://neon.com/docs/manage/api-keys): The original HTML version of this documentation Most actions performed in the Neon Console can also be performed using the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). You'll need an API key to validate your requests. Each key is a randomly generated secret token that you must include when calling Neon API methods. All keys remain valid until deliberately revoked. ## Types of API keys Neon supports three types of API keys:

| Key Type | Who Can Create | Scope | Validity |
| ---------------------- | --------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------ |
| Personal API Key | Any user | All organization projects where the user is a member | Valid until revoked; org project access ends if user leaves organization |
| Organization API Key | Organization administrators | All projects within the organization | Valid until revoked |
| Project-scoped API Key | Any organization member | Single specified project | Valid until revoked or project leaves organization |

While there is no strict limit on the number of API keys you can create, we recommend keeping it under 10,000 per Neon account. ## Creating API keys You'll need to create your first API key from the Neon Console, where you are already authenticated. You can then use that key to generate new keys from the API. **Note**: When creating API keys from the Neon Console, the secret token will be displayed only once. Copy it immediately and store it securely in a credential manager (like AWS Key Management Service or Azure Key Vault) — you won't be able to retrieve it later. If you lose an API key, you'll need to revoke it and create a new one. ### Create a personal API key You can create a personal API key in the Neon Console or using the Neon API. Tab: Console In the Neon Console, select **Account settings** > **API keys**. You'll see a list of any existing keys, along with the button to create a new key. Tab: API You'll need an existing personal key (create one from the Neon Console) in order to create new keys using the API. If you've got a key ready, you can use the following request to generate new keys: ```bash curl https://console.neon.tech/api/v2/api_keys -H "Content-Type: application/json" -H "Authorization: Bearer $PERSONAL_API_KEY" -d '{"key_name": "my-key"}' ``` **Parameters:** - `key_name`: A descriptive name for the API key (e.g., "development", "staging", "ci-pipeline") **Response:** ```json { "id": 177630, "key": "neon_api_key_1234567890abcdef1234567890abcdef" } ``` To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createapikey). ### Create an organization API key Organization API keys provide admin-level access to all organization resources. Only organization admins can create these keys. To create an organization API key, you must use your personal API key and be an administrator in the organization. Neon will verify your admin status before allowing the key creation. For more detail about organization-related methods, see [Organization API Keys](https://neon.com/docs/manage/orgs-api#api-keys). Tab: Console Navigate to your organization's **Settings** > **API keys** to view a list of existing keys and the button to create a new key. Tab: API To create an organization API key via the API, you need to use your personal API key.
You also need to have admin-level permissions in the specified organization. ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys' \ --header 'Content-Type: application/json' \ --header "Authorization: Bearer $PERSONAL_API_KEY" \ --data '{"key_name": "orgkey"}' ``` **Response:** ```json { "id": 165434, "key": "neon_org_key_1234567890abcdef1234567890abcdef", "name": "orgkey", "created_at": "2022-11-15T20:13:35Z", "created_by": "user_01h84bfr2npa81rn8h8jzz8mx4" } ``` ### Create project-scoped organization API keys Project-scoped API keys have [member-level access](https://neon.com/docs/manage/organizations#user-roles-and-permissions), meaning they **cannot** delete the project they are associated with. These keys: - Can only access and manage the specified project - Cannot perform organization-related actions or create new projects - Will stop working if the project is transferred out of the organization Tab: Console In your organization's **Settings** > **API keys**, click **Create new** and select **Project-scoped** to create a key for your chosen project. Tab: API Any organization member can create an API key for any organization-owned project using the following command: ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys' \ --header 'Content-Type: application/json' \ --header "Authorization: Bearer $PERSONAL_API_KEY" \ --data '{"key_name":"only-this-project", "project_id": "some-project-123"}' ``` **Parameters:** - `org_id`: The ID of your organization - `key_name`: A descriptive name for the API key - `project_id`: The ID of the project to which the API key will be scoped **Example Response:** ```json { "id": 1904821, "key": "neon_project_key_1234567890abcdef1234567890abcdef", "name": "test-project-scope", "created_at": "2024-12-11T21:34:58Z", "created_by": "user_01h84bfr2npa81rn8h8jzz8mx4", "project_id": "project-id-123" } ``` ## Make an API call The following example demonstrates how to use your API key to retrieve projects: ```bash curl 'https://console.neon.tech/api/v2/projects' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` where: - `"https://console.neon.tech/api/v2/projects"` is the resource URL, which includes the base URL for the Neon API and the `/projects` endpoint. - The `"Accept: application/json"` in the header specifies the accepted response type. - The `Authorization: Bearer $NEON_API_KEY` entry in the header specifies your API key. Replace `$NEON_API_KEY` with an actual API key. A request without this header, or containing an invalid or revoked API key, fails and returns a `401 Unauthorized` HTTP status code. - [`jq`](https://stedolan.github.io/jq/) is an optional third-party tool that formats the JSON response, making it easier to read. Details: Response body For attribute definitions, find the [Retrieve project details](https://api-docs.neon.tech/reference/getproject) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json { "projects": [ { "cpu_used_sec": 0, "id": "purple-shape-411361", "platform_id": "aws", "region_id": "aws-us-east-2", "name": "purple-shape-411361", "provisioner": "k8s-pod", "pg_version": 15, "locked": false, "created_at": "2023-01-03T18:22:56Z", "updated_at": "2023-01-03T18:22:56Z", "proxy_host": "us-east-2.aws.neon.tech", "branch_logical_size_limit": 3072 } ] } ``` Refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) for other supported Neon API methods. ## List API keys Tab: Console Navigate to **Account settings** > **API keys** to view your personal API keys, or your organization's **Settings** > **API keys** to view organization API keys. Tab: API For personal API keys: ```bash curl "https://console.neon.tech/api/v2/api_keys" \ -H "Authorization: Bearer $NEON_API_KEY" \ -H "Accept: application/json" | jq ``` For organization API keys: ```bash curl "https://console.neon.tech/api/v2/organizations/{org_id}/api_keys" \ -H "Authorization: Bearer $NEON_API_KEY" \ -H "Accept: application/json" | jq ``` ## Revoke API Keys You should revoke API keys that are no longer needed or if you suspect a key may have been compromised. Key details: - The action is immediate and permanent - All API requests using the revoked key will fail with a 401 Unauthorized error - The key cannot be reactivated — you'll need to create a new key if access is needed again ### Who can revoke keys - Personal API keys can only be revoked by the account owner - Organization API keys can be revoked by organization admins - Project-scoped keys can be revoked by organization admins Tab: Console In the Neon Console, navigate to **Account settings** > **API keys** and click **Revoke** next to the key you want to revoke. The key will be immediately revoked. Any request that uses this key will now fail. Tab: API The following Neon API method revokes the specified API key. The `key_id` is a required parameter: ```bash curl -X DELETE \ 'https://console.neon.tech/api/v2/api_keys/177630' \ -H "Accept: application/json" \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` Details: Response body For attribute definitions, find the [Revoke API key](https://api-docs.neon.tech/reference/revokeapikey) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "id": 177630, "name": "mykey", "revoked": true, "last_used_at": "2022-12-23T23:38:35Z", "last_used_from_addr": "192.0.2.21" } ``` To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createapikey). --- # Source: https://neon.com/llms/manage-azure.txt # Neon on Azure > The "Neon on Azure" documentation outlines the steps for deploying and managing Neon databases on Microsoft Azure, detailing configuration, setup, and integration processes specific to Azure environments. ## Source - [Neon on Azure HTML](https://neon.com/docs/manage/azure): The original HTML version of this documentation **Important** deprecated: The Neon Azure Native Integration is deprecated and reaches end of life on **January 31, 2026**. After this date, Azure-managed organizations will no longer be available. [Migrate your projects to a Neon-managed organization](https://neon.com/docs/import/migrate-from-azure-native) to continue using Neon. ## Key benefits Deploying Neon natively on Azure lets you manage your Neon organization alongside the rest of your Azure infrastructure. 
Key benefits include: - **Azure-native management**: Provision and manage Neon organizations directly from the Azure portal. - **Single sign-on (SSO)**: Access Neon using your Azure credentials—no separate logins required. - **Consolidated billing**: Simplify cost management with unified billing through the Azure Marketplace. - **Integrated workflows**: Use the Azure CLI and SDKs to manage Neon as part of your regular workflows, integrated with your existing Azure resources. **Note**: Management of Neon organizations, projects, and branches is supported via the Azure portal, CLI, and SDK. We continue to enhance the integration between Neon and Azure, including with other Azure native services. Additional Neon features and advanced configurations remain accessible through the Neon Console. ### Getting started - [Deploy Neon on Azure](https://neon.com/docs/azure/azure-deploy): Deploy Neon Postgres as Native ISV Service from the Azure Marketplace - [Manage billing on Azure](https://neon.com/docs/introduction/billing-azure-marketplace): Manage billing for the Neon Native ISV Service on Azure - [Manage Neon on Azure](https://neon.com/docs/azure/azure-manage): How to manage your Neon Native ISV Service on Azure - [Develop with Neon on Azure](https://neon.com/docs/azure/azure-develop): Resources for developing with Neon on Azure, including live AI demos --- # Source: https://neon.com/llms/manage-backup-pg-dump-automate.txt # Automate pg_dump backups > The document outlines the process for automating PostgreSQL database backups using the `pg_dump` utility within the Neon environment, detailing the necessary steps and configurations to ensure regular and reliable data preservation. ## Source - [Automate pg_dump backups HTML](https://neon.com/docs/manage/backup-pg-dump-automate): The original HTML version of this documentation Keeping regular backups of your database is critical for protecting against data loss. While Neon offers an [instant restore](https://neon.com/docs/introduction/branch-restore) feature (point-in-time restore) for backups of up to 30 days, there are scenarios—such as business continuity, disaster recovery, or regulatory compliance—where maintaining independent and longer-lived backup files may be necessary. In these cases, using the Postgres `pg_dump` tool to create backups and storing them on a reliable external service (like an AWS S3 bucket) gives you control over long-term retention and recovery of your data. Manually performing backups can be tedious and time consuming, so automation is key to ensure you're taking backups consistently. An automated backup process also lets you enforce retention policies by automatically cleaning up old backups, saving storage, and keeping your backup repository tidy. This two-part guide walks you through setting up an automated backup pipeline using `pg_dump` and GitHub Actions. You will configure everything needed to run nightly backups and store them in S3, ensuring your data is available to restore if needed. 
- [Part 1: Create an S3 bucket to store backups](https://neon.com/docs/manage/backups-aws-s3-backup-part-1): Set up an AWS S3 bucket for storing backups - [Part 2: Automate with GitHub Actions](https://neon.com/docs/manage/backups-aws-s3-backup-part-2): Schedule nightly backups with GitHub Actions and pg_dump --- # Source: https://neon.com/llms/manage-backup-pg-dump.txt # Backups with pg_dump > The document explains how to use the `pg_dump` utility to create backups of Neon databases, detailing the steps and commands necessary for exporting data effectively. ## Source - [Backups with pg_dump HTML](https://neon.com/docs/manage/backup-pg-dump): The original HTML version of this documentation This topic describes how to create a backup of your Neon database using the Postgres `pg_dump` utility and how to restore a backup using `pg_restore`. **Important**: Avoid using `pg_dump` over a [pooled connection string](https://neon.com/docs/reference/glossary#pooled-connection-string) (see PgBouncer issues [452](https://github.com/pgbouncer/pgbouncer/issues/452) & [976](https://github.com/pgbouncer/pgbouncer/issues/976) for details). Use an [unpooled connection string](https://neon.com/docs/reference/glossary#unpooled-connection-string) instead. ## Prerequisites - Make sure `pg_dump` and `pg_restore` are installed. You can verify by running `pg_dump -V`. - We recommend using the latest versions of `pg_dump` and `pg_restore`, and ensuring that the client version matches your Neon project's Postgres version. ## Install `pg_dump` and `pg_restore` If you don't have the `pg_dump` and `pg_restore` utilities installed locally, you'll need to install them on your preferred platform. Tab: Windows 1. Install PostgreSQL using the official installer from https://www.postgresql.org/download/windows/. 2. `pg_dump` and `pg_restore` are installed by default and can be found in the PostgreSQL `bin` directory. Tab: Mac 1. Install PostgreSQL using Homebrew with the command: `brew install postgresql`. 2. `pg_dump` and `pg_restore` come with the installation and are available in your `PATH`. Tab: Linux 1. On Ubuntu/Debian, install the PostgreSQL client tools with: `sudo apt-get install postgresql-client`. 2. `pg_dump` and `pg_restore` will be available after installation. Tab: Docker 1. Pull the official PostgreSQL Docker image: `docker pull postgres`. 2. Run the container with: `docker run --name postgres -e POSTGRES_PASSWORD=yourpassword -d -p 5432:5432 postgres`. 3. Verify `pg_dump` is available by running: `docker run --rm postgres pg_dump --version`. ## Creating a backup with `pg_dump` Following this procedure creates a database backup locally, on the machine where you run the `pg_dump` command. 1. Retrieve the connection string for your Neon database by navigating to your Neon **Project Dashboard** and clicking the **Connect** button to open the **Connect to your database** modal. 2. Deselect the **Connection pooling** option. You need a direct connection string, not a pooled one. Your connection string should look something like this: ```bash postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require ``` 3. Create a backup of your Neon database by running the following `pg_dump` command with your Neon database connection string.
```bash pg_dump -Fc -v -d "<connection_string>" -f <dump_file_name> ``` After adding your Neon database connection string and a dump file name, your command will look something like this: ```bash pg_dump -Fc -v -d "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require" -f mydatabase.bak ``` The `pg_dump` command above includes these arguments: - `-Fc`: Sends the output to a custom-format archive suitable for input into `pg_restore`. - `-v`: Runs `pg_dump` in verbose mode, allowing you to monitor what happens during the dump operation. - `-d`: Specifies the [connection string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) for your Neon database. - `-f <dump_file_name>`: The dump file name. It can be any name you choose (`mydumpfile.bak`, for example). For more command options, see [Advanced pg_dump and pg_restore options](https://neon.com/docs/manage/backup-pg-dump#advanced-pgdump-and-pgrestore-options). ## Restoring a backup with `pg_restore` This procedure shows how to restore a database using the `pg_restore` utility from a backup file created using `pg_dump`, as described above. 1. Create a new Neon project. 2. Create a database with the same name as the one you backed up. The `pg_dump` instructions above created a backup of a database named `neondb`. Your database name is likely different. 3. Retrieve the connection string for your Neon database: Go to your Neon project and click the **Connect** button to open the **Connect to your database** modal. Deselect the **Connection pooling** option. You need a direct connection string, not a pooled one. Your connection string should look something like this: ```bash postgresql://alex:AbC123dEf@ep-dry-morning-a8vn5za2.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require ``` 4. Restore your data to the target database in Neon with `pg_restore`. ```bash pg_restore -v -d "<connection_string>" <dump_file_name> ``` After adding your Neon database connection string and the dump file name, your command will look something like this: ```bash pg_restore -v -d "postgresql://alex:AbC123dEf@ep-dry-morning-a8vn5za2.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require" mydatabase.bak ``` The example above includes these arguments: - `-v`: Runs `pg_restore` in verbose mode, allowing you to monitor what happens during the restore operation. - `-d`: Specifies the Neon database to connect to. The value is a Neon database connection string. See [Before you begin](https://neon.com/docs/manage/backup-pg-dump#before-you-begin). - `<dump_file_name>` is the name of the dump file you created with `pg_dump`. For more command options, see [Advanced pg_dump and pg_restore options](https://neon.com/docs/manage/backup-pg-dump#advanced-pgdump-and-pgrestore-options). ## `pg_dump` and `pg_restore` example The following example shows how data is dumped from a source database named `neondb` in one Neon project and restored to a `neondb` database in another Neon project using the commands described in the previous sections. (A database named `neondb` was created in the Neon project prior to running the restore operation.)
Before performing this procedure: - A new Neon project was created for the destination database, and a database with the same name as the source database was created (`neondb`) - Connection strings for the source and destination databases were collected: - source: `postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require` - destination: `postgresql://alex:AbC123dEf@ep-dry-morning-a8vn5za2.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require` ```bash ~$ cd mydump ~/mydump$ pg_dump -Fc -v -d "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require" -f mydatabase.bak ~/mydump$ ls mydatabase.bak ~/mydump$ pg_restore -v -d "postgresql://alex:AbC123dEf@ep-dry-morning-a8vn5za2.us-east-2.aws.neon.tech/neondb?sslmode=require&channel_binding=require" mydatabase.bak ``` --- # Source: https://neon.com/llms/manage-backups-aws-s3-backup-part-1.txt # Create an S3 bucket to store Postgres backups > The document guides Neon users in creating an S3 bucket on AWS to store PostgreSQL backups, detailing the necessary steps and configurations for secure and efficient backup storage. ## Source - [Create an S3 bucket to store Postgres backups HTML](https://neon.com/docs/manage/backups-aws-s3-backup-part-1): The original HTML version of this documentation This guide will walk you through setting up an AWS S3 bucket to store your Postgres backups. **Note**: This is part one of a two-part guide. Continue to part two [here](https://neon.com/docs/manage/backups-aws-s3-backup-part-2). ## Prerequisites To complete this guide, you'll need an AWS user with the appropriate permissions to: - **Manage IAM resources**: Create an OpenID Connect (OIDC) identity provider and IAM roles. - **Manage S3**: Create an S3 bucket and update its bucket policy. These actions typically require AdministratorAccess or a custom IAM policy granting the necessary permissions. If you're not an administrator, ensure your user has IAM management privileges and S3 permissions. ## Set up AWS providers and roles There are three parts to the AWS setup: 1. Creating an OIDC Identity Provider 2. Creating a Role 3. Creating an S3 bucket and updating the S3 bucket policy ## Add an Identity provider An OIDC (OpenID Connect) Identity Provider (IdP) in AWS is a third-party service that handles authentication. To allow GitHub Actions to authenticate with AWS, you must add GitHub as an identity provider. To create a new Identity Provider, navigate to: **IAM** > **Access Management** > **Identity Providers**, and click **Add provider**. On the next screen select OpenID Connect and add the following to the Provider URL and Audience fields. 1. Provider URL: `https://token.actions.githubusercontent.com` 2. Audience: `sts.amazonaws.com` When you're done, click **Add Provider**. You should now see this provider in the list under: **IAM** > **Access Management** > **Identity Providers**. ## Create Role A Role is an identity that you can assume to obtain temporary security credentials for specific tasks or actions within AWS. Roles are used to delegate permissions and grant access to AWS services without the need for credentials like passwords or access keys. To create a new Role, navigate to: **IAM** > **Access Management** > **Roles**, and click **Create role**. On the next screen you can create a **Trusted Identity** for the Role.
## Select Trusted Identity On this screen select **Web Identity**, then select `token.actions.githubusercontent.com` from the **Identity Provider** dropdown menu. Once you select the **Identity Provider**, you'll be shown a number of fields to fill out. Select `sts.amazonaws.com` from the **Audience dropdown** menu, then fill out the GitHub repository details as per your requirements. When you're ready, click **Next**. ## Add Permissions — Skip You can skip selecting anything from this screen and click **Next** to continue. ## Name, review and create On this screen give the **Role** a name and description. You'll use the **Role** name in the code for the GitHub Action. Consider giving this role a specific, descriptive name to avoid confusion later. When you're ready click **Create role**. ## Set up the AWS S3 bucket There are two parts to creating an S3 bucket: 1. Creating an S3 bucket 2. Updating the bucket policy ## Create S3 bucket AWS S3 (Simple Storage Service) buckets are storage containers used to store objects in Amazon's cloud storage service. An S3 bucket can store any amount of data, from files and documents to images and videos, or in the case of a database backup, a .gz ([GNU zip](https://www.gnu.org/software/gzip/)) file. To create a new bucket, navigate to: **S3** > **buckets**, and click **Create bucket**. On the next screen select **General Purpose** for the bucket Type and then give your bucket a name. The most important thing to notice on this screen is the **region** where you're creating the bucket. It's recommended that you deploy your bucket to the same region as your database to minimize latency and reduce backup times. While having an S3 bucket in `us-east-1` and a database in `ap-southeast-1` isn't necessarily a problem, cross-region data transfers can take longer and may introduce additional costs. ## S3 bucket policy To ensure the **Role** being used in the GitHub Action can perform actions on the S3 bucket, you'll need to update the bucket policy. Select your bucket then select the **Permissions** tab and click **Edit**. You can now add the following policy which grants the **Role** you created earlier access to perform S3 **List**, **Get**, **Put** and **Delete** actions. ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::627917386332:role/neon-s3-backup-github-actions" }, "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": ["arn:aws:s3:::neon-s3-backup", "arn:aws:s3:::neon-s3-backup/*"] } ] } ``` In the snippet above, replace the Role name `neon-s3-backup-github-actions` with your **Role** name and replace the S3 bucket name `neon-s3-backup` with your S3 bucket name. When you're ready click **Save changes**. ## Next steps There are a couple of things to note before moving on to the second part of this guide. You'll be creating several GitHub Secrets to hold various values that you likely won't want to expose or repeat in code. These are: - `AWS_ACCOUNT_ID`: This can be found by clicking on your user name in the AWS console. - `S3_BUCKET_NAME`: In this example, this value would be `neon-s3-backup` - `IAM_ROLE`: In this example, this value would be `neon-s3-backup-github-actions` Make a note of these values before you proceed to [part two](https://neon.com/docs/manage/backups-aws-s3-backup-part-2).
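If you use the [GitHub CLI](https://cli.github.com/), you can create these repository secrets from your terminal rather than through the GitHub UI. A minimal sketch using the example values above — substitute your own values; the account ID shown is purely illustrative:

```bash
# Create the repository secrets used by the backup workflow in part two
# (run from within a clone of your GitHub repository; values are illustrative)
gh secret set AWS_ACCOUNT_ID --body "123456789012"
gh secret set S3_BUCKET_NAME --body "neon-s3-backup"
gh secret set IAM_ROLE --body "neon-s3-backup-github-actions"
```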
--- # Source: https://neon.com/llms/manage-backups-aws-s3-backup-part-2.txt # Set up a GitHub Action to perform nightly Postgres backups > This document guides Neon users through setting up a GitHub Action to automate nightly PostgreSQL database backups to AWS S3, detailing the necessary configurations and steps within the GitHub workflow. ## Source - [Set up a GitHub Action to perform nightly Postgres backups HTML](https://neon.com/docs/manage/backups-aws-s3-backup-part-2): The original HTML version of this documentation In this guide, you'll learn how to configure nightly Postgres backups using a scheduled GitHub Action and `pg_dump`. **Note**: This is part two of a two-part guide. Make sure you've completed [part 1](https://neon.com/docs/manage/backups-aws-s3-backup-part-1) first. ## Prerequisites Setting up a scheduled backup involves three key components: ### 1. AWS Requirements - You'll need your **AWS Account ID** and permissions to: - Create **IAM Roles** and **Identity Providers** - Create and manage **S3 buckets** - Update **S3 bucket policies** ### 2. Postgres Database - Ensure you have: - The **connection string** for your database - The **AWS region** where your database is deployed - The **Postgres version** your database is running ### 3. GitHub Action - You'll need **repository access** with permission to manage: - **Actions** and **Settings** > **Secrets and variables** ## Neon project setup Before looking at the code, first take a look at your Neon console dashboard. In our example there is only one project, with a single database named `acme-co-prod`. This database is running Postgres 17 and deployed in the `us-east-1` region. The goal is to back up this database to its own folder inside an S3 bucket, using the same name. ## Scheduled GitHub Action Using the same database naming convention as above, create a new file for the GitHub Action using the following folder structure. ```shell .github |-- workflows |-- acme-co-prod-backup.yml ``` This GitHub Action will run on a recurring schedule and save the backup file to an S3 bucket as defined by the environment variables. Below the code snippet we've explained what each part of the Action does. ```yml name: acme-co-prod-backup on: schedule: - cron: '0 5 * * *' # Runs at midnight EST (us-east-1) workflow_dispatch: jobs: db-backup: runs-on: ubuntu-latest permissions: id-token: write env: RETENTION: 7 DATABASE_URL: ${{ secrets.ACME_CO_PROD }} IAM_ROLE: ${{ secrets.IAM_ROLE }} AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }} S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }} AWS_REGION: 'us-east-1' PG_VERSION: '17' steps: - name: Install PostgreSQL run: | sudo apt update yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh sudo apt install -y postgresql-${{ env.PG_VERSION }} - name: Set PostgreSQL binary path run: echo "POSTGRES=/usr/lib/postgresql/${{ env.PG_VERSION }}/bin" >> $GITHUB_ENV - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }} aws-region: ${{ env.AWS_REGION }} - name: Set file, folder and path variables run: | GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').sql.gz" FOLDER_NAME="${{ github.workflow }}" UPLOAD_PATH="s3://${{ env.S3_BUCKET_NAME }}/${FOLDER_NAME}/${GZIP_NAME}" echo "GZIP_NAME=${GZIP_NAME}" >> $GITHUB_ENV echo "FOLDER_NAME=${FOLDER_NAME}" >> $GITHUB_ENV echo "UPLOAD_PATH=${UPLOAD_PATH}" >> $GITHUB_ENV - name: Create folder if it doesn't exist run: | if !
aws s3api head-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" 2>/dev/null; then aws s3api put-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" fi - name: Run pg_dump run: | $POSTGRES/pg_dump ${{ env.DATABASE_URL }} | gzip > "${{ env.GZIP_NAME }}" - name: Empty bucket of old files run: | THRESHOLD_DATE=$(date -d "-${{ env.RETENTION }} days" +%Y-%m-%dT%H:%M:%SZ) aws s3api list-objects --bucket ${{ env.S3_BUCKET_NAME }} --prefix "${{ env.FOLDER_NAME }}/" --query "Contents[?LastModified<'${THRESHOLD_DATE}'] | [?ends_with(Key, '.gz')].{Key: Key}" --output text | while read -r file; do aws s3 rm "s3://${{ env.S3_BUCKET_NAME }}/${file}" done - name: Upload to bucket run: | aws s3 cp "${{ env.GZIP_NAME }}" "${{ env.UPLOAD_PATH }}" --region ${{ env.AWS_REGION }} ``` ## Action configuration The first part of the GitHub Action specifies the name of the Action and sets the schedule for when it should run. ```yml name: acme-co-prod-backup on: schedule: - cron: '0 5 * * *' # Runs at midnight EST (us-east-1) workflow_dispatch: ``` - `name`: The workflow name, which is also used when creating the folder in the S3 bucket - `cron`: This determines how often the Action will run; take a look at the GitHub docs where the [POSIX cron syntax](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule) is explained - `workflow_dispatch`: This allows you to trigger the workflow manually from the GitHub UI ## Environment variables The next part deals with environment variables. Some variables are set inline in the Action but others are defined using [GitHub Secrets](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository). ```yml env: RETENTION: 7 DATABASE_URL: ${{ secrets.ACME_CO_PROD }} IAM_ROLE: ${{ secrets.IAM_ROLE }} AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }} S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }} AWS_REGION: 'us-east-1' PG_VERSION: '17' ``` - `RETENTION`: Determines how long a backup file should be retained before it's deleted - `DATABASE_URL`: The Neon Postgres connection string for the database you're backing up - `IAM_ROLE`: The name of the AWS IAM Role - `AWS_ACCOUNT_ID`: Your AWS account ID - `S3_BUCKET_NAME`: The name of the S3 bucket where all backups are being stored - `AWS_REGION`: The region where the S3 bucket is deployed - `PG_VERSION`: The version of Postgres to install in the GitHub Action environment ## GitHub Secrets As mentioned earlier, several of the environment variables in the Action are defined using GitHub secrets. These secrets can be added to your repository by navigating to **Settings** > **Secrets and variables** > **Actions**. ## Install PostgreSQL This step installs Postgres into the GitHub Action's virtual environment. The version to install is defined by the `PG_VERSION` environment variable. ```yml - name: Install PostgreSQL run: | sudo apt update yes '' | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh sudo apt install -y postgresql-${{ env.PG_VERSION }} ``` ## Set PostgreSQL binary path This step sets the `$POSTGRES` variable, allowing easy access to the Postgres binaries in the GitHub Action's environment.
```yml - name: Set PostgreSQL binary path run: echo "POSTGRES=/usr/lib/postgresql/${{ env.PG_VERSION }}/bin" >> $GITHUB_ENV ``` ## Configure AWS credentials This step configures the AWS credentials, enabling secure interaction between the GitHub Action and AWS services. ```yml - name: Configure AWS credentials uses: aws-actions/configure-aws-credentials@v4 with: role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }} aws-region: ${{ env.AWS_REGION }} ``` ## Set file, folder and path variables This step involves setting three variables that are all output to `GITHUB_ENV`. This allows other steps in the Action to access them. ```yml - name: Set file, folder and path variables run: | GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').sql.gz" FOLDER_NAME="${{ github.workflow }}" UPLOAD_PATH="s3://${{ env.S3_BUCKET_NAME }}/${FOLDER_NAME}/${GZIP_NAME}" echo "GZIP_NAME=${GZIP_NAME}" >> $GITHUB_ENV echo "FOLDER_NAME=${FOLDER_NAME}" >> $GITHUB_ENV echo "UPLOAD_PATH=${UPLOAD_PATH}" >> $GITHUB_ENV ``` The three variables are as follows: 1. `GZIP_NAME`: The name of the `.gz` file derived from the date, e.g., `February-20-2025@07:53:02.sql.gz` 2. `FOLDER_NAME`: The folder where the `.gz` files are to be uploaded 3. `UPLOAD_PATH`: This is the full path that includes the S3 bucket name, folder name and `.gz` file ## Create folder if it doesn't exist This step creates a new folder (if one doesn't already exist) inside the S3 bucket using the `FOLDER_NAME` as defined in the previous step. ```yml - name: Create folder if it doesn't exist run: | if ! aws s3api head-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" 2>/dev/null; then aws s3api put-object --bucket ${{ env.S3_BUCKET_NAME }} --key "${{ env.FOLDER_NAME }}/" fi ``` ## Run pg_dump This step runs `pg_dump` and saves the compressed output to a file in the Action's runner environment using the `GZIP_NAME` as defined in the previous step. ```yml - name: Run pg_dump run: | $POSTGRES/pg_dump ${{ env.DATABASE_URL }} | gzip > "${{ env.GZIP_NAME }}" ``` ## Empty bucket of old files This optional step automatically removes `.gz` files older than the retention period specified by the `RETENTION` variable. It checks the `FOLDER_NAME` directory and deletes outdated backups to save storage space. ```yml - name: Empty bucket of old files run: | THRESHOLD_DATE=$(date -d "-${{ env.RETENTION }} days" +%Y-%m-%dT%H:%M:%SZ) aws s3api list-objects --bucket ${{ env.S3_BUCKET_NAME }} --prefix "${{ env.FOLDER_NAME }}/" --query "Contents[?LastModified<'${THRESHOLD_DATE}'] | [?ends_with(Key, '.gz')].{Key: Key}" --output text | while read -r file; do aws s3 rm "s3://${{ env.S3_BUCKET_NAME }}/${file}" done ``` ## Upload to bucket This step uploads the `.gz` file created by the `pg_dump` step to the correct folder within the S3 bucket. ```yml - name: Upload to bucket run: | aws s3 cp "${{ env.GZIP_NAME }}" "${{ env.UPLOAD_PATH }}" --region ${{ env.AWS_REGION }} ``` ## Finished After committing and pushing the workflow to your GitHub repository, the Action will automatically run on the specified schedule, ensuring your Postgres backups are performed regularly. ## Restoring from a backup Restoring a `pg_dump` backup requires downloading the file from S3 and restoring it. For instructions, see [Restoring a backup with pg_restore](https://neon.com/docs/manage/backup-pg-dump#restoring-a-backup-with-pgrestore).
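Because the workflow above pipes plain `pg_dump` output through `gzip` rather than using the custom archive format (`-Fc`), a downloaded backup can also be applied directly with `psql`. A minimal sketch, with illustrative bucket, folder, and file names:

```bash
# Download a backup file from the S3 bucket (names are illustrative)
aws s3 cp "s3://neon-s3-backup/acme-co-prod-backup/February-20-2025@07:53:02.sql.gz" backup.sql.gz

# Decompress and stream the plain-format dump into the target database
gunzip -c backup.sql.gz | psql "$DATABASE_URL"
```

If you switch the workflow to produce a custom-format dump (`pg_dump -Fc`), use `pg_restore` instead, as described in the linked guide.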
--- # Source: https://neon.com/llms/manage-backups.txt # Backups > The "Backups" documentation outlines the procedures for managing and configuring backups in Neon, detailing how to create, schedule, and restore backups to ensure data integrity and availability. ## Source - [Backups HTML](https://neon.com/docs/manage/backups): The original HTML version of this documentation What you will learn: - About backup strategies - About built-in backups with instant restore - Creating and automating backups using pg_dump Related resources: - [Instant restore](https://neon.com/docs/introduction/branch-restore) Neon supports different backup strategies, which you can use separately or in combination, depending on your requirements. ## Instant restore With Neon's instant restore capability, also known as point-in-time restore or PITR, you can automatically retain a "history" of changes—ranging from 1 day up to 30 days, depending on your Neon plan. This feature lets you restore your database to any specific moment without the need for traditional database backups or separate backup automation. It's ideal if your primary concern is fast recovery after an unexpected event. With this strategy, the only required action is setting your desired restore window. Please keep in mind that increasing your restore window also increases storage, as changes to your data are retained for a longer period. To get started, see [Instant restore](https://neon.com/docs/introduction/branch-restore). ## Backups with `pg_dump` For business continuity, disaster recovery, or compliance, you can use standard Postgres tools to back up and restore your database. Neon supports traditional backup workflows using `pg_dump` and `pg_restore`. To learn how, see [Backups with pg_dump](https://neon.com/docs/manage/backup-pg-dump). ## Automated backups with `pg_dump` If you need to automate `pg_dump` backups to remote storage, we provide a two-part guide that walks you through setting up an S3 bucket and a GitHub Action to automate `pg_dump` backups on a recurring schedule. You'll also learn how to configure retention settings to manage how long `pg_dump` backups are stored before being deleted. 1. [Create an S3 bucket to store Postgres backups](https://neon.com/docs/manage/backups-aws-s3-backup-part-1) 2. [Set up a GitHub Action to perform nightly Postgres backups](https://neon.com/docs/manage/backups-aws-s3-backup-part-2) **Note** Backup & Restore Questions?: If you have questions about backups, please reach out to [Neon Support](https://console.neon.tech/app/projects?modal=support). --- # Source: https://neon.com/llms/manage-branches.txt # Manage branches > The "Manage branches" documentation outlines the procedures for creating, managing, and deleting branches within the Neon database platform, facilitating efficient database version control and organization. ## Source - [Manage branches HTML](https://neon.com/docs/manage/branches): The original HTML version of this documentation Data resides in a branch. Each Neon project is created with a [root branch](https://neon.com/docs/manage/branches#root-branch) called `production`, which is also designated as your [default branch](https://neon.com/docs/manage/branches#default-branch). You can create child branches from `production` or from previously created branches. A branch can contain multiple databases and roles. Neon's [plan allowances](https://neon.com/docs/introduction/plans) define the number of branches you can create. A child branch is a copy-on-write clone of the parent branch. 
You can modify the data in a branch without affecting the data in the parent branch. For more information about branches and how you can use them in your development workflows, see [Branching](https://neon.com/docs/introduction/branching). You can create and manage branches using the Neon Console, [Neon CLI](https://neon.com/docs/reference/neon-cli), or [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). **Important**: When working with branches, it is important to remove old and unused branches. Branches hold a lock on the data they contain, which will add to your storage usage as they age out of your project's [restore window](https://neon.com/docs/introduction/branching#restore-window). ## Create a branch To create a branch: 1. In the Neon Console, select a project. 2. Select **Branches**. 3. Click **New branch** to open the branch creation dialog. 4. Specify a branch name. 5. Select a **branch setup** option. If you're interested in schema-only branches, see [Schema-only branches](https://neon.com/docs/guides/branching-schema-only). **Note**: When creating a branch with past data, you can only specify a date and time that falls within your [restore window](https://neon.com/docs/manage/projects#configure-restore-window). 6. By default, **Automatically delete branch after** is checked with 1 day selected to help prevent unused branches from accumulating. You can choose 1 hour, 1 day, or 7 days, or uncheck to disable. This is useful for CI/CD pipelines and short-lived development environments. Note: This default only applies when creating branches through the Console; API and CLI branches have no expiration by default. Refer to our [Branch expiration guide](https://neon.com/docs/guides/branch-expiration) for details. 7. Click **Create new branch**. You are presented with the connection details for your new branch and directed to the **Branch** overview page where you are shown the details for your new branch. **Note** Postgres role passwords on branches: When creating a new branch, the branch will have the same Postgres roles and passwords as the parent branch. If you want your branch created with new role passwords, you can enable [branch protection](https://neon.com/docs/guides/protected-branches). ## View branches To view the branches in a Neon project: 1. In the Neon Console, select a project. 1. Select **Branches** to view all current branches in the project. Branch details in this table view include: - **Branch**: The branch name, which is a generated name if no name was specified when created. - **Parent**: Indicates the parent from which this branch was created, helping you track your branch hierarchy. - **Compute hours**: Number of hours the branch's compute was active so far in the current billing period. - **Primary compute**: Shows the current compute size and status for the branch's compute. - **Data size**: Indicates the logical data size of the branch, helping you monitor your plan's storage limit. Data size does not include history. - **Created by**: The account or integration that created the branch. - **Last active**: Shows when the branch's compute was last active. 1. Select a branch from the table to view details about the branch. Branch details shown on the branch page may include: - **Archive status**: This only appears if the branch was archived. For more, see [Branch archiving](https://neon.com/docs/guides/branch-archiving). - **ID**: The branch ID. Branch IDs have a `br-` prefix. 
- **Created on**: The date and time the branch was created. - **Compute hours**: The compute hours used by the branch's compute in the current billing period. - **Data size**: The logical data size of the branch. Data size does not include history. - **Parent branch**: The branch from which this branch was created (only applicable to child branches). The branch details page also includes details about the **Computes**, **Roles & Databases**, and **Child branches** that belong to the branch. All of these objects are associated with a particular branch. For information about these objects, see: - [Manage computes](https://neon.com/docs/manage/computes#view-a-compute). - [Manage roles](https://neon.com/docs/manage/roles) - [Manage databases](https://neon.com/docs/manage/databases) - [View branches](https://neon.com/docs/manage/branches#view-branches) ## Branch archiving On the Free plan, Neon automatically archives inactive branches to cost-efficient archive storage after a defined threshold. For more, see [Branch archiving](https://neon.com/docs/guides/branch-archiving). **Note**: For branches with predictable lifespans, you can set an expiration date when creating branches to automatically delete them at a specified time. This offers an alternative to archiving for temporary development and testing environments, ensuring cleanup happens exactly when needed. ## Rename a branch Neon permits renaming a branch, including your project's default branch. To rename a branch: 1. In the Neon Console, select a project. 2. Select **Branches** to view the branches for the project. 3. Select a branch from the table. 4. On the branch overview page, click the **More** drop-down menu and select **Rename**. 5. Specify a new name for the branch and click **Save**. ## Set a branch as default Each Neon project is created with a default branch called `production`, but you can designate any branch as your project's default branch. The default branch serves two key purposes: - For users on paid plans, the compute associated with the default branch is exempt from the [concurrently active compute limit](https://neon.com/docs/reference/glossary#concurrently-active-compute-limit), ensuring that it is always available. - The [Neon-Managed Vercel integration](https://neon.com/docs/guides/neon-managed-vercel-integration) creates preview deployment branches from your Neon project's default branch. For more information, see [Default branch](https://neon.com/docs/manage/branches#default-branch). To set a branch as the default branch: 1. In the Neon Console, select a project. 2. Select **Branches** to view the branches for the project. 3. Select a branch from the table. 4. On the branch overview page, click the **More** drop-down menu and select **Set as default**. 5. In the **Set as default** confirmation dialog, click **Set as default** to confirm your selection. ## Set a branch as protected This feature is available on all of Neon's paid plans, each of which supports up to five protected branches. To set a branch as protected: 1. In the Neon Console, select a project. 2. Select **Branches** to view the branches for the project. 3. Select a branch from the table. 4. On the branch overview page, click the **More** drop-down menu and select **Set as protected**. 5. In the **Set as protected** confirmation dialog, click **Set as protected** to confirm your selection. For details and configuration instructions, refer to our [Protected branches guide](https://neon.com/docs/guides/protected-branches).
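If you manage branches programmatically, the same setting can be toggled via the Neon API's branch update method. A minimal sketch, assuming the branch update endpoint accepts a `protected` attribute (verify against the current schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api)):

```bash
# Sketch: mark a branch as protected via the Neon API
# (assumes the branch update endpoint accepts a "protected" attribute)
curl -X PATCH 'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"branch": {"protected": true}}' | jq
```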
## Set a branch expiration To set or update a branch's expiration (auto-deletion TTL): 1. In the Neon Console, select a project. 2. Select **Branches** to view the branches for the project. 3. Select a branch from the table. 4. On the branch overview page, click the **Actions** drop-down menu and select **Edit expiration**. 5. Set a new expiration date and time, or toggle off "Automatically delete branch after" to remove expiration. 6. Click **Save**. For details and configuration instructions, refer to our [Branch expiration guide](https://neon.com/docs/guides/branch-expiration). ## Connect to a branch Connecting to a database in a branch requires connecting via a compute associated with the branch. The following steps describe how to connect using `psql` and a connection string obtained from the Neon Console. **Tip**: You can also query the databases in a branch from the Neon SQL Editor. For instructions, see [Query with Neon's SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). 1. In the Neon Console, select a project. 2. Find the connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Select the branch, the database, and the role you want to connect with. 3. Copy the connection string. A connection string includes your role name, the compute hostname, and database name. 4. Connect with `psql` as shown below. ```bash psql postgresql://[user]:[password]@[neon_hostname]/[dbname] ``` **Tip**: A compute hostname starts with an `ep-` prefix. You can also find a compute hostname on the **Branches** page in the Neon Console. See [View branches](https://neon.com/docs/manage/branches#view-branches). If you want to connect from an application, the **Connect to your database modal**, accessed by clicking **Connect** on the project **Dashboard**, and the [Frameworks](https://neon.com/docs/get-started/frameworks) and [Languages](https://neon.com/docs/get-started/languages) sections in the documentation provide various connection examples. ## Reset a branch from parent You can use Neon's **Reset from parent** feature to instantly update a branch with the latest schema and data from its parent. This feature can be an integral part of your CI/CD automation. You can use the Neon Console, CLI, or API. For details, see [Reset from parent](https://neon.com/docs/guides/reset-from-parent). ## Restore a branch to its own or another branch's history There are several restore operations available using Neon's instant restore feature: - Restore a branch to its own history - Restore a branch to the head of another branch - Restore a branch to the history of another branch You can use the Neon Console, CLI, or API. For more details, see [Instant restore](https://neon.com/docs/guides/branch-restore). ## Delete a branch Deleting a branch is a permanent action. Deleting a branch also deletes the databases and roles that belong to the branch as well as the compute associated with the branch. You cannot delete a branch that has child branches. The child branches must be deleted first. To delete a branch: 1. In the Neon Console, select a project. 2. Select **Branches**. 3. Select a branch from the table. 4. On the branch overview page, click the **More** drop-down menu and select **Delete**. 5. On the confirmation dialog, click **Delete**. **Tip**: For temporary branches, consider setting an expiration date when creating them to automate cleanup and reduce manual deletion overhead. 
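For scripted cleanup of branches you no longer need, the Neon CLI provides an equivalent to the Console deletion steps above. A minimal sketch — the branch name and project ID are placeholders; see [Neon CLI commands — branches](https://neon.com/docs/reference/cli-branches) for the full command reference:

```bash
# Sketch: delete a temporary branch with the Neon CLI
# (deletion is permanent; a branch's child branches must be deleted first)
neon branches delete my-temp-branch --project-id <project_id>
```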
## Check the data size You can check the logical data size for the databases on a branch by viewing the **Data size** value on the **Branches** page or the branch overview page in the Neon Console. Alternatively, you can run the following query on your branch from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any SQL client connected to your database: ```sql SELECT pg_size_pretty(sum(pg_database_size(datname))) FROM pg_database; ``` The query value may differ slightly from the **Data size** reported in the Neon Console. Data size is your logical data size. ## Branch types Neon has different branch types with different characteristics. ### Root branch A root branch is a branch without a parent branch. Each Neon project starts with a root branch named `production`, which cannot be deleted and is set as the [default branch](https://neon.com/docs/manage/branches#default-branch) for the project. Neon also supports two other types of root branches that have no parent but _can_ be deleted: - [Backup branches](https://neon.com/docs/manage/branches#backup-branch), created by instant restore operations on other root branches. - [Schema-only branches](https://neon.com/docs/manage/branches#schema-only-branch). The number of root branches allowed in a project depends on your Neon plan.

| Plan | Root branch allowance per project |
| :----- | :-------------------------------- |
| Free | 3 |
| Launch | 5 |
| Scale | 25 |

### Default branch Each Neon project has a default branch. In the Neon Console, your default branch is identified by a `DEFAULT` tag. You can designate any branch as the default branch for your project. The default branch serves two key purposes: - For users on paid plans, the compute associated with the default branch is exempt from the [concurrently active compute limit](https://neon.com/docs/reference/glossary#concurrently-active-compute-limit), ensuring that it is always available. - The [Neon-Managed Vercel integration](https://neon.com/docs/guides/neon-managed-vercel-integration) creates preview deployment branches from your Neon project's default branch. ### Non-default branch Any branch not designated as the default branch is considered a non-default branch. You can rename or delete non-default branches. - For Neon Free plan users, computes associated with **non-default branches** are suspended if you exceed the Neon Free plan 5 hours per month for **non-default branches**. - For users on paid plans, default limits prevent more than 20 concurrently active computes. Beyond that limit, additional computes will remain suspended. ### Protected branch Neon's protected branches feature implements a series of protections: - Protected branches cannot be deleted. - Protected branches cannot be [reset](https://neon.com/docs/manage/branches#reset-a-branch-from-parent). - Projects with protected branches cannot be deleted. - Computes associated with a protected branch cannot be deleted. - New passwords are automatically generated for Postgres roles on branches created from protected branches. [See below](https://neon.com/docs/manage/branches#new-passwords-generated-for-postgres-roles-on-child-branches). - With additional configuration steps, you can apply IP Allow restrictions to protected branches only. See [below](https://neon.com/docs/manage/branches#how-to-apply-ip-restrictions-to-protected-branches). - Protected branches are not [archived](https://neon.com/docs/guides/branch-archiving) due to inactivity.
Typically, a protected status is given to a branch or branches that hold production data or sensitive data. The protected branch feature is only supported on Neon's paid plans. See [Set a branch as protected](https://neon.com/docs/manage/branches#set-a-branch-as-protected). ### Schema-only branch A branch that replicates only the database schema from a source branch, without copying any of the actual data. This feature is particularly valuable when working with sensitive information. Rather than creating branches that include confidential data, you can duplicate just the database structure and then populate it with your own data. Schema-only branches are [root branches](https://neon.com/docs/manage/branches#root-branch), meaning they have no parent. As a root branch, each schema-only branch starts an independent line of data in a Neon project. See [Schema-only branches](https://neon.com/docs/guides/branching-schema-only). ### Backup branch A branch created by an [instant restore](https://neon.com/docs/manage/branches#branch-restore) operation. When you restore a branch from a particular point in time, the current branch is saved as a backup branch. Performing a restore operation on a root branch creates a backup branch without a parent branch (a root branch). See [Instant restore](https://neon.com/docs/guides/branch-restore). ### Branch with expiration A branch with an expiration timestamp is automatically deleted when the expiration time is reached. Any branch can have an expiration timestamp added or removed at any time. This feature is particularly useful for temporary development and testing environments. ## Branching with the Neon CLI The Neon CLI supports creating and managing branches. For instructions and a branching guide, see [Neon CLI commands — branches](https://neon.com/docs/reference/cli-branches). ## Branching with the Neon API Branch actions performed in the Neon Console can also be performed using the Neon API. The following examples demonstrate how to create, view, and delete branches using the Neon API. For other branch-related API methods, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). **Note**: The API examples that follow may not show all of the user-configurable request body attributes that are available to you. To view all of the attributes for a particular method, refer to the method's request body schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). The `jq` option specified in each example is an optional third-party tool that formats the `JSON` response, making it easier to read. For information about this utility, see [jq](https://stedolan.github.io/jq/). ### Prerequisites A Neon API request requires an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). In the examples shown below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request. **Note**: To learn more about the types of API keys you can create — personal, organization, or project-scoped — see [Manage API Keys](https://neon.com/docs/manage/api-keys). ### Create a branch with the API The following Neon API method creates a branch. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranch).
```http
POST /projects/{project_id}/branches
```

The API method appears as follows when specified in a cURL command. The `endpoints` attribute creates a compute, which is required to connect to the branch. A branch can be created with or without a compute. The `branch` attribute specifies the parent branch.

**Note**: This method does not require a request body. Without a request body, the method creates a branch from the project's default branch, and a compute is not created.

```bash
curl 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoints": [
    {
      "type": "read_write"
    }
  ],
  "branch": {
    "parent_id": "br-wispy-dew-591433"
  }
}' | jq
```

- The `project_id` for a Neon project is found on the **Settings** page in the Neon Console, or you can find it by listing the projects for your Neon account using the Neon API.
- The `parent_id` can be obtained by listing the branches for your project. See [List branches](https://neon.com/docs/manage/branches#list-branches-with-the-api). The `parent_id` is the `id` of the branch you are branching from. A branch `id` has a `br-` prefix. You can branch from your Neon project's default branch or a previously created branch.

The response body includes information about the branch, the branch's compute, and the `create_branch` and `start_compute` operations that were initiated.

Details: Response body

For attribute definitions, find the [Create branch](https://api-docs.neon.tech/reference/createprojectbranch) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "branch": {
    "id": "br-curly-wave-af4i4oeu",
    "project_id": "dry-heart-13671059",
    "parent_id": "br-morning-meadow-afu2s1jl",
    "parent_lsn": "0/1FA22C0",
    "name": "br-curly-wave-af4i4oeu",
    "current_state": "init",
    "pending_state": "ready",
    "state_changed_at": "2025-08-04T07:13:09Z",
    "creation_source": "console",
    "primary": false,
    "default": false,
    "protected": false,
    "cpu_used_sec": 0,
    "compute_time_seconds": 0,
    "active_time_seconds": 0,
    "written_data_bytes": 0,
    "data_transfer_bytes": 0,
    "created_at": "2025-08-04T07:13:09Z",
    "updated_at": "2025-08-04T07:13:09Z",
    "created_by": {
      "name": "your@email.com",
      "image": ""
    },
    "init_source": "parent-data"
  },
  "endpoints": [
    {
      "host": "ep-cool-darkness-123456.c-2.us-west-2.aws.neon.tech",
      "id": "ep-cool-darkness-123456",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-curly-wave-af4i4oeu",
      "autoscaling_limit_min_cu": 0.25,
      "autoscaling_limit_max_cu": 0.25,
      "region_id": "aws-us-west-2",
      "type": "read_write",
      "current_state": "init",
      "pending_state": "active",
      "settings": {},
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "creation_source": "console",
      "created_at": "2025-08-04T07:13:09Z",
      "updated_at": "2025-08-04T07:13:09Z",
      "proxy_host": "c-2.us-west-2.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-neonvm"
    }
  ],
  "operations": [
    {
      "id": "8289b00a-4341-48d2-b3f1-d0c8dbb7e806",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-curly-wave-af4i4oeu",
      "action": "create_branch",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-08-04T07:13:09Z",
      "updated_at": "2025-08-04T07:13:09Z",
      "total_duration_ms": 0
    },
    {
      "id": "a3c9baa4-6732-4774-a141-9d03396babce",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-curly-wave-af4i4oeu",
      "endpoint_id": "ep-cool-darkness-123456",
      "action": "start_compute",
      "status": "scheduling",
      "failures_count": 0,
      "created_at": "2025-08-04T07:13:09Z",
      "updated_at": "2025-08-04T07:13:09Z",
      "total_duration_ms": 0
    }
  ],
  "roles": [
    {
      "branch_id": "br-curly-wave-af4i4oeu",
      "name": "alex",
      "protected": false,
      "created_at": "2025-08-04T07:07:55Z",
      "updated_at": "2025-08-04T07:07:55Z"
    }
  ],
  "databases": [
    {
      "id": 2886327,
      "branch_id": "br-curly-wave-af4i4oeu",
      "name": "dbname",
      "owner_name": "alex",
      "created_at": "2025-08-04T07:07:55Z",
      "updated_at": "2025-08-04T07:07:55Z"
    }
  ],
  "connection_uris": [
    {
      "connection_uri": "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.c-2.us-west-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require",
      "connection_parameters": {
        "database": "dbname",
        "password": "AbC123dEf",
        "role": "alex",
        "host": "ep-cool-darkness-123456.c-2.us-west-2.aws.neon.tech",
        "pooler_host": "ep-cool-darkness-123456-pooler.c-2.us-west-2.aws.neon.tech"
      }
    }
  ]
}
```
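If you use the branch expiration feature described earlier under **Branch with expiration**, you can set the expiration at creation time. The following is a minimal sketch, assuming the `expires_at` request attribute documented for branch expiration (an RFC 3339 timestamp); the project ID, parent ID, branch name, and timestamp are placeholders:

```bash
# Create a branch that is automatically deleted at the specified time.
# Assumes the `expires_at` attribute supported by the create-branch
# endpoint; all IDs and the timestamp below are placeholders.
curl 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "branch": {
    "parent_id": "br-wispy-dew-591433",
    "name": "ephemeral-test",
    "expires_at": "2025-08-11T00:00:00Z"
  }
}' | jq
```

Verify the exact attribute name against the create-branch request body schema in the Neon API reference before relying on it.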
"endpoint_id": "ep-cool-darkness-123456", "action": "start_compute", "status": "scheduling", "failures_count": 0, "created_at": "2025-08-04T07:13:09Z", "updated_at": "2025-08-04T07:13:09Z", "total_duration_ms": 0 } ], "roles": [ { "branch_id": "br-curly-wave-af4i4oeu", "name": "alex", "protected": false, "created_at": "2025-08-04T07:07:55Z", "updated_at": "2025-08-04T07:07:55Z" } ], "databases": [ { "id": 2886327, "branch_id": "br-curly-wave-af4i4oeu", "name": "dbname", "owner_name": "alex", "created_at": "2025-08-04T07:07:55Z", "updated_at": "2025-08-04T07:07:55Z" } ], "connection_uris": [ { "connection_uri": "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.c-2.us-west-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require", "connection_parameters": { "database": "dbname", "password": "AbC123dEf", "role": "alex", "host": "ep-cool-darkness-123456.c-2.us-west-2.aws.neon.tech", "pooler_host": "ep-cool-darkness-123456-pooler.c-2.us-west-2.aws.neon.tech" } } ] } ``` ### List branches with the API The following Neon API method lists branches for the specified project. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/listprojectbranches). ```http GET /projects/{project_id}/branches ``` The API method appears as follows when specified in a cURL command: ```bash curl 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` The `project_id` for a Neon project is found on the **Settings** page in the Neon Console, or you can find it by listing the projects for your Neon account using the Neon API. The response body lists the project's default branch and any child branches. The name of the default branch in this example is `main`. Details: Response body For attribute definitions, find the [List branches](https://api-docs.neon.tech/reference/listprojectbranches) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "branches": [ { "id": "br-curly-wave-af4i4oeu", "project_id": "dry-heart-13671059", "parent_id": "br-morning-meadow-afu2s1jl", "parent_lsn": "0/1FA22C0", "parent_timestamp": "2025-08-04T07:08:48Z", "name": "br-curly-wave-af4i4oeu", "current_state": "ready", "state_changed_at": "2025-08-04T07:13:09Z", "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-08-04T07:13:09Z", "updated_at": "2025-08-04T07:18:15Z", "created_by": { "name": "your@email.com", "image": "" }, "init_source": "parent-data" }, { "id": "br-morning-meadow-afu2s1jl", "project_id": "dry-heart-13671059", "name": "main", "current_state": "ready", "state_changed_at": "2025-08-04T07:07:58Z", "logical_size": 30777344, "creation_source": "console", "primary": true, "default": true, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-08-04T07:07:55Z", "updated_at": "2025-08-04T07:13:11Z", "created_by": { "name": "your@email.com", "image": "" }, "init_source": "parent-data" } ], "annotations": {}, "pagination": { "sort_by": "updated_at", "sort_order": "DESC" } } ``` ### Delete a branch with the API The following Neon API method deletes the specified branch. 
To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/deleteprojectbranch).

```http
DELETE /projects/{project_id}/branches/{branch_id}
```

The API method appears as follows when specified in a cURL command:

```bash
curl -X 'DELETE' \
  'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-curly-wave-af4i4oeu' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

- The `project_id` for a Neon project is found on the **Settings** page in the Neon Console, or you can find it by listing the projects for your Neon account using the Neon API.
- The `branch_id` can be found by listing the branches for your project. The `branch_id` is the `id` of a branch. A branch `id` has a `br-` prefix. See [List branches](https://neon.com/docs/manage/branches#list-branches-with-the-api).

The response body shows information about the branch being deleted and the `suspend_compute` and `delete_timeline` operations that were initiated.

Details: Response body

For attribute definitions, find the [Delete branch](https://api-docs.neon.tech/reference/deleteprojectbranch) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "branch": {
    "id": "br-curly-wave-af4i4oeu",
    "project_id": "dry-heart-13671059",
    "parent_id": "br-morning-meadow-afu2s1jl",
    "parent_lsn": "0/1FA22C0",
    "parent_timestamp": "2025-08-04T07:08:48Z",
    "name": "br-curly-wave-af4i4oeu",
    "current_state": "ready",
    "pending_state": "storage_deleted",
    "state_changed_at": "2025-08-04T07:13:09Z",
    "logical_size": 30851072,
    "creation_source": "console",
    "primary": false,
    "default": false,
    "protected": false,
    "cpu_used_sec": 0,
    "compute_time_seconds": 0,
    "active_time_seconds": 0,
    "written_data_bytes": 0,
    "data_transfer_bytes": 0,
    "created_at": "2025-08-04T07:13:09Z",
    "updated_at": "2025-08-04T07:21:55Z",
    "created_by": {
      "name": "your@email.com",
      "image": ""
    },
    "init_source": "parent-data"
  },
  "operations": [
    {
      "id": "eb85073d-53fc-4d37-a32a-ca9e9ea1eeb1",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-curly-wave-af4i4oeu",
      "endpoint_id": "ep-soft-art-af5jvg5j",
      "action": "suspend_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-08-04T07:21:55Z",
      "updated_at": "2025-08-04T07:21:55Z",
      "total_duration_ms": 0
    },
    {
      "id": "586af342-1ffe-4e0a-9e11-326db1164ad7",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-curly-wave-af4i4oeu",
      "action": "delete_timeline",
      "status": "scheduling",
      "failures_count": 0,
      "created_at": "2025-08-04T07:21:55Z",
      "updated_at": "2025-08-04T07:21:55Z",
      "total_duration_ms": 0
    }
  ]
}
```

You can verify that a branch is deleted by listing the branches for your project. See [List branches](https://neon.com/docs/manage/branches#list-branches-with-the-api). The deleted branch should no longer be listed.

---

# Source: https://neon.com/llms/manage-computes.txt

# Manage computes

> The "Manage Computes" documentation for Neon outlines procedures for creating, configuring, and managing compute resources within the Neon platform, detailing steps for optimizing compute performance and resource allocation.

## Source

- [Manage computes HTML](https://neon.com/docs/manage/computes): The original HTML version of this documentation

A compute is a virtualized service that runs applications. In Neon, a compute runs Postgres.
Each project has a primary read-write compute for its [default branch](https://neon.com/docs/reference/glossary#default-branch). Neon supports both read-write and [read replica](https://neon.com/docs/introduction/read-replicas) computes. A branch can have one primary (read-write) compute and multiple read replica computes.

A compute is required to connect to a Neon branch (where your database resides) from a client or application. To connect to a database in a branch, you must use a compute associated with that branch. The following diagram illustrates how an application connects to a branch via its compute:

```text
Project
 |---- default branch (main) ---- compute <--- application/client
 |          |
 |          |---- database
 |
 |---- child branch ---------- compute <--- application/client
            |
            |---- database
```

Your Neon plan determines the resources (vCPUs and RAM) available to a compute. The Neon Free plan supports computes with up to 2 vCPUs and 8 GB of RAM. Paid plans offer larger compute sizes. Larger computes consume more compute hours over the same period of active time than smaller computes.

## View a compute

A compute is associated with a branch. To view a compute, select **Branches** in the Neon Console, and select a branch. If the branch has a compute, it is shown on the **Computes** tab on the branch page.

Compute details shown on the **Computes** tab include:

- The type of compute, which can be **Primary** (read-write) or **Read Replica** (read-only).
- The compute status, typically **Active** or **Idle**.
- **Endpoint ID**: The compute endpoint's ID, which always starts with an `ep-` prefix; for example: `ep-quiet-butterfly-w2qres1h`
- **Size**: The size of the compute. Shows autoscaling minimum and maximum vCPU values if autoscaling is enabled.
- **Last active**: The date and time the compute was last active.

**Edit**, **Monitor**, and **Connect** actions for a compute can be accessed from the **Computes** tab.

## Create a compute

You can only create a single primary read-write compute for a branch that does not already have one, but a branch can have multiple read replica computes. To create a compute:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. On the **Computes** tab, click **Add a compute** or **Add Read Replica** if you already have a primary read-write compute.
1. On the **Add new compute** drawer or **Add read replica** drawer, specify your compute settings, and click **Add**.

Selecting the **Read replica** compute type creates a [read replica](https://neon.com/docs/introduction/read-replicas).

## Edit a compute

You can edit a compute to change the [compute size](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration) or [scale to zero](https://neon.com/docs/manage/computes#scale-to-zero-configuration) configuration.

To edit a compute:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. From the **Computes** tab, select **Edit** for the compute you want to edit. The **Edit** drawer opens, letting you modify settings such as compute size, the autoscaling configuration, and your scale to zero setting.
1. Once you've made your changes, click **Save**. All changes take immediate effect.

For information about selecting an appropriate compute size or autoscaling configuration, see [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).
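You can also make these edits programmatically. The following is a minimal sketch using the [Update compute endpoint](https://api-docs.neon.tech/reference/updateprojectendpoint) (covered later in this document) to set an autoscaling range and disable scale to zero. The project and endpoint IDs are placeholders, and the `-1` suspend timeout follows the API reference's convention for never suspending; verify attribute names and values against the request body schema:

```bash
# Set an autoscaling range of 1-4 CU and disable scale to zero (-1).
# Project ID and endpoint ID are placeholders.
curl -X 'PATCH' \
  'https://console.neon.tech/api/v2/projects/autumn-lake-30024670/endpoints/ep-misty-morning-a1pfa4ez' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 4,
    "suspend_timeout_seconds": -1
  }
}' | jq
```

As described in the next section, changing these settings restarts the compute and temporarily disconnects existing connections.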
### What happens to the compute when making changes

Some key points to understand about how your compute responds when you make changes to your compute settings:

- Changing the size of your fixed compute restarts the compute and _temporarily disconnects all existing connections_.

  **Note**: When your compute resizes automatically as part of the autoscaling feature, there are no restarts or disconnects; it just scales.

- Editing minimum or maximum autoscaling sizes also requires a restart; existing connections are temporarily disconnected.
- If you disable scale to zero, you may need to restart your compute manually to get the latest compute-related release updates from Neon if updates are not applied automatically by a [scheduled update](https://neon.com/docs/manage/updates). Scheduled updates are applied according to certain criteria, so not all computes receive these updates automatically. See [Restart a compute](https://neon.com/docs/manage/computes#restart-a-compute).

To avoid prolonged interruptions resulting from compute restarts, we recommend configuring your clients and applications to reconnect automatically in case of a dropped connection. See [Handling connection disruptions](https://neon.com/docs/manage/updates#handling-connection-disruptions).

### Compute size and autoscaling configuration

You can change compute size settings when [editing a compute](https://neon.com/docs/manage/computes#edit-a-compute).

_Compute size_ is the number of Compute Units (CUs) assigned to a Neon compute. The number of CUs determines the processing capacity of the compute. One CU has 1 vCPU and 4 GB of RAM, 2 CUs have 2 vCPUs and 8 GB of RAM, and so on. The amount of RAM in GB is always 4 times the number of vCPUs, as shown in the table below.

| Compute Units | vCPU | RAM    |
| :------------ | :--- | :----- |
| .25           | .25  | 1 GB   |
| .5            | .5   | 2 GB   |
| 1             | 1    | 4 GB   |
| 2             | 2    | 8 GB   |
| 3             | 3    | 12 GB  |
| 4             | 4    | 16 GB  |
| 5             | 5    | 20 GB  |
| 6             | 6    | 24 GB  |
| 7             | 7    | 28 GB  |
| 8             | 8    | 32 GB  |
| 9             | 9    | 36 GB  |
| 10            | 10   | 40 GB  |
| 11            | 11   | 44 GB  |
| 12            | 12   | 48 GB  |
| 13            | 13   | 52 GB  |
| 14            | 14   | 56 GB  |
| 15            | 15   | 60 GB  |
| 16            | 16   | 64 GB  |
| 18            | 18   | 72 GB  |
| 20            | 20   | 80 GB  |
| 22            | 22   | 88 GB  |
| 24            | 24   | 96 GB  |
| 26            | 26   | 104 GB |
| 28            | 28   | 112 GB |
| 30            | 30   | 120 GB |
| 32            | 32   | 128 GB |
| 34            | 34   | 136 GB |
| 36            | 36   | 144 GB |
| 38            | 38   | 152 GB |
| 40            | 40   | 160 GB |
| 42            | 42   | 168 GB |
| 44            | 44   | 176 GB |
| 46            | 46   | 184 GB |
| 48            | 48   | 192 GB |
| 50            | 50   | 200 GB |
| 52            | 52   | 208 GB |
| 54            | 54   | 216 GB |
| 56            | 56   | 224 GB |

Neon supports fixed-size and autoscaling compute configurations.

- **Fixed size:** Select a fixed compute size ranging from .25 CUs to 56 CUs. A fixed-size compute does not scale to meet workload demand.
- **Autoscaling:** Specify a minimum and maximum compute size. Neon scales the compute size up and down within the selected compute size boundaries in response to the current load. Currently, the _Autoscaling_ feature supports a range of 1/4 (.25) CU to 16 CUs. The 1/4 CU and 1/2 CU settings are _shared compute_. For information about how Neon implements the _Autoscaling_ feature, see [Autoscaling](https://neon.com/docs/introduction/autoscaling).

**Info** monitoring autoscaling: For information about monitoring your compute as it scales up and down, see [Monitor autoscaling](https://neon.com/docs/guides/autoscaling-guide#monitor-autoscaling).
### How to size your compute

The size of your compute determines the amount of frequently accessed data you can cache in memory and the maximum number of simultaneous connections you can support. As a result, if your compute size is too small, this can lead to suboptimal query performance and connection limit issues.

In Postgres, the `shared_buffers` setting defines the amount of data that can be held in memory. In Neon, the `shared_buffers` parameter [scales with compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size), and Neon also uses a Local File Cache (LFC) to extend the amount of memory available for caching data. The LFC can use up to 75% of your compute's RAM.

The Postgres `max_connections` setting defines your compute's maximum simultaneous connection limit and is set according to your compute size configuration. The following table outlines the vCPU, RAM, LFC size (75% of RAM), and the `max_connections` limit for each compute size that Neon supports. To understand how `max_connections` is determined for an autoscaling configuration, see [Parameter settings that differ by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size).

**Note**: Compute size support differs by [Neon plan](https://neon.com/docs/introduction/plans). Autoscaling is supported up to 16 CU. Neon supports fixed compute sizes (no autoscaling) for compute sizes larger than 16 CU.

| Compute Size (CU) | vCPU | RAM (GB) | LFC size (GB) | max_connections |
| :---------------- | :--- | :------- | :------------ | :-------------- |
| 0.25              | 0.25 | 1        | 0.75          | 112             |
| 0.50              | 0.50 | 2        | 1.5           | 225             |
| 1                 | 1    | 4        | 3             | 450             |
| 2                 | 2    | 8        | 6             | 901             |
| 3                 | 3    | 12       | 9             | 1351            |
| 4                 | 4    | 16       | 12            | 1802            |
| 5                 | 5    | 20       | 15            | 2253            |
| 6                 | 6    | 24       | 18            | 2703            |
| 7                 | 7    | 28       | 21            | 3154            |
| 8                 | 8    | 32       | 24            | 3604            |
| 9                 | 9    | 36       | 27            | 4000            |
| 10                | 10   | 40       | 30            | 4000            |
| 11                | 11   | 44       | 33            | 4000            |
| 12                | 12   | 48       | 36            | 4000            |
| 13                | 13   | 52       | 39            | 4000            |
| 14                | 14   | 56       | 42            | 4000            |
| 15                | 15   | 60       | 45            | 4000            |
| 16                | 16   | 64       | 48            | 4000            |
| 18                | 18   | 72       | 54            | 4000            |
| 20                | 20   | 80       | 60            | 4000            |
| 22                | 22   | 88       | 66            | 4000            |
| 24                | 24   | 96       | 72            | 4000            |
| 26                | 26   | 104      | 78            | 4000            |
| 28                | 28   | 112      | 84            | 4000            |
| 30                | 30   | 120      | 90            | 4000            |
| 32                | 32   | 128      | 96            | 4000            |
| 34                | 34   | 136      | 102           | 4000            |
| 36                | 36   | 144      | 108           | 4000            |
| 38                | 38   | 152      | 114           | 4000            |

When selecting a compute size, ideally, you want to keep as much of your dataset in memory as possible. This improves performance by reducing the number of reads from storage. If your dataset is not too large, select a compute size that will hold the entire dataset in memory. For larger datasets that cannot be fully held in memory, select a compute size that can hold your [working set](https://neon.com/docs/reference/glossary#working-set). Selecting a compute size for a working set involves advanced steps, which are outlined below. See [Sizing your compute based on the working set](https://neon.com/docs/manage/computes#sizing-your-compute-based-on-the-working-set).

Regarding connection limits, you'll want a compute size that can support your anticipated maximum number of concurrent connections. If you are using **Autoscaling**, it is important to remember that your `max_connections` setting is based on both your minimum and the maximum compute size.
See [Parameter settings that differ by compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size) for details. To avoid any `max_connections` constraints, you can use a pooled connection with your application, which supports up to 10,000 concurrent user connections. See [Connection pooling](https://neon.com/docs/connect/connection-pooling).

#### Sizing your compute based on the working set

If it's not possible to hold your entire dataset in memory, the next best option is to ensure that your working set is in memory. A working set is your frequently accessed or recently used data and indexes. To determine whether your working set is fully in memory, you can query the cache hit ratio for your Neon compute. The cache hit ratio tells you how many queries are served from memory. Queries not served from memory bypass the cache to retrieve data from Neon storage (the [Pageserver](https://neon.com/docs/reference/glossary#pageserver)), which can affect query performance.

As mentioned above, Neon computes use a Local File Cache (LFC) to extend Postgres shared buffers. You can monitor the Local File Cache hit rate and your working set size from Neon's **Monitoring** page, where you'll find the following charts:

- [Local file cache hit rate](https://neon.com/docs/introduction/monitoring-page#local-file-cache-hit-rate)
- [Working set size](https://neon.com/docs/introduction/monitoring-page#working-set-size)

Neon also provides a [neon](https://neon.com/docs/extensions/neon) extension with a `neon_stat_file_cache` view that you can use to query the cache hit ratio for your compute's Local File Cache. For more information, see [The neon extension](https://neon.com/docs/extensions/neon).
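As a quick check from the command line, you can inspect that view directly. This is a minimal sketch; it assumes the `neon` extension is installed in the database you connect to (see the neon extension docs linked above) and that `$DATABASE_URL` is a placeholder for your Neon connection string:

```bash
# Inspect Local File Cache statistics, including hit/miss counters,
# from the neon_stat_file_cache view mentioned above.
# $DATABASE_URL is a placeholder for your Neon connection string.
psql "$DATABASE_URL" -c "SELECT * FROM neon_stat_file_cache;"
```

A consistently low hit ratio suggests your working set does not fit in the LFC and a larger compute size may help.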
#### Autoscaling considerations

Autoscaling is most effective when your data (either your full dataset or your working set) can be fully cached in memory on the minimum compute size in your autoscaling configuration.

Consider this scenario: If your data size is approximately 6 GB, starting with a compute size of .25 CU can lead to suboptimal performance because your data cannot be adequately cached. While your compute _will_ scale up from .25 CU on demand, you may experience poor query performance until your compute scales up and fully caches your working set. You can avoid this issue if your minimum compute size can hold your working set in memory.

As mentioned above, your `max_connections` setting is based on both your minimum and maximum compute size settings. To avoid any `max_connections` constraints, you can use a pooled connection for your application. See [Connection pooling](https://neon.com/docs/connect/connection-pooling).

### Scale to zero configuration

Neon's _Scale to Zero_ feature automatically transitions a compute into an idle state after 5 minutes of inactivity. You can disable scale to zero to maintain an "always-active" compute. An "always-active" configuration eliminates the few hundred milliseconds of latency required to reactivate a compute but is likely to increase your compute time usage on systems where the database is not always active. For more information, refer to [Configuring scale to zero for Neon computes](https://neon.com/docs/guides/scale-to-zero-guide).

**Important**: If you disable scale to zero, you may need to restart your compute manually to get the latest compute-related release updates from Neon if updates are not applied automatically by a [scheduled update](https://neon.com/docs/manage/updates). Scheduled updates are applied according to certain criteria, so not all computes receive these updates automatically. See [Restart a compute](https://neon.com/docs/manage/computes#restart-a-compute).

## Restart a compute

It is sometimes necessary to restart a compute. Reasons for restarting a compute might include:

- Activating new limits after upgrading to a paid plan
- Getting the latest compute-related updates, which Neon typically releases weekly
- Accessing a recently released Postgres extension or extension version
- Resolving performance issues or unexpected behavior

Restarting ensures your compute is running with the latest configurations and improvements.

**Important**: Restarting a compute interrupts any connections currently using the compute. To avoid prolonged interruptions resulting from compute restarts, we recommend configuring your clients and applications to reconnect automatically in case of a dropped connection.

You can restart a compute using these methods:

- Use the **Restart compute** option in the Neon Console. Navigate to the **Branches** page from your project dashboard, and select a branch. On the **Computes** tab, select **Restart compute** from the menu.
- Issue a [Restart compute endpoint](https://api-docs.neon.tech/reference/restartprojectendpoint) call using the Neon API. You can do this directly from the Neon API Reference using the **Try It!** feature or via the command line with a cURL command similar to the one shown below. You'll need your [project ID](https://neon.com/docs/reference/glossary#project-id), compute [endpoint ID](https://neon.com/docs/reference/glossary#endpoint-id), and an [API key](https://neon.com/docs/manage/api-keys#create-an-api-key).

  ```bash
  curl --request POST \
    --url https://console.neon.tech/api/v2/projects/cool-forest-86753099/endpoints/ep-calm-flower-a5b75h79/restart \
    --header 'accept: application/json' \
    --header "authorization: Bearer $NEON_API_KEY"
  ```

  **Note**: The [Restart compute endpoint](https://api-docs.neon.tech/reference/restartprojectendpoint) API only works on an active compute. If your compute is idle, you can wake it up with a query or the [Start compute endpoint](https://api-docs.neon.tech/reference/startprojectendpoint) API.

- Stop activity on your compute (stop running queries) and wait for your compute to suspend due to inactivity. By default, Neon suspends a compute after 5 minutes of inactivity. You can watch the status of your compute on the **Branches** page in the Neon Console. Select your branch and monitor your compute's **Status** field. Wait for it to report an `Idle` status. The compute will restart the next time it's accessed, and the status will change to `Active`.

## Delete a compute

A branch can have a single read-write compute and multiple read replica computes. You can delete any of these computes from a branch. However, be aware that a compute is required to connect to a branch and access its data. If you delete a compute and add it back later, the new compute will have different connection details.

To delete a compute:

1. In the Neon Console, select **Branches**.
1. Select a branch.
1. On the **Computes** tab, click **Edit** for the compute you want to delete.
1. At the bottom of the **Edit compute** drawer, click **Delete compute**.

## Manage computes with the Neon API

Compute actions performed in the Neon Console can also be performed using the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api).
The following examples demonstrate how to create, view, update, and delete computes using the Neon API. For other compute-related API methods, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

**Note**: The API examples that follow may not show all of the user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

The `jq` option specified in each example is an optional third-party tool that formats the `JSON` response, making it easier to read. For information about this utility, see [jq](https://stedolan.github.io/jq/).

### Prerequisites

A Neon API request requires an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). In the cURL examples below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request.

**Note**: To learn more about the types of API keys you can create — personal, organization, or project-scoped — see [Manage API Keys](https://neon.com/docs/manage/api-keys).

### Create a compute with the API

The following Neon API method creates a compute.

```http
POST /projects/{project_id}/endpoints
```

The API method appears as follows when specified in a cURL command. A compute must be associated with a branch. Neon supports read-write and read replica computes. A branch can have a single primary read-write compute but supports multiple read replica computes, so when creating a read-write compute, the branch you specify cannot already have one.

```bash
curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/autumn-lake-30024670/endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-dry-glitter-a1rh0x6q",
    "type": "read_write"
  }
}'
```

Details: Response body

For attribute definitions, find the [Create compute](https://api-docs.neon.tech/reference/createprojectendpoint) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "endpoint": {
    "host": "ep-misty-morning-a1pfa4ez.ap-southeast-1.aws.neon.tech",
    "id": "ep-misty-morning-a1pfa4ez",
    "project_id": "autumn-lake-30024670",
    "branch_id": "br-dry-glitter-a1rh0x6q",
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 2,
    "region_id": "aws-ap-southeast-1",
    "type": "read_write",
    "current_state": "init",
    "pending_state": "active",
    "settings": {},
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "creation_source": "console",
    "created_at": "2025-08-03T17:40:19Z",
    "updated_at": "2025-08-03T17:40:19Z",
    "proxy_host": "ap-southeast-1.aws.neon.tech",
    "suspend_timeout_seconds": 0,
    "provisioner": "k8s-neonvm"
  },
  "operations": [
    {
      "id": "d6ef3cc2-663b-440a-88e7-ea6a59ea2c6a",
      "project_id": "autumn-lake-30024670",
      "branch_id": "br-dry-glitter-a1rh0x6q",
      "endpoint_id": "ep-misty-morning-a1pfa4ez",
      "action": "start_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-08-03T17:40:19Z",
      "updated_at": "2025-08-03T17:40:19Z",
      "total_duration_ms": 0
    }
  ]
}
```
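To add a [read replica](https://neon.com/docs/introduction/read-replicas) to a branch that already has a primary read-write compute, create an endpoint with the `read_only` type. A minimal sketch, reusing the placeholder project and branch IDs from the example above:

```bash
# Create a read replica compute on a branch that already has a
# primary read-write compute. IDs below are placeholders.
curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/autumn-lake-30024670/endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-dry-glitter-a1rh0x6q",
    "type": "read_only"
  }
}'
```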
### List computes with the API

The following Neon API method lists computes for the specified project. A compute belongs to a Neon project. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/listprojectendpoints).

```http
GET /projects/{project_id}/endpoints
```

The API method appears as follows when specified in a cURL command:

```bash
curl -X 'GET' \
  'https://console.neon.tech/api/v2/projects/autumn-lake-30024670/endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [List computes](https://api-docs.neon.tech/reference/listprojectendpoints) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "endpoints": [
    {
      "host": "ep-misty-morning-a1pfa4ez.ap-southeast-1.aws.neon.tech",
      "id": "ep-misty-morning-a1pfa4ez",
      "project_id": "autumn-lake-30024670",
      "branch_id": "br-dry-glitter-a1rh0x6q",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 2,
      "region_id": "aws-ap-southeast-1",
      "type": "read_write",
      "current_state": "idle",
      "settings": {},
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2025-08-03T17:40:20Z",
      "creation_source": "console",
      "created_at": "2025-08-03T17:40:19Z",
      "updated_at": "2025-08-03T17:45:24Z",
      "suspended_at": "2025-08-03T17:45:24Z",
      "proxy_host": "ap-southeast-1.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-neonvm"
    },
    {
      "host": "ep-autumn-frost-a1wlmval.ap-southeast-1.aws.neon.tech",
      "id": "ep-autumn-frost-a1wlmval",
      "project_id": "autumn-lake-30024670",
      "branch_id": "br-dark-bar-a11jneqm",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 2,
      "region_id": "aws-ap-southeast-1",
      "type": "read_write",
      "current_state": "idle",
      "settings": {},
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2025-08-03T17:34:40Z",
      "creation_source": "console",
      "created_at": "2025-08-03T11:27:50Z",
      "updated_at": "2025-08-03T17:41:11Z",
      "suspended_at": "2025-08-03T17:41:11Z",
      "proxy_host": "ap-southeast-1.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-neonvm"
    }
  ]
}
```

### Update a compute with the API

The following Neon API method updates the specified compute. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/updateprojectendpoint).

```http
PATCH /projects/{project_id}/endpoints/{endpoint_id}
```

The API method appears as follows when specified in a cURL command. The example reassigns the compute to another branch by changing the `branch_id`. The branch that you specify cannot have an existing read-write compute. A compute must be associated with a branch, and a branch can have only one primary read-write compute. Multiple read replica computes are allowed.

```bash
curl -X 'PATCH' \
  'https://console.neon.tech/api/v2/projects/autumn-lake-30024670/endpoints/ep-misty-morning-a1pfa4ez' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-raspy-pine-a1hspnzv"
  }
}'
```

Details: Response body

For attribute definitions, find the [Update compute](https://api-docs.neon.tech/reference/updateprojectendpoint) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json
{
  "endpoint": {
    "host": "ep-misty-morning-a1pfa4ez.ap-southeast-1.aws.neon.tech",
    "id": "ep-misty-morning-a1pfa4ez",
    "project_id": "autumn-lake-30024670",
    "branch_id": "br-raspy-pine-a1hspnzv",
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 2,
    "region_id": "aws-ap-southeast-1",
    "type": "read_write",
    "current_state": "idle",
    "settings": {},
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "last_active": "2025-08-03T17:40:20Z",
    "creation_source": "console",
    "created_at": "2025-08-03T17:40:19Z",
    "updated_at": "2025-08-03T17:49:01Z",
    "suspended_at": "2025-08-03T17:45:24Z",
    "proxy_host": "ap-southeast-1.aws.neon.tech",
    "suspend_timeout_seconds": 0,
    "provisioner": "k8s-neonvm"
  },
  "operations": []
}
```

### Delete a compute with the API

The following Neon API method deletes the specified compute. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/deleteprojectendpoint).

```http
DELETE /projects/{project_id}/endpoints/{endpoint_id}
```

The API method appears as follows when specified in a cURL command.

```bash
curl -X 'DELETE' \
  'https://console.neon.tech/api/v2/projects/autumn-lake-30024670/endpoints/ep-misty-morning-a1pfa4ez' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [Delete compute](https://api-docs.neon.tech/reference/deleteprojectendpoint) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "endpoint": {
    "host": "ep-misty-morning-a1pfa4ez.ap-southeast-1.aws.neon.tech",
    "id": "ep-misty-morning-a1pfa4ez",
    "project_id": "autumn-lake-30024670",
    "branch_id": "br-raspy-pine-a1hspnzv",
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 2,
    "region_id": "aws-ap-southeast-1",
    "type": "read_write",
    "current_state": "idle",
    "settings": {},
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "last_active": "2025-08-03T17:40:20Z",
    "creation_source": "console",
    "created_at": "2025-08-03T17:40:19Z",
    "updated_at": "2025-08-03T17:52:39Z",
    "suspended_at": "2025-08-03T17:45:24Z",
    "proxy_host": "ap-southeast-1.aws.neon.tech",
    "suspend_timeout_seconds": 0,
    "provisioner": "k8s-neonvm"
  },
  "operations": []
}
```

## Compute-related issues

This section outlines compute-related issues you may encounter and possible resolutions.

### No space left on device

You may encounter an error similar to the following when your compute's local disk storage is full:

```bash
ERROR: could not write to file "base/pgsql_tmp/pgsql_tmp1234.56.fileset/o12of34.p1.0": No space left on device (SQLSTATE 53100)
```

Neon computes allocate 20 GiB of local disk space or 15 GiB x the maximum compute size (whichever is larger) for temporary files used by Postgres. Data-intensive operations can sometimes consume all of this space, resulting in `No space left on device` errors. To resolve this issue, you can try the following strategies:

- **Identify and terminate resource-intensive processes**: These could be long-running queries, operations, or possibly sync or replication activities. You can start your investigation by [listing running queries by duration](https://neon.com/docs/postgresql/query-reference#list-running-queries-by-duration).
- **Optimize queries to reduce temporary file usage**.
- **Adjust pipeline settings for third-party sync or replication**: If you're syncing or replicating data with an external service, modify the pipeline settings to control disk space usage.
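To gauge how much temporary file activity your workload generates, you can query Postgres's standard `pg_stat_database` statistics view, which tracks cumulative temporary file counts and volume. A minimal sketch; `$DATABASE_URL` is a placeholder for your connection string:

```bash
# Cumulative temporary file usage per database, largest first.
# temp_files and temp_bytes are standard pg_stat_database columns.
psql "$DATABASE_URL" -c "
  SELECT datname,
         temp_files,
         pg_size_pretty(temp_bytes) AS temp_bytes
  FROM pg_stat_database
  WHERE datname IS NOT NULL
  ORDER BY temp_bytes DESC;"
```

A steadily growing `temp_bytes` value can help you identify which database's queries are spilling to disk.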
If the issue persists, refer to our [Neon Support channels](https://neon.com/docs/introduction/support#support-channels).

### Compute is not suspending

In some cases, you may observe that your compute remains constantly active for no apparent reason. Possible causes for a constantly active compute when not expected include:

- **Connection requests**: Frequent connection requests from clients, applications, or integrations can prevent a compute from suspending automatically. Each connection resets the scale to zero timer.
- **Background processes**: Some applications or background jobs may run periodic tasks that keep the connection active.

Possible steps you can take to identify the issues include:

1. **Check for active processes**

   You can run the following query to identify active sessions and their states:

   ```sql
   SELECT pid, usename, query, state, query_start
   FROM pg_stat_activity
   WHERE query_start >= now() - interval '24 hours'
   ORDER BY query_start DESC;
   ```

   Look for processes initiated by your users, applications, or integrations that may be keeping your compute active.

2. **Review connection patterns**

   - Ensure that no applications are sending frequent, unnecessary connection requests.
   - Consider batching connections if possible, or use [connection pooling](https://neon.com/docs/connect/connection-pooling) to limit persistent connections.

3. **Optimize any background jobs**

   If background jobs are needed, reduce their frequency or adjust their timing to allow Neon's scale to zero feature to activate after the defined period of inactivity (the default is 5 minutes).

For more information, refer to our [Scale to zero guide](https://neon.com/docs/guides/scale-to-zero-guide).

---

# Source: https://neon.com/llms/manage-database-access.txt

# Manage database access

> The "Manage database access" documentation outlines procedures for configuring and controlling user access to databases within the Neon platform, detailing steps for setting permissions and managing roles to ensure secure database operations.

## Source

- [Manage database access HTML](https://neon.com/docs/manage/database-access): The original HTML version of this documentation

Each Neon project is created with a Postgres role that is named for your database. For example, if your database is named `neondb`, the project is created with a role named `neondb_owner`. This Postgres role is automatically assigned the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which allows creating databases and roles, and reading and writing data in all tables, views, and sequences. Any user created with the Neon Console, Neon API, or Neon CLI is also assigned the `neon_superuser` role.

It is good practice to reserve `neon_superuser` roles for database administration tasks like creating roles and databases. For other users, we recommend creating roles with specific sets of permissions based on application and access requirements. Then, assign the appropriate roles to your users. The roles you create should adhere to a _least privilege_ model, granting only the permissions required to accomplish their tasks.

But how do you create roles with limited access? The following sections describe how to create read-only and read-write roles and assign those roles to users.
We'll also look at how to create a "developer" role and grant that role full access to a database on a development branch in a Neon project.

## A word about users, groups, and roles in Postgres

In Postgres, users, groups, and roles are the same thing. From the PostgreSQL [Database Roles](https://www.postgresql.org/docs/current/user-manag.html) documentation:

_PostgreSQL manages database access permissions using the concept of roles. A role can be thought of as either a database user, or a group of database users, depending on how the role is set up._

Neon recommends granting privileges to roles, and then assigning those roles to your database users.

## Creating roles with limited access

You can create roles with limited access via SQL. Roles created with SQL are created with the same basic [public schema privileges](https://neon.com/docs/manage/database-access#public-schema-privileges) granted to newly created roles in a standalone Postgres installation. These users are not assigned the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role. They must be selectively granted permissions for each database object.

The recommended approach to creating roles with limited access is as follows:

1. Use your Neon role to create roles for each application or use case via SQL. For example, create `readonly` and `readwrite` roles.
2. Grant privileges to those roles to allow access to database objects. For example, grant the `SELECT` privilege to a `readonly` role, or grant `SELECT`, `INSERT`, `UPDATE`, and `DELETE` privileges to a `readwrite` role.
3. Create your database users. For example, create users named `readonly_user1` and `readwrite_user1`.
4. Assign the `readonly` or `readwrite` role to those users to grant them the privileges associated with those roles. For example, assign the `readonly` role to `readonly_user1`, and the `readwrite` role to `readwrite_user1`.

**Note**: You can remove a role from a user at any time to revoke privileges. See [Revoke privileges](https://neon.com/docs/manage/database-access#revoke-privileges).

## Create a read-only role

This section describes how to create a read-only role with access to a specific database and schema. An SQL statement summary is provided at the end.

**Info**: In Postgres, access must be granted at the database, schema, and object level. For example, to grant access to a table, you must also grant access to the database and schema in which the table resides. If these access permissions are not defined, the role will not be able to access the table.

To create a read-only role:

1. Connect to your database from an SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor), [pgAdmin](https://www.pgadmin.org/), or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). If you need help connecting, see [Connect from any client](https://neon.com/docs/connect/connect-from-any-app).
2. Create a `readonly` role using the following statement.

   ```sql
   CREATE ROLE readonly PASSWORD '<password>';
   ```

   The password should have at least 12 characters with a mix of lowercase, uppercase, number, and symbol characters. For detailed password guidelines, see [Manage roles with SQL](https://neon.com/docs/manage/roles#manage-roles-with-sql).

   **Note**: Neon also supports the `NOLOGIN` option: `CREATE ROLE role_name NOLOGIN;` This allows you to define roles that cannot authenticate but can be granted privileges.

3. Grant the `readonly` role read-only privileges on the schema.
   Replace `<database>` and `<schema>` with actual database and schema names, respectively.

   ```sql
   -- Grant permission to connect to the database
   GRANT CONNECT ON DATABASE <database> TO readonly;

   -- Grant USAGE on the schema
   GRANT USAGE ON SCHEMA <schema> TO readonly;

   -- Grant SELECT on all existing tables in the schema
   GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO readonly;

   -- Grant SELECT on all tables added in the future
   ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT ON TABLES TO readonly;
   ```

4. Create a database user. The password requirements mentioned above apply here as well.

   ```sql
   CREATE ROLE readonly_user1 WITH LOGIN PASSWORD '<password>';
   ```

5. Assign the `readonly` role to `readonly_user1`:

   ```sql
   GRANT readonly TO readonly_user1;
   ```

The `readonly_user1` user now has read-only access to tables in the specified schema and database and should be able to connect and run `SELECT` queries.

```bash
psql postgresql://readonly_user1:AbC123dEf@ep-cool-darkness-123456.us-west-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
psql (15.2 (Ubuntu 15.2-1.pgdg22.04+1), server 15.3)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

dbname=> SELECT * FROM <schema>.<table>;
```

If the user attempts to perform an `INSERT`, `UPDATE`, or `DELETE` operation, a `permission denied` error is returned.

### SQL statement summary

To create the read-only role and user described above, run the following statements from an SQL client:

```sql
-- readonly role
CREATE ROLE readonly PASSWORD '<password>';
GRANT CONNECT ON DATABASE <database> TO readonly;
GRANT USAGE ON SCHEMA <schema> TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT ON TABLES TO readonly;

-- User creation
CREATE USER readonly_user1 WITH PASSWORD '<password>';

-- Grant privileges to user
GRANT readonly TO readonly_user1;
```

## Create a read-write role

This section describes how to create a read-write role with access to a specific database and schema. An SQL statement summary is provided at the end.

To create a read-write role:

1. Connect to your database from an SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor), [pgAdmin](https://www.pgadmin.org/), or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). If you need help connecting, see [Connect from any client](https://neon.com/docs/connect/connect-from-any-app).
2. Create a `readwrite` role using the following statement.

   ```sql
   CREATE ROLE readwrite PASSWORD '<password>';
   ```

   The password should have at least 12 characters with a mix of lowercase, uppercase, number, and symbol characters. For detailed password guidelines, see [Manage roles with SQL](https://neon.com/docs/manage/roles#manage-roles-with-sql).

3. Grant the `readwrite` role read-write privileges on the schema. Replace `<database>` and `<schema>` with actual database and schema names, respectively.
   ```sql
   -- Grant permission to connect to the database
   GRANT CONNECT ON DATABASE <database> TO readwrite;

   -- Grant USAGE and CREATE on the schema
   GRANT USAGE, CREATE ON SCHEMA <schema> TO readwrite;

   -- Grant SELECT, INSERT, UPDATE, DELETE on all existing tables in the schema
   GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <schema> TO readwrite;

   -- Grant SELECT, INSERT, UPDATE, DELETE on all tables added in the future
   ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;

   -- Grant USAGE on all sequences in the schema
   GRANT USAGE ON ALL SEQUENCES IN SCHEMA <schema> TO readwrite;

   -- Grant USAGE on all sequences added in the future
   ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT USAGE ON SEQUENCES TO readwrite;
   ```

4. Create a database user. The password requirements mentioned above apply here as well.

   ```sql
   CREATE ROLE readwrite_user1 WITH LOGIN PASSWORD '<password>';
   ```

5. Assign the `readwrite` role to `readwrite_user1`:

   ```sql
   GRANT readwrite TO readwrite_user1;
   ```

The `readwrite_user1` user now has read-write access to tables in the specified schema and database and should be able to connect and run `SELECT`, `INSERT`, `UPDATE`, and `DELETE` queries.

```bash
psql postgresql://readwrite_user1:AbC123dEf@ep-cool-darkness-123456.us-west-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
psql (15.2 (Ubuntu 15.2-1.pgdg22.04+1), server 15.3)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

dbname=> INSERT INTO <table> (col1, col2) VALUES (1, 2);
```

### SQL statement summary

To create the read-write role and user described above, run the following statements from an SQL client:

```sql
-- readwrite role
CREATE ROLE readwrite PASSWORD '<password>';
GRANT CONNECT ON DATABASE <database> TO readwrite;
GRANT USAGE, CREATE ON SCHEMA <schema> TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <schema> TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA <schema> TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT USAGE ON SEQUENCES TO readwrite;

-- User creation
CREATE USER readwrite_user1 WITH PASSWORD '<password>';

-- Grant privileges to user
GRANT readwrite TO readwrite_user1;
```

## Create a developer role

This section describes how to create a "development branch" and grant developers full access to a database on the development branch. To accomplish this, we create a developer role on the "parent" branch, create a development branch, and then assign users to the developer role on the development branch.

As you work through the steps in this scenario, remember that when you create a branch in Neon, you are creating a clone of the parent branch, which includes the roles and databases on the parent branch.

To get started:

1. Connect to the database **on the parent branch** from an SQL client such as [psql](https://neon.com/docs/connect/query-with-psql-editor), [pgAdmin](https://www.pgadmin.org/), or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). If you need help connecting, see [Connect from any client](https://neon.com/docs/connect/connect-from-any-app).
2. Use your default Neon role or another role with `neon_superuser` privileges to create a developer role **on the parent branch**. For example, create a role named `dev_users`.

   ```sql
   CREATE ROLE dev_users PASSWORD '<password>';
   ```

   The password should have at least 12 characters with a mix of lowercase, uppercase, number, and symbol characters.
   For detailed password guidelines, see [Manage roles with SQL](https://neon.com/docs/manage/roles#manage-roles-with-sql).

3. Grant the `dev_users` role privileges on the database:

   ```sql
   GRANT ALL PRIVILEGES ON DATABASE <database> TO dev_users;
   ```

   You now have a `dev_users` role on your parent branch, and the role is not assigned to any users. This role will now be included in all future branches created from this branch.

   **Note**: The `GRANT` statement above does not grant privileges on existing schemas, tables, sequences, etc., within the database. If you want the `dev_users` role to access specific schemas, tables, etc., you need to grant those permissions explicitly. For example, to grant all privileges on all tables in a schema:

   ```sql
   GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA <schema> TO dev_users;
   ```

   Similarly, you'd grant privileges for sequences and other objects as needed. That said, the `GRANT` command above allows users with the `dev_users` role to create new schemas within the database. But for pre-existing schemas and their objects, you need to grant permissions explicitly.

4. Create a development branch. Name it something like `dev1`. See [Create a branch](https://neon.com/docs/manage/branches#create-a-branch) for instructions.

5. Connect to the database **on the development branch** with an SQL client. Be mindful that a child branch connection string differs from a parent branch connection string. The branches reside on different hosts. If you need help connecting to your branch, see [Connect from any client](https://neon.com/docs/connect/connect-from-any-app).

6. After connecting to the database on your new branch, create a developer user (e.g., `dev_user1`). The password requirements described above apply here as well.

   ```sql
   CREATE ROLE dev_user1 WITH LOGIN PASSWORD '<password>';
   ```

7. Assign the `dev_users` role to the `dev_user1` user:

   ```sql
   GRANT dev_users TO dev_user1;
   ```

The `dev_user1` user can now connect to the database on your development branch and start using the database with full privileges.

```bash
psql postgresql://dev_user1:AbC123dEf@ep-cool-darkness-123456.us-west-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
psql (15.2 (Ubuntu 15.2-1.pgdg22.04+1), server 15.3)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

dbname=>
```

### SQL statement summary

```sql
-- dev_users role
CREATE ROLE dev_users PASSWORD '<password>';
GRANT ALL PRIVILEGES ON DATABASE <database> TO dev_users;

-- optionally, grant access to an existing schema
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA <schema> TO dev_users;

-- User creation
CREATE ROLE dev_user1 WITH LOGIN PASSWORD '<password>';

-- Grant privileges to user
GRANT dev_users TO dev_user1;
```

## Revoke privileges

If you set up privilege-holding roles as described above, you can revoke privileges by removing assigned roles. For example, to remove the `readwrite` role from `readwrite_user1`, run the following SQL statement:

```sql
REVOKE readwrite FROM readwrite_user1;
```

## Public schema privileges

When creating a new database, Postgres creates a schema named `public` in the database and permits access to the schema to a predefined Postgres role named `public`. Newly created roles in Postgres are automatically assigned the `public` role. In Postgres 14, the `public` role has `CREATE` and `USAGE` privileges on the `public` schema. In Postgres 15 and higher, the `public` role has only `USAGE` privileges on the `public` schema.

Why does this matter?
If you create a new role and want to limit access for that role, you should be aware of the default `public` schema access automatically assigned to newly created roles. If you want to limit access to the `public` schema for your users, you have to revoke privileges on the `public` schema explicitly.

For users of Postgres 14, the SQL statement to revoke the default `CREATE` permission on the `public` schema from the `public` role is as follows:

```sql
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```

You must be the owner of the `public` schema or a member of a role that authorizes you to execute this SQL statement.

To restrict the `public` role's capability to connect to a database, use this statement:

```sql
REVOKE ALL ON DATABASE <database> FROM PUBLIC;
```

This ensures users are unable to connect to a database by default unless this permission is explicitly granted.

## More information

For more information about granting privileges in Postgres, please see the [GRANT](https://www.postgresql.org/docs/current/sql-grant.html) command in the _PostgreSQL documentation_.

---

# Source: https://neon.com/llms/manage-databases.txt

# Manage databases

> The "Manage databases" documentation guides Neon users through the processes of creating, configuring, and managing databases within the Neon platform, detailing specific commands and settings for effective database administration.

## Source

- [Manage databases HTML](https://neon.com/docs/manage/databases): The original HTML version of this documentation

A database is a container for SQL objects such as schemas, tables, views, functions, and indexes. In the [Neon object hierarchy](https://neon.com/docs/manage/overview), a database exists within a branch of a project. There is a limit of 500 databases per branch. If you do not specify your own database name when creating a project, your project's default branch is created with a database called `neondb`, which is owned by your project's default role (see [Manage roles](https://neon.com/docs/manage/roles) for more information). You can create your own databases in a project's default branch or in a child branch.

All databases in Neon are created with a `public` schema. SQL objects are created in the `public` schema, by default. For more information about the `public` schema, refer to [The Public schema](https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PUBLIC), in the _PostgreSQL documentation_.

**Note**: As of Postgres 15, only a database owner has the `CREATE` privilege on a database's `public` schema. For other users, the `CREATE` privilege must be granted manually via a `GRANT CREATE ON SCHEMA public TO <role>;` statement. For more information, see [Public schema privileges](https://neon.com/docs/manage/database-access#public-schema-privileges).

Databases belong to a branch. If you create a child branch, databases from the parent branch are copied to the child branch. For example, if database `mydb` exists in the parent branch, it will be copied to the child branch. The only time this does not occur is when you create a branch that includes data up to a particular point in time. If a database was created in the parent branch after that point in time, it is not duplicated in the child branch.
Neon supports creating and managing databases from the following interfaces:

- [Neon Console](https://neon.com/docs/manage/databases#manage-databases-in-the-neon-console)
- [Neon CLI](https://neon.com/docs/manage/databases#manage-databases-with-the-neon-cli)
- [Neon API](https://neon.com/docs/manage/databases#manage-databases-with-the-neon-api)
- [SQL](https://neon.com/docs/manage/databases#manage-databases-with-sql)

## Manage databases in the Neon Console

This section describes how to create, view, and delete databases in the Neon Console. The role that creates a database is automatically made the owner of that database. The `neon_superuser` role is also granted all privileges on databases created in the Neon Console. For information about this role, see [The neon_superuser role](https://neon.com/docs/manage/roles#the-neonsuperuser-role).

### Create a database

To create a database:

1. Navigate to the [Neon Console](https://console.neon.tech).
1. Select a project.
1. Select **Branches** from the sidebar.
1. Select the branch where you want to create the database.
1. Select the **Roles & Databases** tab.
1. Click **Add database**.
1. Enter a database name, and select a database owner.
1. Click **Create**.

**Note**: Some names are not permitted. See [Reserved database names](https://neon.com/docs/manage/databases#reserved-database-names).

### View databases

To view databases:

1. Navigate to the [Neon Console](https://console.neon.tech).
1. Select a project.
1. Select **Branches** from the sidebar.
1. Select the branch where you want to view databases.
1. Select the **Roles & Databases** tab.

### Delete a database

Deleting a database is a permanent action. All database objects belonging to the database, such as schemas, tables, and roles, are also deleted.

To delete a database:

1. Navigate to the [Neon Console](https://console.neon.tech).
1. Select a project.
1. Select **Databases** from the sidebar.
1. Select a branch to view the databases in the branch.
1. For the database you want to delete, click the delete icon.
1. In the confirmation dialog, click **Delete**.

## Manage databases with the Neon CLI

The Neon CLI supports creating and deleting databases. For instructions, see [Neon CLI commands — databases](https://neon.com/docs/reference/cli-databases). A brief sketch follows below.
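For example, a hedged sketch of creating and listing a database with the CLI (option names per the CLI reference linked above; the project ID is a placeholder, and the commands target the project's default branch unless a branch option is given):

```bash
# Create a database named mydb, then list databases to confirm it exists.
neon databases create --name mydb --project-id <project_id>
neon databases list --project-id <project_id>
```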
## Manage databases with the Neon API

Database actions performed in the Neon Console can also be performed using the Neon API. The following examples demonstrate how to create, view, update, and delete databases using the Neon API. For other database-related methods, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

In Neon, a database belongs to a branch, which means that when you create a database, it is created in a branch. Database-related requests are therefore performed using branch API methods.

**Note**: The API examples that follow may not show all user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). The `jq` option specified in each example is an optional third-party tool that formats the `JSON` response, making it easier to read. For information about this utility, see [jq](https://stedolan.github.io/jq/).

### Prerequisites

A Neon API request requires an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). In the cURL examples below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request.

**Note**: To learn more about the types of API keys you can create — personal, organization, or project-scoped — see [Manage API Keys](https://neon.com/docs/manage/api-keys).

### Create a database with the API

The following Neon API method creates a database. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranchdatabase). The role specified by `owner_name` is the owner of that database.

```http
POST /projects/{project_id}/branches/{branch_id}/databases
```

**Note**: Some names are not permitted for databases. See [Reserved database names](https://neon.com/docs/manage/databases#reserved-database-names).

The API method appears as follows when specified in a cURL command. The `project_id` and `branch_id` are required parameters, and a database `name` and `owner_name` are required attributes.

```bash
curl 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-morning-meadow-afu2s1jl/databases' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "database": {
      "name": "mydb",
      "owner_name": "casey"
    }
  }' | jq
```

Details: Response body

For attribute definitions, find the [Create database](https://api-docs.neon.tech/reference/createprojectbranchdatabase) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "database": {
    "id": 2889509,
    "branch_id": "br-morning-meadow-afu2s1jl",
    "name": "mydb",
    "owner_name": "casey",
    "created_at": "2025-08-04T08:14:14Z",
    "updated_at": "2025-08-04T08:14:14Z"
  },
  "operations": [
    {
      "id": "b51c8ece-b78e-49f7-8ec1-78b37cbae3c4",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-morning-meadow-afu2s1jl",
      "endpoint_id": "ep-holy-heart-afbmgcfx",
      "action": "apply_config",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-08-04T08:14:14Z",
      "updated_at": "2025-08-04T08:14:14Z",
      "total_duration_ms": 0
    }
  ]
}
```

### List databases with the API

The following Neon API method lists databases for the specified branch. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/listprojectbranchdatabases).

```http
GET /projects/{project_id}/branches/{branch_id}/databases
```

The API method appears as follows when specified in a cURL command. The `project_id` and `branch_id` are required parameters.

```bash
curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/databases' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

Details: Response body

For attribute definitions, find the [List databases](https://api-docs.neon.tech/reference/listprojectbranchdatabases) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json
{
  "databases": [
    {
      "id": 1139149,
      "branch_id": "br-blue-tooth-671580",
      "name": "neondb",
      "owner_name": "casey",
      "created_at": "2023-01-04T18:38:23Z",
      "updated_at": "2023-01-04T18:38:23Z"
    },
    {
      "id": 1140822,
      "branch_id": "br-blue-tooth-671580",
      "name": "mydb",
      "owner_name": "casey",
      "created_at": "2023-01-04T21:17:17Z",
      "updated_at": "2023-01-04T21:17:17Z"
    }
  ]
}
```

### Update a database with the API

The following Neon API method updates the specified database. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/updateprojectbranchdatabase).

```http
PATCH /projects/{project_id}/branches/{branch_id}/databases/{database_name}
```

The API method appears as follows when specified in a cURL command. The `project_id`, `branch_id`, and `database_name` are required parameters. This example updates the database `name` value to `database1`.

```bash
curl -X PATCH 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-morning-meadow-afu2s1jl/databases/mydb' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "database": {
      "name": "database1"
    }
  }' | jq
```

Details: Response body

For attribute definitions, find the [Update database](https://api-docs.neon.tech/reference/updateprojectbranchdatabase) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "database": {
    "id": 2889509,
    "branch_id": "br-morning-meadow-afu2s1jl",
    "name": "database1",
    "owner_name": "casey",
    "created_at": "2025-08-04T08:14:14Z",
    "updated_at": "2025-08-04T08:14:14Z"
  },
  "operations": [
    {
      "id": "2f8c0a6a-33b5-4d56-964b-739614b699c0",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-morning-meadow-afu2s1jl",
      "endpoint_id": "ep-holy-heart-afbmgcfx",
      "action": "apply_config",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-08-04T08:17:22Z",
      "updated_at": "2025-08-04T08:17:22Z",
      "total_duration_ms": 0
    }
  ]
}
```

### Delete a database with the API

The following Neon API method deletes the specified database. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/deleteprojectbranchdatabase).

```http
DELETE /projects/{project_id}/branches/{branch_id}/databases/{database_name}
```

The API method appears as follows when specified in a cURL command. The `project_id`, `branch_id`, and `database_name` are required parameters.

```bash
curl -X 'DELETE' \
  'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-morning-meadow-afu2s1jl/databases/database1' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" | jq
```

Details: Response body

For attribute definitions, find the [Delete database](https://api-docs.neon.tech/reference/deleteprojectbranchdatabase) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json
{
  "database": {
    "id": 2889509,
    "branch_id": "br-morning-meadow-afu2s1jl",
    "name": "database1",
    "owner_name": "casey",
    "created_at": "2025-08-04T08:14:14Z",
    "updated_at": "2025-08-04T08:14:14Z"
  },
  "operations": [
    {
      "id": "4cd4881b-2807-4377-a76d-8e7d39bc5448",
      "project_id": "dry-heart-13671059",
      "branch_id": "br-morning-meadow-afu2s1jl",
      "endpoint_id": "ep-holy-heart-afbmgcfx",
      "action": "apply_config",
      "status": "running",
      "failures_count": 0,
      "created_at": "2025-08-04T08:19:39Z",
      "updated_at": "2025-08-04T08:19:39Z",
      "total_duration_ms": 0
    }
  ]
}
```

## Manage databases with SQL

You can create and manage databases in Neon with SQL, as you can with any standalone Postgres installation. To create a database, issue a `CREATE DATABASE` statement from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor) or from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor).

```sql
CREATE DATABASE testdb;
```

Most standard [Postgres CREATE DATABASE parameters](https://www.postgresql.org/docs/current/sql-createdatabase.html) are supported with the exception of `TABLESPACE`. This parameter requires access to the local file system, which is not permitted in Neon.

The role that creates a database is the owner of the database.

**Note**: As of Postgres 15, only a database owner has the `CREATE` privilege on a database's `public` schema. For other users, the `CREATE` privilege on the `public` schema must be granted explicitly via a `GRANT CREATE ON SCHEMA public TO <role_name>;` statement. For more information, see [Public schema privileges](https://neon.com/docs/manage/database-access#public-schema-privileges).

For more information about database object privileges in Postgres, see [Privileges](https://www.postgresql.org/docs/current/ddl-priv.html).

## Reserved database names

The following names are reserved and cannot be given to a database:

- `postgres`
- `template0`
- `template1`

---

# Source: https://neon.com/llms/manage-integrations.txt

# Manage integrations

> The "Manage integrations" documentation outlines the procedures for configuring and managing third-party integrations within the Neon platform, facilitating seamless connectivity and interaction with external services.

## Source

- [Manage integrations HTML](https://neon.com/docs/manage/integrations): The original HTML version of this documentation

The **Integrations** page in the Neon Console provides a hub for managing third-party integrations with your Neon project. You can use these supported integrations to optimize and extend Neon's functionality and streamline your workflow.

When visiting the **Integrations** page, you'll notice different categories of integrations, which you can browse to find the one you need.

## Manage integrations

For integrations listed as **Added**, you can click **Manage** on the integration card to configure or remove the integration.

## Add integrations

For integrations that are not added, you can click **Add** and follow the instructions to get started. Some integrations support an automated integration setup. Others are documented integrations, which involve a manual setup procedure.

## Manual integrations

Integrations currently requiring a manual setup have a **Read** button, which opens a modal where you can read about how to integrate the selected platform or service with Neon.
## Express interest in future integrations

Integrations that are not yet available have a **Request** button, which opens a modal where you can express your interest and share your use case. This information helps the Neon team prioritize integration rollouts and build exactly what you need.

## Suggest an integration

If you can't find the integration you're looking for:

1. Click the **Suggest an integration** button on the **Integrations** page.
2. Fill out the necessary details for the integration you'd like to see added.
3. Click **Suggest integration**.

The Neon team will review your request.

---

# Source: https://neon.com/llms/manage-maintenance-updates-overview.txt

# Maintenance & updates overview

> The "Maintenance & updates overview" document outlines the procedures and schedules for maintaining and updating Neon databases, detailing the processes to ensure system reliability and performance.

## Source

- [Maintenance & updates overview HTML](https://neon.com/docs/manage/maintenance-updates-overview): The original HTML version of this documentation

Neon performs two types of updates: **platform maintenance** and **updates** to your Neon [computes](https://neon.com/docs/reference/glossary#compute). While both are essential for maintaining a stable, secure, and optimized environment, they serve different purposes.

- **Platform maintenance** includes updates to Neon's infrastructure, resource management operations, and critical security patches. These changes ensure platform stability and security. To learn more, see [Platform maintenance](https://neon.com/docs/manage/platform-maintenance).
- **Updates** apply improvements and updates to individual Neon computes, including Postgres updates, operating system patches, and new Neon features. These updates keep your Neon compute environment and Postgres instances current and optimized. To learn more, see [Updates](https://neon.com/docs/manage/updates).

For both types of updates, we strive to minimize disruption to database operations and provide advance notification. The table below outlines where you can check for upcoming maintenance and updates.

### Where to check for maintenance and updates

| Type | Where to check | Details |
| --- | --- | --- |
| **Platform maintenance** | [Neon Status](https://neonstatus.com/) | Check the regional status page where your Neon project resides for upcoming platform maintenance. Optionally, subscribe to a regional status page to receive status updates. See [Neon Status](https://neon.com/docs/introduction/status) for details. |
| **Updates** | [Neon Console](https://console.neon.tech/app/projects) | On your Neon project dashboard, go to **Settings** > **Updates** to view your update window and check for update notices. Paid plans allow you to select a preferred update window. |

---

# Source: https://neon.com/llms/manage-operations.txt

# System operations

> The "System operations" document outlines procedures for managing and maintaining Neon database systems, detailing tasks such as monitoring, scaling, and troubleshooting to ensure optimal performance and reliability.
## Source

- [System operations HTML](https://neon.com/docs/manage/operations): The original HTML version of this documentation

An operation is an action performed by the Neon Control Plane on a Neon object or resource. Operations are typically initiated by user actions, such as creating a branch or deleting a database. Other operations may be initiated by the Neon Control Plane, such as suspending a [compute](https://neon.com/docs/reference/glossary#compute) after a period of inactivity or checking its availability. You can monitor operations to keep an eye on the overall health of your Neon project or to check the status of specific operations. When working with the Neon API, you can poll the status of operations to ensure that an API request is completed before issuing the next API request. For more information, see [Poll operation status](https://neon.com/docs/manage/operations#poll-operation-status).

| Operation | Description |
| :--- | :--- |
| `Apply config` | Applies a new configuration to a Neon object or resource. For example, changing compute settings or creating, deleting, or updating Postgres users and databases initiates this operation. |
| `Apply storage config` | Applies a new configuration to a Neon storage object or resource. For example, updating the restore window for a project initiates this operation. |
| `Check availability` | Checks the availability of data in a branch and that a [compute](https://neon.com/docs/reference/glossary#compute) can start on a branch. Branches without a compute are not checked. This operation, performed by the [availability checker](https://neon.com/docs/reference/glossary#availability-checker), is a periodic load generated by the Control Plane. |
| `Create branch` | Creates a [branch](https://neon.com/docs/reference/glossary#branch) in a Neon project. For related information, see [Manage branches](https://neon.com/docs/manage/branches). |
| `Create timeline` | Sets up storage and creates the default branch when a Neon [project](https://neon.com/docs/reference/glossary#project) is created. |
| `Delete tenant` | Deletes stored data when a Neon project is deleted. |
| `Start compute` | Starts a compute when there is an event or action that requires compute resources. For example, connecting to a suspended compute initiates this operation. |
| `Suspend compute` | Suspends a compute after a period of inactivity. For information about how Neon manages compute resources, see [Compute lifecycle](https://neon.com/docs/introduction/compute-lifecycle/). |
| `Tenant attach` | Attaches a Neon project to storage. |
| `Tenant detach` | Detaches a Neon project from storage after the project has been idle for an extended period. |
| `Tenant reattach` | Reattaches a detached Neon project to storage when a detached project receives a request. |
| `Timeline archive` | The time when a branch archive operation was initiated. |
| `Timeline unarchive` | The time when the branch unarchive operation was initiated. |

## View operations

You can view system operations via the Neon Console, [Neon CLI](https://neon.com/docs/reference/neon-cli), or [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api).
Tab: Neon Console

You can view system operations via the **Monitoring** page in the Neon Console. Operation details include:

- **Operation**: The action performed by the operation.
- **Branch**: The branch on which the operation was performed.
- **Compute**: The compute on which the operation occurred.
- **Operation status**: The status of the operation.
- **Duration**: The duration of the operation.
- **Date**: The date and time the operation occurred.

Possible **Status** values are `OK`, `Scheduling`, `In progress`, and `Error`.

Tab: CLI

To view operations using the Neon CLI:

```bash
neon operations list --project-id <project_id>
```

See [Neon CLI commands — operations](https://neon.com/docs/reference/cli-operations).

Tab: API

To list operations with the Neon API:

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

See [Get a list of operations](https://api-docs.neon.tech/reference/listprojectoperations).

## Operations and the Neon API

This section describes how to work with operations using the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). The following topics are covered:

- [List operations](https://neon.com/docs/manage/operations#list-operations): Describes how to list all operations for a Neon project.
- [List operations with pagination](https://neon.com/docs/manage/operations#list-operations-with-pagination): Describes how to list all operations for a Neon project and paginate the response.
- [Get operation](https://neon.com/docs/manage/operations#get-operation): Describes how to retrieve the details for a specific operation by the operation ID.
- [Poll operation status](https://neon.com/docs/manage/operations#poll-operation-status): Describes how to poll an operation for its status, which may be necessary to avoid failed requests due to in-progress operations when using the Neon API programmatically.

**Note**: Operation names are specified in lowercase with underscores when viewed using the API; for example, the `Check availability` operation shown in the Console appears as `check_availability` in API responses.

### List operations

Lists operations for the specified project. This method supports response pagination. For more information, see [List operations with pagination](https://neon.com/docs/manage/operations#list-operations-with-pagination).

```text
/projects/{project_id}/operations
```

cURL command:

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [List operations](https://api-docs.neon.tech/reference/listprojectoperations) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json
{
  "operations": [
    {
      "id": "97c7a650-e4ff-43d7-8c58-4c67f5050167",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-wispy-dew-591433",
      "endpoint_id": "ep-orange-art-714542",
      "action": "check_availability",
      "status": "finished",
      "failures_count": 0,
      "created_at": "2022-12-09T08:47:52Z",
      "updated_at": "2022-12-09T08:47:56Z"
    },
    {
      "id": "0f3daf10-2544-425c-86d3-9a9932ab25b9",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-wispy-dew-591433",
      "endpoint_id": "ep-orange-art-714542",
      "action": "check_availability",
      "status": "finished",
      "failures_count": 0,
      "created_at": "2022-12-09T04:47:39Z",
      "updated_at": "2022-12-09T04:47:44Z"
    },
    {
      "id": "fb8484df-51b4-4a40-b0fc-97b73998892b",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-wispy-dew-591433",
      "endpoint_id": "ep-orange-art-714542",
      "action": "check_availability",
      "status": "finished",
      "failures_count": 0,
      "created_at": "2022-12-09T02:47:05Z",
      "updated_at": "2022-12-09T02:47:09Z"
    }
  ],
  "pagination": {
    "cursor": "2022-12-07T00:45:05.262011Z"
  }
}
```

### List operations with pagination

Pagination allows you to limit the number of operations displayed, as the number of operations for a project can be large. To paginate responses, issue an initial request with a `limit` value. For brevity, the limit is set to 1 in the following example.

cURL command:

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations?limit=1' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [List operations](https://api-docs.neon.tech/reference/listprojectoperations) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "operations": [
    {
      "id": "97c7a650-e4ff-43d7-8c58-4c67f5050167",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-wispy-dew-591433",
      "endpoint_id": "ep-orange-art-714542",
      "action": "check_availability",
      "status": "finished",
      "failures_count": 0,
      "created_at": "2022-12-09T08:47:52Z",
      "updated_at": "2022-12-09T08:47:56Z"
    }
  ],
  "pagination": {
    "cursor": "2022-12-09T08:47:52.20417Z"
  }
}
```

To list the next page of operations, add the `cursor` value returned in the response body of the previous request and a `limit` value for the next page.

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations?cursor=2022-12-09T08%3A47%3A52.20417Z&limit=1' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [List operations](https://api-docs.neon.tech/reference/listprojectoperations) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "operations": [
    {
      "id": "0f3daf10-2544-425c-86d3-9a9932ab25b9",
      "project_id": "autumn-disk-484331",
      "branch_id": "br-wispy-dew-591433",
      "endpoint_id": "ep-orange-art-714542",
      "action": "check_availability",
      "status": "finished",
      "failures_count": 0,
      "created_at": "2022-12-09T04:47:39Z",
      "updated_at": "2022-12-09T04:47:44Z"
    }
  ],
  "pagination": {
    "cursor": "2022-12-09T04:47:39.797163Z"
  }
}
```

### Get operation

This method shows only the details for the specified operation ID.
```text
/projects/{project_id}/operations/{operation_id}
```

cURL command:

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations/97c7a650-e4ff-43d7-8c58-4c67f5050167' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [Retrieve operation details](https://api-docs.neon.tech/reference/getprojectoperation) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "operation": {
    "id": "97c7a650-e4ff-43d7-8c58-4c67f5050167",
    "project_id": "autumn-disk-484331",
    "branch_id": "br-wispy-dew-591433",
    "endpoint_id": "ep-orange-art-714542",
    "action": "check_availability",
    "status": "finished",
    "failures_count": 0,
    "created_at": "2022-12-09T08:47:52Z",
    "updated_at": "2022-12-09T08:47:56Z"
  }
}
```

## Poll operation status

Some Neon API requests may take a few moments to complete. When using the Neon API programmatically, you can check the `status` of an operation before proceeding with the next API request. For example, you may want to check the operation status of a create branch request before issuing a create database request for that branch. The response to a Neon API request includes information about the operations that were initiated. For example, a create branch request initiates `create_branch` and `start_compute` operations.

```json
"operations": [
  {
    "id": "22acbb37-209b-4b90-a39c-8460090e1329",
    "project_id": "autumn-disk-484331",
    "branch_id": "br-dawn-scene-747675",
    "action": "create_branch",
    "status": "running",
    "failures_count": 0,
    "created_at": "2022-12-08T19:55:43Z",
    "updated_at": "2022-12-08T19:55:43Z"
  },
  {
    "id": "055b17e6-ffe3-47ab-b545-cfd7db6fd8b8",
    "project_id": "autumn-disk-484331",
    "branch_id": "br-dawn-scene-747675",
    "endpoint_id": "ep-small-bush-675287",
    "action": "start_compute",
    "status": "scheduling",
    "failures_count": 0,
    "created_at": "2022-12-08T19:55:43Z",
    "updated_at": "2022-12-08T19:55:43Z"
  }
]
```

You can use the [Get operation details](https://api-docs.neon.tech/reference/getprojectoperation) method to poll the status of an operation by the operation ID. You might do this at intervals of 5 seconds until the `status` of the operation changes to `finished` before issuing the next request. For example, this request polls the `start_compute` operation shown above:

```bash
curl 'https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations/055b17e6-ffe3-47ab-b545-cfd7db6fd8b8' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

Details: Response body

For attribute definitions, find the [Get operation details](https://api-docs.neon.tech/reference/getprojectoperation) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "operation": {
    "id": "055b17e6-ffe3-47ab-b545-cfd7db6fd8b8",
    "project_id": "autumn-disk-484331",
    "branch_id": "br-dawn-scene-747675",
    "endpoint_id": "ep-small-bush-675287",
    "action": "start_compute",
    "status": "finished",
    "failures_count": 0,
    "created_at": "2022-12-08T19:55:43Z",
    "updated_at": "2022-12-08T19:55:43Z"
  }
}
```

Possible operation `status` values include: `scheduling`, `running`, `finished`, `failed`, `cancelling`, `cancelled`, and `skipped`.
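Putting this together, here's a minimal polling sketch in bash. The project and operation IDs are taken from the example above, the interval is the 5 seconds suggested above, and the set of statuses treated as final follows the terminal-status note just below:

```bash
# Poll an operation every 5 seconds until it reaches a terminal status.
OPERATION_URL='https://console.neon.tech/api/v2/projects/autumn-disk-484331/operations/055b17e6-ffe3-47ab-b545-cfd7db6fd8b8'

while true; do
  STATUS=$(curl -s "$OPERATION_URL" \
    -H 'Accept: application/json' \
    -H "Authorization: Bearer $NEON_API_KEY" | jq -r '.operation.status')
  echo "status: $STATUS"
  case "$STATUS" in
    finished|skipped|cancelled) break ;;                        # terminal: safe to proceed
    failed) echo "operation failed (may be retried)" >&2; break ;;
    *) sleep 5 ;;                                               # scheduling, running, cancelling
  esac
done
```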
Only `finished`, `skipped`, and `cancelled` are **terminal statuses**, meaning the operation will not proceed further from these states. Note that `failed` is **not** terminal, as an operation in a `failed` state can still be retried.

---

# Source: https://neon.com/llms/manage-organizations.txt

# Organizations

> The "Organizations" documentation outlines how Neon users can manage and configure organizational settings, including creating, modifying, and deleting organizations within the Neon platform.

## Source

- [Organizations HTML](https://neon.com/docs/manage/organizations): The original HTML version of this documentation

In Neon, all projects live within organizations. When you sign up, you automatically get a free organization for your first project. Organizations provide a central place to manage your projects, collaborate with team members, and — for paid plans — handle your billing.

## About Neon Organizations

In the Neon Console, the Organizations page gives you a centralized view of all your projects. From there, you can create new projects, manage existing ones, and oversee your members, billing information, and access to preview features through the [Early Access Program](https://neon.com/docs/introduction/roadmap#organization-early-access).

## User roles and permissions

Organizations have two main member roles:

- **Admin** — Full control over the organization and all its projects.
- **Member** — Access to all organization projects, but cannot modify org settings or delete projects.

For a full breakdown of what each role can do, see the [User Permissions](https://neon.com/docs/manage/user-permissions) page.

## Creating a new organization

You can create additional organizations at any time. [See how to create an organization.](https://neon.com/docs/manage/orgs-manage#create-an-organization)

## Limitations

As we continue to refine our organization features, here are some temporary limitations you should be aware of:

- **Branch management** — All users are currently able to manage [protected branches](https://neon.com/docs/guides/protected-branches), regardless of their role or permission level. Granular permissions for this feature are not yet implemented.
- **Permissions and roles** — The current permissions system may not meet all needs for granular control. Users are encouraged to share their feedback and requirements for more detailed permissions settings.

## Feedback

If you've got feature requests or feedback about what you'd like to see from Organizations in Neon, let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord.

---

# Source: https://neon.com/llms/manage-orgs-api-consumption.txt

# Query organization usage metrics with the Neon API

> The document details how to use the Neon API to query and organize usage metrics for organizations, enabling precise monitoring and management of resource consumption within the Neon platform.
## Source

- [Query organization usage metrics with the Neon API HTML](https://neon.com/docs/manage/orgs-api-consumption): The original HTML version of this documentation

You can use the Neon API to retrieve two types of consumption history metrics for your organization:

| Metric | Description | Plan Availability |
| --- | --- | --- |
| [Account-level](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount) | Total usage across all projects in your organization | Scale |
| [Project-level](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject) (granular) | Project-level metrics available at hourly, daily, or monthly level of granularity | Scale |

## Finding organizations for consumption queries

Before querying consumption metrics, you'll need the `org_id` values for organizations you want to query. Use your personal API key to list all organizations you have access to:

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/users/me/organizations' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $PERSONAL_API_KEY" | jq
```

The response includes details about each organization, including the `org_id` you'll need for consumption queries:

```json
{
  "organizations": [
    {
      "id": "org-morning-bread-81040908",
      "name": "Morning Bread Organization",
      "handle": "morning-bread-organization-org-morning-bread-81040908",
      "plan": "free_v2",
      "created_at": "2025-04-30T14:43:00Z",
      "managed_by": "console",
      "updated_at": "2025-04-30T14:46:22Z"
    },
    {
      "id": "org-super-grass-41324851",
      "name": "Super Org Inc",
      "handle": "super-org-inc-org-super-grass-41324851",
      "plan": "scale_v2",
      "created_at": "2025-06-02T16:56:18Z",
      "managed_by": "console",
      "updated_at": "2025-06-02T16:56:18Z"
    }
  ]
}
```

## Account-level metrics

To get global totals for all projects in the organization `org-ocean-art-12345678`, include the `org_id` in the `GET /consumption_history/account` request. Required parameters:

- A start date
- An end date
- A level of granularity

The following example requests hourly metrics between June 30th and July 2nd, 2024:

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/consumption_history/account?from=2024-06-30T15%3A30%3A00Z&to=2024-07-02T15%3A30%3A00Z&granularity=hourly&org_id=org-ocean-art-12345678' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY"
```

The response will provide aggregated hourly consumption metrics, including `active_time_seconds`, `compute_time_seconds`, `written_data_bytes`, and `synthetic_storage_size_bytes`, for each hour between June 30 and July 2.

Details: Response body

For attribute definitions, find the [Retrieve account consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperaccount) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json
{
  "periods": [
    {
      "period_id": "random-period-abcdef",
      "period_plan": "scale",
      "period_start": "2024-06-01T00:00:00Z",
      "consumption": [
        {
          "timeframe_start": "2024-06-30T15:00:00Z",
          "timeframe_end": "2024-06-30T16:00:00Z",
          "active_time_seconds": 147452,
          "compute_time_seconds": 43215,
          "written_data_bytes": 111777920,
          "synthetic_storage_size_bytes": 41371988928
        },
        {
          "timeframe_start": "2024-06-30T16:00:00Z",
          "timeframe_end": "2024-06-30T17:00:00Z",
          "active_time_seconds": 147468,
          "compute_time_seconds": 43223,
          "written_data_bytes": 110483584,
          "synthetic_storage_size_bytes": 41467955616
        }
        // ... More consumption data
      ]
    },
    {
      "period_id": "random-period-ghijkl",
      "consumption": [
        {
          "timeframe_start": "2024-07-01T00:00:00Z",
          "timeframe_end": "2024-07-01T01:00:00Z",
          "active_time_seconds": 145672,
          "compute_time_seconds": 42691,
          "written_data_bytes": 115110912,
          "synthetic_storage_size_bytes": 42194712672
        },
        {
          "timeframe_start": "2024-07-01T01:00:00Z",
          "timeframe_end": "2024-07-01T02:00:00Z",
          "active_time_seconds": 147464,
          "compute_time_seconds": 43193,
          "written_data_bytes": 110078200,
          "synthetic_storage_size_bytes": 42291858520
        }
        // ... More consumption data
      ]
    }
    // ... More periods
  ]
}
```

## Project-level metrics (granular)

You can also get similar daily, hourly, or monthly metrics across a selected time period, but broken out for each individual project that belongs to your organization.

Using the endpoint `GET /consumption_history/projects`, let's use the same start date, end date, and level of granularity as our account-level request: hourly metrics between June 30th and July 2nd, 2024.

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/consumption_history/projects?limit=10&from=2024-06-30T00%3A00%3A00Z&to=2024-07-02T00%3A00%3A00Z&granularity=hourly&org_id=org-ocean-art-12345678' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY"
```

Details: Response body

For attribute definitions, find the [Retrieve project consumption metrics](https://api-docs.neon.tech/reference/getconsumptionhistoryperproject) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.

```json
{
  "projects": [
    {
      "project_id": "random-project-123456",
      "periods": [
        {
          "period_id": "random-period-abcdef",
          "period_plan": "scale",
          "period_start": "2024-06-30T00:00:00Z",
          "consumption": [
            {
              "timeframe_start": "2024-06-30T00:00:00Z",
              "timeframe_end": "2024-06-30T01:00:00Z",
              "active_time_seconds": 147472,
              "compute_time_seconds": 43222,
              "written_data_bytes": 112730864,
              "synthetic_storage_size_bytes": 37000959232
            },
            {
              "timeframe_start": "2024-07-01T00:00:00Z",
              "timeframe_end": "2024-07-01T01:00:00Z",
              "active_time_seconds": 1792,
              "compute_time_seconds": 533,
              "written_data_bytes": 0,
              "synthetic_storage_size_bytes": 0
            }
            // ... More consumption data
          ]
        },
        {
          "period_id": "random-period-ghijkl",
          "period_plan": "scale",
          "period_start": "2024-07-01T09:00:00Z",
          "consumption": [
            {
              "timeframe_start": "2024-07-01T09:00:00Z",
              "timeframe_end": "2024-07-01T10:00:00Z",
              "active_time_seconds": 150924,
              "compute_time_seconds": 44108,
              "written_data_bytes": 114912552,
              "synthetic_storage_size_bytes": 36593552376
            }
            // ... More consumption data
          ]
        }
        // ... More periods
      ]
    }
    // ... More projects
  ]
}
```
## Project-level metrics (for the current billing period)

To get basic billing period-based consumption metrics for each project in the organization `org-ocean-art-12345678`, include `org_id` in the `GET /projects` request for consumption metrics:

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/projects?org_id=org-ocean-art-12345678' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY"
```

See more details about using this endpoint on the [Manage billing with consumption limits](https://neon.com/docs/guides/consumption-limits#retrieving-metrics-for-all-projects) page in our Platform integration guide.

## Metric definitions

- **active_time_seconds** — The number of seconds the project's computes have been active during the period.
- **compute_time_seconds** — The number of CPU seconds used by the project's computes, including computes that have been deleted; for example:
  - A compute that uses 1 CPU for 1 second is equal to `compute_time=1`.
  - A compute that uses 2 CPUs simultaneously for 1 second is equal to `compute_time=2`.
- **written_data_bytes** — The total amount of data written to all of a project's branches.
- **synthetic_storage_size_bytes** — The total space occupied in storage. Synthetic storage size combines the logical data size and Write-Ahead Log (WAL) size for all branches.
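To turn these raw per-second figures into something report-friendly, you can post-process the response with `jq`, as used elsewhere on this page. A small sketch under the assumptions of the account-level example above (same endpoint, query parameters, and `$ORG_API_KEY`):

```bash
# Sum compute_time_seconds across all periods in the account-level response
# and convert the total to compute hours for the queried window.
curl -s 'https://console.neon.tech/api/v2/consumption_history/account?from=2024-06-30T15%3A30%3A00Z&to=2024-07-02T15%3A30%3A00Z&granularity=hourly&org_id=org-ocean-art-12345678' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY" \
  | jq '[.periods[].consumption[].compute_time_seconds] | add / 3600'
```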
---

# Source: https://neon.com/llms/manage-orgs-api.txt

# Manage organizations using the Neon API

> The document details how to manage organizations using the Neon API, including creating, updating, and deleting organizations, and managing organization members and roles within the Neon platform.

## Source

- [Manage organizations using the Neon API HTML](https://neon.com/docs/manage/orgs-api): The original HTML version of this documentation

Learn how to manage Neon Organizations using the Neon API, including managing organization API keys, working with organization members, and handling member invitations.

## Personal vs organization API keys

You can authorize your API requests using either of these methods:

- **Organization API key**: Automatically scopes all requests to your organization
- **Personal API key**: Requires including an `org_id` parameter to specify which organization you're working with

The key difference is in how you structure your API requests. Here's an example of listing projects using both methods:

Using an organization API key:

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/projects' \
  --header "authorization: Bearer $ORG_API_KEY"
```

Using a personal API key:

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/projects?org_id=org-example-12345678' \
  --header "authorization: Bearer $PERSONAL_API_KEY"
```

Both examples retrieve a list of projects, but notice how the personal API key request includes `org_id=org-example-12345678` to specify which organization's projects to list. With an organization API key, this parameter isn't needed because the key itself is already tied to a specific organization.

### Matrix of operations and key types

Some operations require a personal API key from an organization admin and cannot be performed using organization API keys. These operations are marked with ❌ in the matrix below.

| Action | Personal API Key | Organization API Key |
| --- | --- | --- |
| [Create an organization API key](https://neon.com/docs/manage/orgs-api#create-api-keys) | ✅ | ❌ |
| [Get a list of organization API keys](https://neon.com/docs/manage/orgs-api#list-api-keys) | ✅ | ✅ |
| [Revoke an organization API key](https://neon.com/docs/manage/orgs-api#revoke-an-api-key) | ✅ | ✅ |
| [Get organization details](https://neon.com/docs/manage/orgs-api#get-organization-details) | ✅ | ✅ |
| [Get organization members details](https://neon.com/docs/manage/orgs-api#list-members) | ✅ | ✅ |
| [Get organization member details](https://neon.com/docs/manage/orgs-api#get-member-details) | ✅ | ✅ |
| [Update the role for an organization member](https://neon.com/docs/manage/orgs-api#update-member-role) | ✅ | ✅ |
| [Remove member from the organization](https://neon.com/docs/manage/orgs-api#remove-member) | ✅ | ❌ |
| [Get organization invitation details](https://neon.com/docs/manage/orgs-api#list-invitations) | ✅ | ✅ |
| [Create organization invitations](https://neon.com/docs/manage/orgs-api#create-invitations) | ✅ | ❌ |
| [Transfer projects between organizations](https://neon.com/docs/manage/orgs-api#transfer-projects-between-organizations) | ✅ | ❌ |

## Finding your org_id

To find your organization's `org_id`, navigate to your Organization's **Settings** page, where you'll find it under the **General information** section. Copy and use this ID in your API requests.

## Create API keys

There are two types of organization API keys:

- **Organization API keys** — Provide admin-level access to all organization resources, including projects, members, and settings. Only organization admins can create these keys.
- **Project-scoped organization API keys** — Provide limited, member-level access to specific projects within the organization. Any organization member can create a key for any organization-owned project.

The key token is only displayed once at creation time. Copy it immediately and store it securely. If lost, you'll need to revoke the key and create a new one. For detailed instructions, see [Manage API Keys](https://neon.com/docs/manage/api-keys#create-an-organization-api-key).

[Try in API Reference](https://api-docs.neon.tech/reference/createorgapikey)

## List API keys

Lists all API keys for your organization. The response does not include the actual key tokens, as these are only provided when creating a new key.

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys' \
  --header "authorization: Bearer $PERSONAL_API_KEY" | jq
```

Example response:

```json
[
  {
    "id": 123456,
    "name": "my-key-name",
    "created_at": "2024-01-01T12:00:00Z",
    "created_by": {
      "id": "user-abc123de-4567-8fab-9012-3cdef4567890",
      "name": "John Smith",
      "image": "https://avatar.example.com/user.jpg"
    },
    "last_used_at": "2024-01-01T12:30:00Z",
    "last_used_from_addr": "192.0.2.1,192.0.2.2"
  }
]
```

[Try in API Reference](https://api-docs.neon.tech/reference/listorgapikeys)

## Revoke an API key

Revokes the specified organization API key. This action cannot be reversed. You can obtain the `key_id` by listing the API keys for your organization.
```bash
curl --request DELETE \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/api_keys/{key_id}' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $PERSONAL_API_KEY" | jq
```

Example response:

```json
{
  "id": 123456,
  "name": "my-key-name",
  "created_at": "2024-01-01T12:00:00Z",
  "created_by": "user-abc123de-4567-8fab-9012-3cdef4567890",
  "last_used_at": "2024-01-01T12:30:00Z",
  "last_used_from_addr": "192.0.2.1,192.0.2.2",
  "revoked": true
}
```

[Try in API Reference](https://api-docs.neon.tech/reference/revokeorgapikey)

## Get organization details

Retrieves information about your organization, including its name, plan, and creation date.

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}' \
  --header "authorization: Bearer $PERSONAL_API_KEY" | jq
```

Example response:

```json
{
  "id": "org-example-12345678",
  "name": "Example Organization",
  "handle": "example-organization-org-example-12345678",
  "plan": "business",
  "created_at": "2024-01-01T12:00:00Z",
  "managed_by": "console",
  "updated_at": "2024-01-01T12:00:00Z"
}
```

[Try in API Reference](https://api-docs.neon.tech/reference/getorganization)

## List members

Lists all members in your organization. Each entry includes:

- Member ID (`id`): The unique identifier for the member
- User ID (`user_id`): The unique ID of the user's Neon account
- Organization role and join date
- User's email address

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/members' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY" | jq
```

Example response:

```json
{
  "members": [
    {
      "member": {
        "id": "abc123de-4567-8fab-9012-3cdef4567890",
        "user_id": "def456gh-7890-1abc-2def-3ghi4567890j",
        "org_id": "org-example-12345678",
        "role": "admin",
        "joined_at": "2024-01-01T12:00:00Z"
      },
      "user": {
        "email": "user@example.com"
      }
    }
  ]
}
```

[Try in API Reference](https://api-docs.neon.tech/reference/getorganizationmembers)

**Note**: The member ID (`id`) from this response is needed for operations like updating roles or removing members.

## Get member details

Retrieves information about a specific member using their member ID (obtained from the [List members](https://neon.com/docs/manage/orgs-api#list-members) endpoint).

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/members/{member_id}' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY"
```

Example response:

```json
{
  "id": "abc123de-4567-8fab-9012-3cdef4567890",
  "user_id": "def456gh-7890-1abc-2def-3ghi4567890j",
  "org_id": "org-example-12345678",
  "role": "admin",
  "joined_at": "2024-01-01T12:00:00Z"
}
```

[Try in API Reference](https://api-docs.neon.tech/reference/getorganizationmember)

## Update member role

Changes a member's current role in the organization. If using your personal API key, you need to be an admin in the organization to perform this action. Note: you cannot downgrade the role of the organization's only admin.
```bash
curl --request PATCH \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/members/{member_id}' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY" \
  --header 'content-type: application/json' \
  --data '{"role": "admin"}' | jq
```

Example response:

```json
{
  "id": "abc123de-4567-8fab-9012-3cdef4567890",
  "user_id": "def456gh-7890-1abc-2def-3ghi4567890j",
  "org_id": "org-example-12345678",
  "role": "admin",
  "joined_at": "2024-01-01T12:00:00Z"
}
```

[Try in API Reference](https://api-docs.neon.tech/reference/updateorganizationmember)

## Remove member

You must use your personal API key and have admin-level permissions in the organization to use this endpoint. Organization API keys are not supported.

```bash
curl --request DELETE \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/members/{member_id}' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $PERSONAL_API_KEY"
```

[Try in API Reference](https://api-docs.neon.tech/reference/removeorganizationmember)

## List invitations

Retrieves a list of all pending invitations for the organization.

```bash
curl --request GET \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/invitations' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $ORG_API_KEY" | jq
```

Example response:

```json
{
  "invitations": [
    {
      "id": "abc123de-4567-8fab-9012-3cdef4567890",
      "email": "user@example.com",
      "org_id": "org-example-12345678",
      "invited_by": "def456gh-7890-1abc-2def-3ghi4567890j",
      "invited_at": "2024-01-01T12:00:00Z",
      "role": "member"
    }
  ]
}
```

[Try in API Reference](https://api-docs.neon.tech/reference/getorganizationinvitations)

## Create invitations

Creates invitations for new organization members. Each invited user:

- Receives an email notification about the invitation
- If they have an existing Neon account, they automatically join as a member
- If they don't have an account yet, the email invites them to create one

You must use your personal API key and have admin-level permissions in the organization to use this endpoint. Organization API keys are not supported.

```bash
curl --request POST \
  --url 'https://console.neon.tech/api/v2/organizations/{org_id}/invitations' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $PERSONAL_API_KEY" \
  --header 'content-type: application/json' \
  --data '{
    "invitations": [
      {
        "email": "user@example.com",
        "role": "member"
      }
    ]
  }' | jq
```

[Try in API Reference](https://api-docs.neon.tech/reference/createorganizationinvitations)

## Transfer projects between organizations

The API supports transferring projects between organizations. For detailed instructions and examples, see [Transfer projects to an organization](https://neon.com/docs/manage/orgs-project-transfer).

Key requirements:

- Must use a personal API key
- Requires admin permissions in the source organization and at least member permissions in the target

[Try in API Reference](https://api-docs.neon.tech/reference/transferproject)

---

# Source: https://neon.com/llms/manage-orgs-cli.txt

# Manage Organizations using the Neon CLI

> The document details how to manage organizations using the Neon CLI, including creating, listing, and deleting organizations, as well as managing organization members and roles.

## Source

- [Manage Organizations using the Neon CLI HTML](https://neon.com/docs/manage/orgs-cli): The original HTML version of this documentation

Neon's CLI provides an expanding set of commands to manage your organizations.
## Authorization

Use the `auth` command to authenticate your Neon account from the CLI. This command opens a browser where you will be asked to grant the necessary permissions to manage both your personal and organization resources.

Note that authentication is tied to your personal account. Once authenticated, you can access and manage any Organizations that you belong to. See [Auth - CLI](https://neon.com/docs/reference/cli-auth) to learn more.

## List Organizations

The `neon orgs list` command outputs a list of all organizations that the CLI user currently belongs to. This command is useful for quickly identifying the `org_id` associated with each organization, which can be used in other CLI operations.

Example:

```bash
neon orgs list
Organizations
┌────────────────────────┬──────────────────┐
│ Id                     │ Name             │
├────────────────────────┼──────────────────┤
│ org-ocean-art-12345678 │ Example Org      │
└────────────────────────┴──────────────────┘
```

See [Orgs - CLI](https://neon.com/docs/reference/cli-orgs) to learn more.

## Manage projects within an Organization

The Neon CLI `projects` command supports an `--org-id` option. This allows you to list or create projects within a specified organization.

Example: Listing all projects in an organization:

```bash
neon projects list --org-id org-xxxx-xxxx
Projects
┌───────────────────────────┬───────────────────────────┬────────────────────┬──────────────────────┐
│ Id                        │ Name                      │ Region Id          │ Created At           │
├───────────────────────────┼───────────────────────────┼────────────────────┼──────────────────────┤
│ bright-moon-12345678      │ dev-backend-api           │ aws-us-east-2      │ 2024-07-26T11:43:37Z │
├───────────────────────────┼───────────────────────────┼────────────────────┼──────────────────────┤
│ silent-forest-87654321    │ test-integration-service  │ aws-eu-central-1   │ 2024-05-30T22:14:49Z │
├───────────────────────────┼───────────────────────────┼────────────────────┼──────────────────────┤
│ crystal-stream-23456789   │ staging-web-app           │ aws-us-east-2      │ 2024-05-17T13:47:35Z │
└───────────────────────────┴───────────────────────────┴────────────────────┴──────────────────────┘
```

You can include the `--org-id` option to apply the following subcommands specifically to your organization:

- [List projects](https://neon.com/docs/reference/cli-projects#list)
- [Create projects](https://neon.com/docs/reference/cli-projects#create)

See [Projects - CLI](https://neon.com/docs/reference/cli-projects) to learn more.

## Setting Organization Context

To simplify your workflow, the Neon CLI `set-context` command supports setting an organization context. This means you don't have to specify an organization ID every time you run a CLI command. See [set-context - CLI](https://neon.com/docs/reference/cli-set-context) to learn more.
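For example, a usage sketch, assuming the `--org-id` option described in the set-context reference (the org ID is the example value from `neon orgs list` above):

```bash
# Save an organization context so subsequent commands default to this org.
neon set-context --org-id org-ocean-art-12345678

# This now lists the organization's projects without needing --org-id:
neon projects list
```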
---

# Source: https://neon.com/llms/manage-orgs-manage.txt

# Manage Neon Organizations

> The "Manage Neon Organizations" documentation outlines procedures for creating, managing, and configuring organizations within the Neon platform, detailing user roles, permissions, and organizational settings.

## Source

- [Manage Neon Organizations HTML](https://neon.com/docs/manage/orgs-manage): The original HTML version of this documentation

Learn how to manage your organization's projects, invite Members, revise permissions, and oversee billing details. This section explains which specific actions each Member can take based on their assigned roles and permissions.

## Create an organization

To create a new org, use the **Create organization** button in the org switcher in the top navbar. Other than the free org you signed up with, organizations are always on paid plans, so you'll need to select a paid plan and enter billing details. After confirming, you'll be directed to your new organization's **Projects** page, where you can get started creating projects and inviting [members](https://neon.com/docs/manage/orgs-manage#invite-members).

## Invite Members

Only Admins have the authority to invite new Members to the organization. Invitations are issued via email. If a recipient does not have a Neon account, they will receive instructions to create one.

To invite Members:

- Navigate to the **People** page in your Organization.
- Click **Invite member** and enter the email addresses in a comma-separated list.
- Monitor the status of sent invites in the **Pending Invites** section; from here, you can resend or cancel invitations as needed.

**Note** Invites not received?: If invite emails aren't received, they may be in spam or quarantined. Recipients should check these folders and mark Neon emails as safe.

## Set permissions

Permissions within the organization are exclusively managed by Admins. As an Admin:

- You can promote any Member to an Admin, granting them full administrative privileges.
- You can demote any Admin to a regular Member.
- You cannot leave the organization if you are the only Admin. Promote a Member to Admin before you try to leave the org.

## Invite Collaborators

Any member can invite external users to [collaborate](https://neon.com/docs/guides/project-collaboration-guide) on specific projects, for example, to give limited access to a contractor. Members can invite collaborators from a project's **Settings** page. If any project in your organization has collaborators, you'll also see the option to invite and manage collaborators from the organization's **People** page.

Collaborators _do not_ have access to the organization. They access their shared projects by selecting the **Projects shared with me** option in the org switcher.

**Note**: Organization members don't need Collaborator invites as they already have full project access. When projects are transferred to an organization, existing collaborator permissions for organization members are automatically removed.

To invite new Collaborators, click **Invite collaborators** and select the project you want to share, then add a comma-separated list of emails for anyone you want to give access to. These users will receive an email inviting them to the project.

**Note** Invites not received?: If invite emails aren't received, they may be in spam or quarantined. Recipients should check these folders and mark Neon emails as safe.

### Manage Collaborators

Click the More Options menu next to the row in the **Collaborators** table to manage Collaborator access. You have two options:

- **Convert to member** — Admins can promote an external Collaborator to a full Member. When promoted, their collaborator permissions will be automatically removed since they'll have access to all projects as a Member.
- **Remove from project** — All members can revoke the Collaborator's access to the shared project.

## Create and delete projects

All Members can create new projects from the Organization's **Projects** page; however, the organization itself retains ownership of these projects, not the individual user.
- Any Member can create a project under the organization's ownership. - Only Admins can delete projects owned by the organization. ## Manage billing When you create a new organization, you'll choose a plan (Launch, Scale, or Business) for that organization. Each organization manages its own billing and plan. As the Admin for the organization account: - You have full access to edit all billing information. - Promote a Member to Admin if you want to delegate billing management; however, all Admins will have the capability to edit billing details. - While all Members can view the **Billing** page, only Admins can make changes. For detailed information on pricing and plans, refer to [Neon plans](https://neon.com/docs/introduction/plans). ### Downgrade to Free plan You can only have one Free organization per account, and Free orgs are just for personal use (no team members). If you already have a Free org, you can't downgrade another org to Free; you'll see an error if you try. To downgrade, your org must: - Have only one member (just you) - Stay within Free plan limits (storage, projects, branches, etc.) If you need help or think you should be able to downgrade, use the **Request support** option during the downgrade process. [See Neon plans for details.](https://neon.com/docs/introduction/plans) ## Delete an organization Only Admins can delete an Organization. Before doing so, make sure all projects within the Organization are removed. In your Organization's **Settings** page, you'll find the **Delete** section. It will list any actions you need to take before deletion is allowed, such as removing any outstanding projects. Complete any necessary steps. Once cleared, you can go ahead and delete. This action will permanently remove the organization and cancel its associated billing. _It can't be reversed._ ## More actions Here are a couple of additional things you can do with your organization: **passwordless authentication** and **renaming an organization**. ### Passwordless authentication If you want the simplest way to connect to your database from the command line, passwordless authentication using `pg.neon.tech` lets you directly start a `psql` connection with any of your organization's databases. This saves you time versus logging in to the Console and copying your connection string manually. ```bash psql -h pg.neon.tech ``` In the output, you'll get a URL you can paste into your browser. Log in if you need to; once logged in, you'll be asked to choose your personal or organization account, select your project, and then your compute. After that, go back to your terminal and you'll be connected to your selected database. For example: ```bash alexlopez@alex-machine ~ % psql -h pg.neon.tech NOTICE: Welcome to Neon! Authenticate by visiting: https://console.neon.tech/psql_session/secure_token NOTICE: Connecting to database. psql (16.1, server 16.3) SSL connection (secure connection details hidden) Type "help" for help. alexlopez=> ``` ### Rename an organization Only Admins can rename an organization. Go to the **Settings** page under **General information**. Changing the organization name applies globally: the new name will appear for everyone in the organization. --- # Source: https://neon.com/llms/manage-orgs-project-transfer.txt # Transfer projects > The document outlines the process for transferring projects between organizations within Neon, detailing the necessary steps and requirements to ensure a smooth transition.
## Source - [Transfer projects HTML](https://neon.com/docs/manage/orgs-project-transfer): The original HTML version of this documentation You can transfer your projects to any organization you are a member of. You can do this individually from project **Settings**, in bulk from organization **Settings**, or via the Neon API. ## Limits & requirements - Transfer up to **200** projects at a time in the Console, or **400** via the API. - Transfers are limited by the destination org's plan limits. - Requires **Admin** rights in the source org and at least **Member** rights in the destination org. - Projects with GitHub or Vercel integrations cannot be transferred. - Vercel-managed orgs are not supported. - Projects can only be transferred to organizations you belong to, not to personal Neon accounts. ## Transfer a single project Navigate to the **Settings** page of the project you want to transfer, and select **Transfer** from the sidebar. Then use the dialog to select the organization you want to transfer this project into. Since this removes the project from your current org, you need **Admin** rights in this org to move the project (just as you would to delete the project). You only need **Member** access in the destination org. ## Transfer multiple projects at once Use the org switcher in the top navbar to select the Organization that owns the projects you want to move. From Organization **Settings**, select **Transfer projects** from the sidebar and use the dialog to: - Choose the **projects** you want to move - Choose the **org** you want to move them to You'll need **Admin** rights in the source org, and at least **Member** rights in the destination. ## Via API (for automation or large numbers of projects) You can also transfer projects from one org to another using the Neon API: `POST /organizations/{source_org_id}/projects/transfer` **You'll need:** - ✅ [Personal API key (with access to both orgs)](https://neon.com/docs/manage/api-keys#create-a-personal-api-key) - ✅ Admin rights in the source org - ✅ At least Member rights in the destination org - ✅ Compatible billing plans between orgs (for example, projects can move from Scale to Launch but not the other way around) **Example request** ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/organizations/{source_org_id}/projects/transfer' \ --header 'accept: application/json' \ --header "authorization: Bearer $API_KEY" \ --header 'content-type: application/json' \ --data '{ "project_ids": [ "project-id-1", "project-id-2" ], "destination_org_id": "destination-org-id" }' ``` Where: - `source_org_id` (in URL path) is the organization where projects currently reside - `destination_org_id` is the organization receiving the projects - `project_ids` is an array of up to 400 project IDs to transfer ### Response behavior A successful transfer returns a 200 status code with an empty JSON object: ```json {} ``` You can verify the transfer in the Neon Console or by listing the projects in the destination organization via API; a sketch of the API check follows the error responses below. ### Error responses The API may return these errors: - **`406`** – Transfer failed: the target organization has too many projects or its plan is incompatible with the source organization. Reduce the number of projects or upgrade the destination organization's plan. - **`422`** – One or more of the provided project IDs have GitHub or Vercel integrations installed. Transferring integration projects is currently not supported.
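As an illustration of that API check, here is a minimal sketch, assuming the `org_id` query parameter on the [List projects](https://api-docs.neon.tech/reference/listprojects) endpoint (the org ID and API key are placeholders):

```bash
# List projects now owned by the destination org and print their IDs and names
curl --request GET \
  --url 'https://console.neon.tech/api/v2/projects?org_id=destination-org-id' \
  --header 'accept: application/json' \
  --header "authorization: Bearer $API_KEY" | jq '.projects[] | {id, name}'
```

If the transferred project IDs appear in this output, the transfer completed.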
--- # Source: https://neon.com/llms/manage-overview.txt # Overview of the Neon object hierarchy > The document outlines the structure and components of Neon's object hierarchy, detailing how various elements like projects, branches, endpoints, and databases are organized and interrelated within the Neon platform. ## Source - [Overview of the Neon object hierarchy HTML](https://neon.com/docs/manage/overview): The original HTML version of this documentation Managing your Neon environment requires an understanding of the Neon object hierarchy. At the top level, an **Organization** contains one or more **Projects**. Each Project contains **Branches**, which in turn contain **Computes**, **Roles**, and **Databases**. (A diagram illustrating this hierarchy appears in the HTML version of this page.) ## Neon account Your Neon account represents your user profile and is used for authentication, personal settings, and managing personal API keys. You can sign up for a Neon account with an email, GitHub, Google, or partner account. A single Neon account can belong to multiple organizations. **API keys** can be personal (global to your account) or scoped to an organization or project. For more details, see [Manage API keys](https://neon.com/docs/manage/api-keys). ## Organizations Organizations are the top-level containers for projects and resources in Neon. They allow you to organize and manage a team's projects under a single Neon account — with billing, role management, and project transfer capabilities all in one accessible location in the Neon Console. ## Projects A project is a container for all objects except for API keys, which are global and work with any project owned by your Neon account. Branches, computes, roles, and databases belong to a project. A Neon project also defines the region where project resources reside. A Neon account can have multiple projects, but plan limits define the number of projects per Neon account. For more information, see [Manage projects](https://neon.com/docs/manage/projects). ## Default branch Data resides in a branch. Each Neon project is created with a default branch called `production`. This initial branch is also your project's root branch. After creating more branches, you can designate a different branch as your default branch, but the root branch can never be deleted. You can create child branches from any branch in your project. Each branch can contain multiple databases and roles. Plan limits define the number of branches you can create in a project and the amount of data per branch. To learn more, see [Manage branches](https://neon.com/docs/manage/branches). ## R/W computes and Read Replicas A compute is a virtualized computing resource that includes vCPU and memory for running applications. In the context of Neon, a compute runs Postgres. When you create a project in Neon, a primary R/W (read/write) compute is created for the project's default branch. Neon supports both R/W and [Read Replica](https://neon.com/docs/introduction/read-replicas) computes. A branch can have a single primary R/W compute but supports multiple Read Replica computes. To connect to a database that resides on a branch, you must connect via an R/W or Read Replica compute associated with the branch. Your Neon plan defines the resources (vCPU and RAM) available to your R/W and Read Replica computes. For more information, see [Manage computes](https://neon.com/docs/manage/computes). Compute size, autoscaling, and scale to zero are all settings that are configured for R/W and Read Replica computes.
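For example, connecting with `psql` goes through the compute endpoint's host; a minimal sketch with placeholder credentials and hostname:

```bash
# Connect to a branch's database via its R/W compute endpoint (values are hypothetical)
psql "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/neondb?sslmode=require"
```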
## Roles In Neon, roles are Postgres roles. A role is required to create and access a database. A role belongs to a branch. There is a limit of 500 roles per branch. The default branch of a Neon project is created with a role named for your database. For example, if your database is named `neondb`, the project is created with a role named `neondb_owner`. This role is the owner of the database. Any role created via the Neon Console, CLI, or API is created with [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) privileges. For more information, see [Manage roles](https://neon.com/docs/manage/roles). ## Databases As with any standalone instance of Postgres, a database is a container for SQL objects such as schemas, tables, views, functions, and indexes. In Neon, a database belongs to a branch. If you do not specify your own database name when creating a project, the default branch of your project is created with a ready-to-use database named `neondb`. There is a limit of 500 databases per branch. For more information, see [Manage databases](https://neon.com/docs/manage/databases). ## Schemas All databases in Neon are created with a `public` schema, which is the default behavior for any standard PostgreSQL instance. SQL objects are created in the `public` schema by default. For more information about the `public` schema, refer to [The Public schema](https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PUBLIC) in the _PostgreSQL documentation_. --- # Source: https://neon.com/llms/manage-platform-maintenance.txt # Platform maintenance > The "Platform Maintenance" document outlines procedures and schedules for maintaining Neon's database platform, detailing system updates, downtime management, and user notification processes. ## Source - [Platform maintenance HTML](https://neon.com/docs/manage/platform-maintenance): The original HTML version of this documentation Neon occasionally performs essential **platform maintenance** outside of [scheduled updates](https://neon.com/docs/manage/updates) performed on Neon computes. This means that you may experience brief disruptions from time to time for these important updates. Platform maintenance may include any of the following: - Neon infrastructure updates and upgrades (e.g., updates to Neon Kubernetes clusters or compute nodes) - Resource management updates (e.g., rebalancing of compute nodes) - Critical security patches (e.g., addressing a zero-day vulnerability) We strive to avoid disruptions as much as possible, but certain updates may require compute restarts or result in temporary latency for operations like compute starts, queries, or API requests. **Note**: Whenever possible, we perform platform maintenance outside of normal business hours in affected regions to minimize disruption. ## Where to check for maintenance For notification of planned platform maintenance, you can monitor or subscribe to the [Neon Status page](https://neonstatus.com/) for your region. To learn more, see [Neon Status](https://neon.com/docs/introduction/status). If there is ongoing maintenance, you'll see a **Maintenance** indicator at the top of the Neon Console. Clicking on the indicator takes you to the Neon Status page where you can read the maintenance notification. ## Handling disruptions and latency during platform maintenance Most Postgres connection drivers include built-in retry mechanisms that automatically handle short-lived interruptions.
This means that most applications automatically reconnect to a Neon database following a brief disruption. However, if your application has strict availability requirements, make sure your connection settings are configured to allow for connection retries. Check your driver's documentation for options like connection timeouts, retry intervals, and connection pooling strategies. Your configuration should account for occasional disruptions. For related information, see [Build connection timeout handling into your application](https://neon.com/docs/connect/connection-latency#build-connection-timeout-handling-into-your-application). If your application or integration uses the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) or [SDKs](https://neon.com/docs/reference/sdk) that wrap the Neon API, we recommend building in the same type of retry logic.
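As a minimal sketch of what connection-level retry handling can look like, the following shell loop retries a `psql` connection a few times with a simple backoff and sets an explicit `connect_timeout` (the connection string is a placeholder):

```bash
# Retry a transient connection failure a few times before giving up
DATABASE_URL="postgresql://alex:password@ep-example-123456.us-east-2.aws.neon.tech/neondb?sslmode=require&connect_timeout=10"

for attempt in 1 2 3; do
  if psql "$DATABASE_URL" -c 'SELECT 1;' >/dev/null; then
    echo "connected on attempt $attempt"
    break
  fi
  sleep $((attempt * 2))  # simple linear backoff
done
```

Production drivers and ORMs expose the same ideas as configuration (timeouts, retry counts, pooling), so prefer those options over shell loops where available.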
--- # Source: https://neon.com/llms/manage-platform.txt # Platform overview > The "Platform overview" document outlines Neon's cloud-native architecture, detailing its components, such as compute, storage, and control planes, and explains how they interact to manage PostgreSQL databases efficiently. ## Source - [Platform overview HTML](https://neon.com/docs/manage/platform): The original HTML version of this documentation ## Access & collaboration Manage your account, your team, and who can access your project's databases. - [Accounts](https://neon.com/docs/manage/accounts): About Neon account types - [User permissions](https://neon.com/docs/manage/user-permissions): Manage user permissions and access levels - [Organizations](https://neon.com/docs/manage/organizations): Build your team in Neon - [Project collaboration](https://neon.com/docs/guides/project-collaboration-guide): Collaborate on your projects with other users - [Database access](https://neon.com/docs/manage/database-access): Learn how to manage user access to your databases using roles - [API keys](https://neon.com/docs/manage/api-keys): Generate and manage API keys - [Account recovery](https://neon.com/docs/manage/account-recovery): Recover your account and reset your password ## Projects & resources Learn how to manage all aspects of your Neon projects. These topics cover the basics of setting up your projects through the UI (create, edit, delete) as well as practical guidance and best practices around managing project resources. - [Object hierarchy](https://neon.com/docs/manage/overview): Learn about the Neon project and all its resources - [Projects](https://neon.com/docs/manage/projects): Create and manage projects in Neon - [Branches](https://neon.com/docs/manage/branches): Learn about database branching in Neon - [Computes](https://neon.com/docs/manage/computes): Configure and optimize compute resources for your Neon projects - [Roles](https://neon.com/docs/manage/roles): Manage roles within projects and assign permissions - [Databases](https://neon.com/docs/manage/databases): Manage your database from the Console, CLI, or API - [Tables](https://neon.com/docs/guides/tables): Use the Tables page to easily view, edit, and manage your database entries - [Integrations](https://neon.com/docs/manage/integrations): Manage third-party integrations with your Neon project ## Monitoring & observability Monitor your Neon projects to track system health and performance. - [Overview](https://neon.com/docs/introduction/monitoring): Learn about monitoring resources and metrics in Neon - [Monitoring dashboard](https://neon.com/docs/introduction/monitoring-page): Dashboard graphs for monitoring system and database metrics - [System operations](https://neon.com/docs/manage/operations): Track actions taken by the control plane on project resources - [Active queries](https://neon.com/docs/introduction/monitor-active-queries): View and analyze running queries in your database - [Query performance](https://neon.com/docs/introduction/monitor-query-performance): View and analyze query performance for your Neon database - [Datadog](https://neon.com/docs/guides/datadog): Monitor your database with Datadog - [Grafana Cloud](https://neon.com/docs/guides/grafana-cloud): Monitor your database with Grafana Cloud - [OpenTelemetry](https://neon.com/docs/guides/opentelemetry): Monitor your database with OpenTelemetry - [Metrics and logs reference](https://neon.com/docs/reference/metrics-logs): Metrics and logs reference for monitoring - [Better Stack](https://neon.com/guides/betterstack-otel-neon): Monitor Neon with Better Stack using OpenTelemetry integration - [New Relic](https://neon.com/guides/newrelic-otel-neon): Monitor Neon with New Relic using OpenTelemetry integration - [pgAdmin](https://neon.com/docs/introduction/monitor-pgadmin): Monitor your Neon Postgres database with pgAdmin - [PgHero](https://neon.com/docs/introduction/monitor-pghero): Monitor your Neon Postgres database with PgHero ## Security & compliance Learn how Neon secures your projects and data, and explore the security features available for you to use. - [Overview](https://neon.com/docs/security/security-overview): Overview of Neon's security features - [Security reporting](https://neon.com/docs/security/security-reporting): Report security vulnerabilities and incidents - [Compliance](https://neon.com/docs/security/compliance): Learn how Neon complies with various standards - [HIPAA](https://neon.com/docs/security/hipaa): HIPAA compliance with Neon - [Acceptable Use Policy](https://neon.com/docs/security/acceptable-use-policy): Read about Neon's acceptable use policies - [AI use in Neon](https://neon.com/docs/security/ai-use-in-neon): Learn about how AI is used in Neon ## Operations & maintenance - [Backups](https://neon.com/docs/manage/backups): An overview of backup strategies for Neon Postgres - [Backup with pg_dump](https://neon.com/docs/manage/backup-pg-dump): Learn how to create a backup of your Neon database using pg_dump - [Automate pg_dump backups](https://neon.com/docs/manage/backup-pg-dump-automate): Automate backups of your Neon database to S3 with pg_dump and GitHub Actions - [Updates overview](https://neon.com/docs/manage/maintenance-updates-overview): Overview of Neon platform maintenance and compute updates - [Platform maintenance](https://neon.com/docs/manage/platform-maintenance): Find out how Neon manages essential platform maintenance and critical security updates - [Updates](https://neon.com/docs/manage/updates): Learn about updates for Neon computes and Postgres - [Regions](https://neon.com/docs/introduction/regions): Learn about Neon regions and availability --- # Source: https://neon.com/llms/manage-projects.txt # Manage projects > The "Manage projects" documentation outlines the procedures for creating, configuring, and managing projects within the Neon platform, detailing steps for project setup, access control, and resource allocation.
## Source - [Manage projects HTML](https://neon.com/docs/manage/projects): The original HTML version of this documentation In Neon, the project is your main workspace. Within a project, you create branches for different workflows, like environments, features, or previews. Each branch contains its own databases, roles, computes, and replicas. Your [Neon Plan](https://neon.com/docs/introduction/plans) determines how many projects you can create and the resource limits within those projects. ## Default resources When you add a new project, Neon creates the following resources by default: - Two branches are created for you by default: `production` (your main branch for production workloads) and `development` (a child branch for development work). You can create additional child branches from either of these, or from any other branch. For more information, see [Manage branches](https://neon.com/docs/manage/branches). - A single primary read-write compute. This is the compute associated with the branch. For more information, see [Manage computes](https://neon.com/docs/manage/computes). - A Postgres database that resides on the project's default branch. If you did not specify your own database name when creating the project, the database created is named `neondb`. - A Postgres role that is named for your database. For example, if your database is named `neondb`, the project is created with a default role named `neondb_owner`. - Each [Neon plan](https://neon.com/docs/introduction/plans) comes with a specific storage allowance. Beyond this allowance on paid plans, extra usage costs apply. Billing-related allowances aside, Neon projects can support data sizes up to 4 TiB. To increase this limit, [contact the Neon Sales team](https://neon.com/contact-sales). ## Create a project The following instructions describe how to create additional Neon projects. If you are creating your very first Neon project, refer to the instructions in [Playing with Neon](https://neon.com/docs/get-started/signing-up). To create a Neon project: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Click **New Project**. 3. Specify values for **Project Name**, **Postgres version**, **Cloud service provider**, and **Region**. Project names are limited to 64 characters. 4. Click **Create Project**. After creating a project, you are directed to the **Project Dashboard**. **Tip**: You can also use [pg.new](https://pg.new) to create a new Neon Postgres project. Simply visit [pg.new](https://pg.new) and you'll be taken directly to the **Create project** page where you can create your new project. ## View projects To view your projects: 1. Navigate to the [Neon Console](https://console.neon.tech). 1. From the breadcrumb navigation menu at the top-left of the console, select your organization. 1. The **Projects** page lists your projects, including any projects that have been shared with you. ## Project settings Once you open a project, you can use the **Settings** page to manage your project and configure any defaults. The **Settings** page includes these sub-pages: - **General** — Change the name of your project or copy the project ID. - **Compute** — Set the scale to zero and sizing defaults for any new computes you create when branching. - **Instant restore** — Set the restore window to enable instant restore, time travel queries, and branching from past states. - **Updates** — Schedule a time for Postgres and Neon updates. - **Collaborators** — Invite external collaborators to join your Neon project. 
- **Network security** — Configure Neon's IP and Private Networking features for secure access. - **RLS** — Configure Neon Row-Level Security (RLS) to apply row-level security policies to your Neon project. - **Logical replication** — Enable logical replication to replicate data from your Neon project to external data services and platforms. - **Transfer** — Transfer your project from the current organization to another organization you are a member of. - **Delete** — Use with care! This action deletes your entire project and all its objects, and is irreversible. ### General project settings On the **General** page, you can change the name of your project or copy the project ID. The project ID is permanent and cannot be changed. ### Change your project's default compute settings You can change your project's default compute settings on the **Compute** page. These settings determine the compute resources allocated to any new branches or read replicas you create. **Important**: Changes to default compute settings only affect **newly created computes**. Existing computes, including those on your primary branch and read replicas, will not be automatically updated. To change settings for existing computes, you need to update them individually through the **Branches** page. A Compute Unit (CU) represents 1 vCPU with 4 GB of RAM. New branches inherit compute settings from your first branch, but you can change these defaults to: - Set smaller compute sizes for preview deployments and development branches - Standardize settings across read replicas - Optimize resource usage and costs for non-production workloads Neon supports two compute configurations: - **Fixed size:** Select a fixed compute size ranging from .25 CUs to 56 CUs - **Autoscaling:** Specify minimum and maximum compute sizes (from .25 CU to 16 CUs) to automatically scale based on workload. Note: When setting the maximum above 10 CUs, the minimum must be at least max/8 (for example, a 16 CU maximum requires a minimum of at least 2 CUs). For more information, see [Autoscaling](https://neon.com/docs/introduction/autoscaling). ### Configure your restore window By default, Neon retains a history of changes for all branches in your project, enabling features like: - [Instant restore](https://neon.com/docs/introduction/branch-restore) for recovering lost data - [Time Travel](https://neon.com/docs/guides/time-travel-assist) queries for investigating data issues If you extend this restore window, you'll expand the range of data recovery and query options, but note that this will also increase your instant restore storage. Also note that adjusting the restore window affects _all_ branches in your project. To configure the restore window for a project: 1. Select a project in the Neon Console. 2. On your **Project Dashboard**, select **Settings**. 3. Select **Restore window**. 4. Use the slider to select the restore window. 5. Click **Save**. For information about restore window limits and default settings, see [Neon plans](https://neon.com/docs/introduction/plans). ### Schedule updates for your project To keep your Neon computes and Postgres instances up to date, Neon automatically applies scheduled updates that include Postgres minor version upgrades, security patches, and new features. Updates are applied to the computes within your project. They require a quick compute restart, take only a few seconds, and typically occur weekly. On the Free plan, updates are automatically scheduled. On paid plans, you can set a preferred day and time for updates.
Restarts occur within your selected time window and take only a few seconds. To set your project's update schedule or view currently scheduled updates: 1. Go to **Settings** > **Updates**. 1. Choose a day of the week and an hour. Updates will occur within this time window. For more information, see [Updates](https://neon.com/docs/manage/updates). ### Invite collaborators to a project Neon's project collaboration feature allows you to invite external Neon accounts to collaborate on a Neon project. **Note**: Organization members cannot be added as collaborators to organization-owned projects since they already have access to all projects through their organization membership. To invite collaborators to a Neon project: 1. In the Neon Console, select a project. 1. Select **Settings**. 1. Select **Collaborators**. 1. Select **Invite** and enter the email address of the account you want to collaborate with. 1. Click **Invite**. The email you specify is added to the list of **Collaborators**. The Neon account associated with that email address is granted full access to the project, with the exception of privileges required to delete the project. This account can also invite other Neon users to the project. When that user logs in to Neon, the project they were invited to is listed on their **Projects** page under **Shared with you**. The costs associated with projects being collaborated on are charged to the Neon account that owns the project. For example, if you invite another Neon user account to a project you own, any usage incurred by that user within your project is billed to your Neon account, not theirs. For additional information, refer to our [Project collaboration guide](https://neon.com/docs/guides/project-collaboration-guide). ### Configure IP Allow The IP Allow feature provides an added layer of security for your data, restricting access to the branch where your database resides to only those IP addresses that you specify. In Neon, the IP allowlist is applied to all branches by default. Optionally, you can allow unrestricted access to your project's non-protected branches. For instance, you might want to restrict access to protected branches to a handful of trusted IPs while allowing unrestricted access to your development branches. By default, Neon allows IP addresses from `0.0.0.0`, which means that Neon accepts connections from any IP address. Once you configure IP Allow by adding IP addresses or ranges, only those IP addresses will be allowed to access Neon. **Note**: Neon projects provisioned on AWS support both [IPv4](https://en.wikipedia.org/wiki/Internet_Protocol_version_4) and [IPv6](https://en.wikipedia.org/wiki/IPv6) addresses. Neon projects provisioned on Azure currently only support IPv4. Tab: Neon Console To configure an allowlist: 1. Select a project in the Neon Console. 2. On the **Project Dashboard**, select **Settings**. 3. Select **Network security**. 4. Under **IP Allow**, specify the IP addresses you want to permit. Separate multiple entries with commas. 5. Optionally, under **Branch access**, select **Restrict IP Access to protected branches only** to restrict access to only the branches you have designated as protected. 6. Click **Save changes**. Tab: CLI The [Neon CLI ip-allow command](https://neon.com/docs/reference/cli-ip-allow) supports IP Allow configuration. For example, the following `add` command adds IP addresses to the allowlist for an existing Neon project. Separate multiple entries with a space; no comma delimiter is required.
```bash neon ip-allow add 203.0.113.0 203.0.113.1 ┌─────────────────────┬─────────────────────┬──────────────┬─────────────────────┐ │ Id │ Name │ IP Addresses │ Protected Only │ ├─────────────────────┼─────────────────────┼──────────────┼─────────────────────┤ │ wispy-haze-26469780 │ wispy-haze-26469780 │ 203.0.113.0 │ false │ │ │ │ 203.0.113.1 │ │ └─────────────────────┴─────────────────────┴──────────────┴─────────────────────┘ ``` To apply an IP allowlist to protected branches only, you can use the `--protected-only` option: ```bash neon ip-allow add 203.0.113.1 --protected-only ``` To reverse that setting, use `--protected-only false`. ```bash neon ip-allow add 203.0.113.1 --protected-only false ``` Tab: API The [Create project](https://api-docs.neon.tech/reference/createproject) and [Update project](https://api-docs.neon.tech/reference/updateproject) methods support **IP Allow** configuration. For example, the following API call configures **IP Allow** for an existing Neon project. Separate multiple entries with commas. Each entry must be quoted. You can set the `protected_branches_only` option to `true` to apply the allowlist to protected branches only, or `false` to apply it to all branches in your Neon project. ```bash curl -X PATCH \ https://console.neon.tech/api/v2/projects/falling-salad-31638542 \ -H 'accept: application/json' \ -H "authorization: Bearer $NEON_API_KEY" \ -H 'content-type: application/json' \ -d ' { "project": { "settings": { "allowed_ips": { "protected_branches_only": true, "ips": [ "203.0.113.0", "203.0.113.1" ] } } } } ' | jq ``` #### How to specify IP addresses You can define an allowlist with individual IP addresses, IP ranges, or [CIDR notation](https://neon.com/docs/reference/glossary#cidr-notation). A combination of these options is also permitted. Multiple entries, whether they are the same or of different types, must be separated by a comma. Whitespace is ignored. - **Add individual IP addresses**: You can add individual IP addresses that you want to allow. This is useful for granting access to specific users or devices. This example represents a single IP address: ```text 192.0.2.1 ``` - **Define IP ranges**: For broader access control, you can define IP ranges. This is useful for allowing access from a company network or a range of known IPs. This example range includes all IP addresses from `198.51.100.20` to `198.51.100.50`: ```text 198.51.100.20-198.51.100.50 ``` - **Use CIDR notation**: For more advanced control, you can use [CIDR (Classless Inter-Domain Routing) notation](https://neon.com/docs/reference/glossary#cidr-notation). This is a compact way of defining a range of IPs and is useful for larger networks or subnets. Using CIDR notation can be advantageous when managing access to branches with numerous potential users, such as in a large development team or a company-wide network. This CIDR notation example represents all 256 IP addresses from `203.0.113.0` to `203.0.113.255`. ```text 203.0.113.0/24 ``` - **Use IPv6 addresses**: Neon projects provisioned on AWS also support specifying IPv6 addresses. For example: ```text 2001:DB8:5432::/48 ``` **Note**: IPv6 is not yet supported for projects provisioned on Azure. A combined example using all of the options above, specified as a comma-separated list, would appear similar to the following: ```text 192.0.2.1, 198.51.100.20-198.51.100.50, 203.0.113.0/24, 2001:DB8:5432::/48 ``` This list combines individual IP addresses, a range of IP addresses, a CIDR block, and an IPv6 address.
It illustrates how different types of IP specifications can be used together in a single allowlist configuration, offering a flexible approach to access control. #### Update an IP Allow configuration You can update your IP Allow configuration via the Neon Console or API as described in [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). Replace the current configuration with the new configuration. For example, if your IP Allow configuration currently allows access from IP address `192.0.2.1`, and you want to extend access to IP address `192.0.2.2`, specify both addresses in your new configuration: `192.0.2.1, 192.0.2.2`. You cannot append values to an existing configuration. You can only replace an existing configuration with a new one. The Neon CLI provides an `ip-allow` command with `add`, `reset`, and `remove` options that you can use to update your IP Allow configuration. For instructions, refer to [Neon CLI commands — ip-allow](https://neon.com/docs/reference/cli-ip-allow). #### Remove an IP Allow configuration To remove an IP Allow configuration entirely and return to the default "no IP restrictions" (`0.0.0.0`) configuration: Tab: Neon Console 1. Select a project in the Neon Console. 2. On the **Project Dashboard**, select **Settings**. 3. Select **IP Allow**. 4. Clear the **Allowed IP addresses and ranges** field. 5. If applicable, clear the **Restrict IP Access to protected branches only** checkbox. 6. Click **Save changes**. Tab: CLI The [Neon CLI ip-allow command](https://neon.com/docs/reference/cli-ip-allow) supports removing an IP Allow configuration. To do so, run the `reset` subcommand without specifying any IP address values: ```bash neon ip-allow reset ``` Tab: API Specify the `ips` option with an empty array. If applicable, also include `"protected_branches_only": false`. ```bash curl -X PATCH \ https://console.neon.tech/api/v2/projects/falling-salad-31638542 \ -H 'accept: application/json' \ -H "authorization: Bearer $NEON_API_KEY" \ -H 'content-type: application/json' \ -d ' { "project": { "settings": { "allowed_ips": { "protected_branches_only": false, "ips": [] } } } } ' ``` ### Enable the Data API The Data API turns your database tables into a REST API, making it easy to query your data from client applications. When you enable the Data API, it automatically creates `authenticated` and `anonymous` roles and sets up the necessary permissions for secure client-side access. For setup instructions and examples, see the [Data API documentation](https://neon.com/docs/data-api/get-started). ### Enable logical replication Logical replication lets you replicate data changes from Neon to external data services and platforms, including data warehouses, analytical database services, messaging platforms, event-streaming platforms, and external Postgres databases. **Important**: Enabling logical replication modifies the PostgreSQL `wal_level` configuration parameter, changing it from `replica` to `logical` for all databases in your Neon project. Once the `wal_level` setting is changed to `logical`, it cannot be reverted. Enabling logical replication also restarts all computes in your Neon project, meaning that active connections will be dropped and have to reconnect. To enable logical replication in Neon: 1. Select your project in the Neon Console. 2. On the **Project Dashboard**, select **Settings**. 3. Select **Logical replication**. 4. Click **Enable** to enable logical replication.
You can verify that logical replication is enabled by running the following query: ```sql SHOW wal_level; wal_level ----------- logical ``` After enabling logical replication, the next steps involve creating publications on your replication source database in Neon and configuring subscriptions on the destination system or service. To get started, refer to our [logical replication guides](https://neon.com/docs/guides/logical-replication-guide). ### Delete a project Deleting a project is a permanent action, which also deletes any computes, branches, databases, and roles that belong to the project. To delete a project: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select the project that you want to delete. 3. Select **Settings**. 4. Select **Delete**. **Important**: If you are on any of Neon's paid plans, deleting all your Neon projects won't stop monthly billing. To avoid charges, you also need to downgrade to the Free plan. You can do so from the [Billing](https://console.neon.tech/app/billing#change_plan) page in the Neon Console. ## Manage projects with the Neon API Project actions performed in the Neon Console can also be performed using the Neon API. The following examples demonstrate how to create, view, and delete projects using the Neon API. For other project-related API methods, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). **Note**: The API examples that follow may not show all of the user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). The `jq` option specified in each example is an optional third-party tool that formats the `JSON` response, making it easier to read. For information about this utility, see [jq](https://stedolan.github.io/jq/). ### Prerequisites A Neon API request requires an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). In the cURL examples shown below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request. **Note**: To learn more about the types of API keys you can create — personal, organization, or project-scoped — see [Manage API Keys](https://neon.com/docs/manage/api-keys). ### Create a project with the API The following Neon API method creates a project. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createproject). ```http POST /projects ``` The API method appears as follows when specified in a cURL command. The `myproject` name value is a user-specified name for the project. ```bash curl 'https://console.neon.tech/api/v2/projects' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "project": { "name": "myproject" } }' | jq ``` The response includes information about the role, the database, the default branch, and the primary read-write compute that is created with the project. Details: Response body For attribute definitions, find the [Create project](https://api-docs.neon.tech/reference/createproject) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section.
```json { "project": { "data_storage_bytes_hour": 0, "data_transfer_bytes": 0, "written_data_bytes": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "cpu_used_sec": 0, "id": "ep-cool-darkness-123456", "platform_id": "aws", "region_id": "aws-us-east-1", "name": "myproject", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [7], "start_time": "06:00", "end_time": "07:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-east-1.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z", "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "91cbdacd-06c2-49f5-bacf-78b9463c81ca" }, "connection_uris": [ { "connection_uri": "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.c-2.us-east-1.aws.neon.tech/dbname?sslmode=require&channel_binding=require", "connection_parameters": { "database": "dbname", "password": "AbC123dEf", "role": "alex", "host": "ep-cool-darkness-123456.c-2.us-east-1.aws.neon.tech", "pooler_host": "ep-cool-darkness-123456-pooler.c-2.us-east-1.aws.neon.tech" } } ], "roles": [ { "branch_id": "br-gentle-salad-ad7v90qq", "name": "neondb_owner", "password": "npg_Se0ECYqaJ5jA", "protected": false, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z" } ], "databases": [ { "id": 5140981, "branch_id": "br-gentle-salad-ad7v90qq", "name": "neondb", "owner_name": "neondb_owner", "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z" } ], "operations": [ { "id": "cacca1d4-ad0e-46dc-ae82-886ffb96889d", "project_id": "ep-cool-darkness-123456", "branch_id": "br-gentle-salad-ad7v90qq", "action": "create_timeline", "status": "running", "failures_count": 0, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z", "total_duration_ms": 0 }, { "id": "1df43d11-5c07-4de1-9440-ac09d305fdf3", "project_id": "ep-cool-darkness-123456", "branch_id": "br-gentle-salad-ad7v90qq", "endpoint_id": "ep-cool-darkness-123456", "action": "start_compute", "status": "scheduling", "failures_count": 0, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z", "total_duration_ms": 0 } ], "branch": { "id": "br-gentle-salad-ad7v90qq", "project_id": "ep-cool-darkness-123456", "name": "main", "current_state": "init", "pending_state": "ready", "state_changed_at": "2025-08-04T05:15:41Z", "creation_source": "console", "primary": true, "default": true, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z", "init_source": "parent-data" }, "endpoints": [ { "host": "ep-cool-darkness-123456.c-2.us-east-1.aws.neon.tech", "id": "ep-cool-darkness-123456", "project_id": "ep-cool-darkness-123456", "branch_id": "br-gentle-salad-ad7v90qq", "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "region_id": "aws-us-east-1", "type": "read_write", "current_state": "init", "pending_state": 
"active", "settings": {}, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "creation_source": "console", "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:15:41Z", "proxy_host": "c-2.us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" } ] } ``` ### List projects with the API The following Neon API method lists projects for your Neon account. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/listprojects). ```http GET /projects ``` The API method appears as follows when specified in a cURL command: ```bash curl 'https://console.neon.tech/api/v2/projects' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` Details: Response body For attribute definitions, find the [List projects](https://api-docs.neon.tech/reference/listprojects) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "projects": [ { "id": "frosty-tree-10754091", "platform_id": "aws", "region_id": "aws-ap-southeast-1", "name": "personal_projects", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 2, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [4], "start_time": "15:00", "end_time": "16:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "ap-southeast-1.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "active_time": 1260, "cpu_used_sec": 319, "creation_source": "console", "created_at": "2024-11-08T17:20:01Z", "updated_at": "2025-08-03T01:16:18Z", "synthetic_storage_size": 96929448, "quota_reset_at": "2025-09-01T00:00:00Z", "owner_id": "91cbdacd-06c2-49f5-bacf-78b9463c81ca", "compute_last_active_at": "2025-08-03T01:16:18Z", "history_retention_seconds": 86400 }, { "id": "lingering-grass-54827563", "platform_id": "aws", "region_id": "aws-ap-southeast-1", "name": "brizai", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 2, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [1], "start_time": "16:00", "end_time": "17:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "ap-southeast-1.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "active_time": 0, "cpu_used_sec": 0, "creation_source": "console", "created_at": "2024-10-28T16:26:49Z", "updated_at": "2025-08-01T00:34:48Z", "synthetic_storage_size": 31082816, "quota_reset_at": "2025-09-01T00:00:00Z", "owner_id": "91cbdacd-06c2-49f5-bacf-78b9463c81ca", "compute_last_active_at": "2025-02-14T09:51:30Z", "history_retention_seconds": 86400 } ], "unavailable_project_ids": [], "pagination": { "cursor": "lingering-grass-54827563" }, "applications": { "frosty-tree-10754091": ["vercel"] }, "integrations": { "frosty-tree-10754091": ["vercel"] } } ``` ### Update a project with the API The 
following Neon API method updates the specified project. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/updateproject). ```http PATCH /projects/{project_id} ``` The API method appears as follows when specified in a cURL command. The `project_id` is a required parameter. The example changes the project `name` to `project1`. ```bash curl -X PATCH 'https://console.neon.tech/api/v2/projects/ep-cool-darkness-123456' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "project": { "name": "project1" } }' ``` Details: Response body For attribute definitions, find the [Update project](https://api-docs.neon.tech/reference/updateproject) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "project": { "data_storage_bytes_hour": 35697544, "data_transfer_bytes": 13444, "written_data_bytes": 34595496, "compute_time_seconds": 89, "active_time_seconds": 348, "cpu_used_sec": 89, "id": "ep-cool-darkness-123456", "platform_id": "aws", "region_id": "aws-us-east-1", "name": "project1", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [7], "start_time": "06:00", "end_time": "07:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-east-1.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T05:55:58Z", "synthetic_storage_size": 35697544, "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "91cbdacd-06c2-49f5-bacf-78b9463c81ca", "compute_last_active_at": "2025-08-04T05:15:47Z" }, "operations": [] } ``` ### Delete a project with the API The following Neon API method deletes the specified project. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/deleteproject). ```http DELETE /projects/{project_id} ``` The API method appears as follows when specified in a cURL command. The `project_id` is a required parameter. ```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/ep-cool-darkness-123456' \ -H 'accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" ``` Details: Response body For attribute definitions, find the [Delete project](https://api-docs.neon.tech/reference/deleteproject) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. 
```json { "project": { "data_storage_bytes_hour": 35697544, "data_transfer_bytes": 13444, "written_data_bytes": 34595496, "compute_time_seconds": 89, "active_time_seconds": 348, "cpu_used_sec": 89, "id": "ep-cool-darkness-123456", "platform_id": "aws", "region_id": "aws-us-east-1", "name": "project2", "provisioner": "k8s-neonvm", "default_endpoint_settings": { "autoscaling_limit_min_cu": 0.25, "autoscaling_limit_max_cu": 0.25, "suspend_timeout_seconds": 0 }, "settings": { "allowed_ips": { "ips": [], "protected_branches_only": false }, "enable_logical_replication": false, "maintenance_window": { "weekdays": [7], "start_time": "06:00", "end_time": "07:00" }, "block_public_connections": false, "block_vpc_connections": false, "hipaa": false }, "pg_version": 17, "proxy_host": "c-2.us-east-1.aws.neon.tech", "branch_logical_size_limit": 512, "branch_logical_size_limit_bytes": 536870912, "store_passwords": true, "creation_source": "console", "history_retention_seconds": 86400, "created_at": "2025-08-04T05:15:41Z", "updated_at": "2025-08-04T06:10:55Z", "synthetic_storage_size": 35697544, "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "91cbdacd-06c2-49f5-bacf-78b9463c81ca", "compute_last_active_at": "2025-08-04T05:15:47Z" } } ``` --- # Source: https://neon.com/llms/manage-roles.txt # Manage roles > The "Manage roles" documentation outlines the procedures for creating, modifying, and deleting user roles within the Neon database, facilitating precise access control and user management. ## Source - [Manage roles HTML](https://neon.com/docs/manage/roles): The original HTML version of this documentation In Neon, roles are Postgres roles. Each Neon project is created with a Postgres role that is named for your database. For example, if your database is named `neondb`, the project is created with a role named `neondb_owner`. This role owns the database that is created in your Neon project's default branch. Your Postgres role and roles created in the Neon Console, API, and CLI are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role. Roles created with SQL from clients like [psql](https://neon.com/docs/connect/query-with-psql-editor), [pgAdmin](https://www.pgadmin.org/), or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) are only granted the basic [public schema privileges](https://neon.com/docs/manage/database-access#public-schema-privileges) granted to newly created roles in a standalone Postgres installation. These users must be selectively granted permissions for each database object. For more information, see [Manage database access](https://neon.com/docs/manage/database-access). **Note**: Neon is a managed Postgres service, so you cannot access the host operating system, and you can't connect using the Postgres `superuser` account like you can in a standalone Postgres installation. You can create roles in a project's default branch or child branches. Neon enforces a limit of 500 roles per branch. In Neon, roles belong to a branch, which could be your production branch or a child branch. When you create a child branch, roles in the parent branch are duplicated in the child branch. For example, if role `alex` exists in the parent branch, role `alex` is copied to the child branch when the child branch is created. The only time this does not occur is when you create a branch that only includes data up to a particular point in time. 
If the role was created in the parent branch after that point in time, it is not duplicated in the child branch. Neon supports creating and managing roles from the following interfaces: - [Neon Console](https://neon.com/docs/manage/roles#manage-roles-in-the-neon-console) - [Neon CLI](https://neon.com/docs/manage/roles#manage-roles-with-the-neon-cli) - [Neon API](https://neon.com/docs/manage/roles#manage-roles-with-the-neon-api) - [SQL](https://neon.com/docs/manage/roles#manage-roles-with-sql) ## The neon_superuser role Roles created in the Neon Console, CLI, or API, including the role created with a Neon project, are granted membership in the `neon_superuser` role. Users cannot log in as `neon_superuser`, but they inherit the privileges assigned to this role. The privileges and predefined role memberships granted to `neon_superuser` include: - `CREATEDB`: Provides the ability to create databases. - `CREATEROLE`: Provides the ability to create new roles (which also means it can alter and drop roles). - `BYPASSRLS`: Provides the ability to bypass row-level security (RLS) policies. This attribute is only included in `neon_superuser` roles in projects created after the [August 15, 2023 release](https://neon.com/docs/changelog/2023-08-15-storage-and-compute). - `NOLOGIN`: The role cannot be used to log in to the Postgres server. Neon is a managed Postgres service, so you cannot access the host operating system directly. - `pg_read_all_data`: A predefined Postgres role that provides the ability to read all data (tables, views, sequences), as if having `SELECT` rights on those objects, and `USAGE` rights on all schemas. - `pg_write_all_data`: A predefined Postgres role that provides the ability to write all data (tables, views, sequences), as if having `INSERT`, `UPDATE`, and `DELETE` rights on those objects, and `USAGE` rights on all schemas. - `REPLICATION`: Provides the ability to connect to a Postgres server in replication mode and create or drop replication slots. - `pg_create_subscription`: A predefined Postgres role that lets users with `CREATE` permission on the database issue `CREATE SUBSCRIPTION`. The `pg_create_subscription` role is only available as of Postgres 16. The `neon_superuser` role in Postgres 14 and 15 can issue `CREATE SUBSCRIPTION` with only `CREATE` permission on the database. - `pg_monitor`: A predefined Postgres role that provides read/execute privileges on various Postgres monitoring views and functions. The `neon_superuser` role also has `WITH ADMIN` on the `pg_monitor` role, which enables granting the `pg_monitor` role to other Postgres roles. - `EXECUTE` privilege on the `pg_stat_statements_reset()` function that is part of the `pg_stat_statements` extension. This privilege was introduced with the January 12, 2024 release. If you installed the `pg_stat_statements` extension before this release, drop and recreate the `pg_stat_statements` extension to enable this privilege. See [Install an extension](https://neon.com/docs/extensions/pg-extensions#install-an-extension). - `pg_signal_backend`: The `neon_superuser` role is granted the `pg_signal_backend` privilege, which allows it to cancel (terminate) backend sessions belonging to roles that are not members of `neon_superuser`. The `WITH ADMIN OPTION` allows `neon_superuser` to grant the `pg_signal_backend` role to other users/roles. - `GRANT ALL ON TABLES` and `WITH GRANT OPTION` on the `public` schema. - `GRANT ALL ON SEQUENCES` and `WITH GRANT OPTION` on the `public` schema.
- `CREATE EVENT TRIGGER`, `ALTER EVENT TRIGGER`, `DROP EVENT TRIGGER`. The `ALTER EVENT TRIGGER` command does not allow changing the function associated with the event trigger. You can think of roles with `neon_superuser` privileges as administrator roles. If you require roles with limited privileges, such as a read-only role, you can create those roles from an SQL client. For more information, see [Manage database access](https://neon.com/docs/manage/database-access). **Note**: Creating a database with the `neon_superuser` role, altering a database to have owner `neon_superuser`, and altering the `neon_superuser` role itself are _not_ permitted. This `NOLOGIN` role is not intended to be used directly or modified. ## Manage roles in the Neon Console This section describes how to create, view, and delete roles in the Neon Console. All roles created in the Neon Console are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role. ### Create a role To create a role: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to create the role. 5. On the **Roles & Databases** tab, click **Add role**. 6. In the role creation modal, specify a role name. The branch is pre-selected. 7. Click **Create**. The role is created and you are provided with the password for the role. **Note**: Role names cannot exceed 63 characters, and some names are not permitted. See [Reserved role names](https://neon.com/docs/manage/roles#reserved-role-names). ### Delete a role Deleting a role is a permanent action that cannot be undone. You cannot delete a role that owns a database; you must delete the database before deleting the role that owns it. To delete a role: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the branch where you want to delete a role. 5. On the **Roles & Databases** tab, select **Delete role** from the role menu. 6. On the confirmation modal, click **Delete**. ### Reset a password To reset a role's password: 1. Navigate to the [Neon Console](https://console.neon.tech). 2. Select a project. 3. Select **Branches**. 4. Select the role's branch. 5. On the **Roles & Databases** tab, select **Reset password** from the role menu. 6. On the **Reset password** modal, click **Reset**. A reset password modal is displayed with your new password. **Note**: Resetting a password in the Neon Console resets the password to a generated value. To set your own password value, you can reset the password using the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or an SQL client like [psql](https://neon.com/docs/connect/query-with-psql-editor) with the following syntax: ```sql ALTER USER user_name WITH PASSWORD 'new_password'; ``` For password requirements, see [Manage roles with SQL](https://neon.com/docs/manage/roles#manage-roles-with-sql). ## Manage roles with the Neon CLI The Neon CLI supports creating and deleting roles. For instructions, see [Neon CLI commands — roles](https://neon.com/docs/reference/cli-roles). Roles created with the Neon CLI are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role. ## Manage roles with the Neon API Role actions performed in the Neon Console can also be performed using Neon API role methods.
The following examples demonstrate how to create, view, reset passwords for, and delete roles using the Neon API. For other role-related methods, refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Roles created with the Neon API are granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role. In Neon, roles belong to branches, which means that when you create a role, it is created in a branch. Role-related requests are therefore performed using branch API methods. **Note**: The API examples that follow may not show all user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). The `jq` command used in each example is an optional third-party tool that formats the JSON response, making it easier to read. For information about this utility, see [jq](https://stedolan.github.io/jq/). ### Prerequisites A Neon API request requires an API key. For information about obtaining an API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). In the cURL examples shown below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request. **Note**: To learn more about the types of API keys you can create — personal, organization, or project-scoped — see [Manage API Keys](https://neon.com/docs/manage/api-keys). ### Create a role with the API The following Neon API method creates a role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/createprojectbranchrole). ```http POST /projects/{project_id}/branches/{branch_id}/roles ``` **Note**: Role names cannot exceed 63 characters, and some role names are not permitted. See [Reserved role names](https://neon.com/docs/manage/roles#reserved-role-names). The API method appears as follows when specified in a cURL command. The `project_id` and `branch_id` are required parameters, and the role `name` is a required attribute. The length of a role name is limited to 63 bytes. ```bash curl 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-morning-meadow-afu2s1jl/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" \ -H 'Content-Type: application/json' \ -d '{ "role": { "name": "alex" } }' | jq ``` Details: Response body For attribute definitions, find the [Create role](https://api-docs.neon.tech/reference/createprojectbranchrole) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "role": { "branch_id": "br-morning-meadow-afu2s1jl", "name": "alex", "password": "npg_A9xYoejTz6iQ", "protected": false, "created_at": "2025-08-04T07:47:05Z", "updated_at": "2025-08-04T07:47:05Z" }, "operations": [ { "id": "9c61fc28-c89e-4b25-ad5c-8777742e66a3", "project_id": "dry-heart-13671059", "branch_id": "br-morning-meadow-afu2s1jl", "endpoint_id": "ep-holy-heart-afbmgcfx", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-08-04T07:47:05Z", "updated_at": "2025-08-04T07:47:05Z", "total_duration_ms": 0 } ] } ``` ### List roles with the API The following Neon API method lists roles for the specified branch.
To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/listprojectbranchroles). ```http GET /projects/{project_id}/branches/{branch_id}/roles ``` The API method appears as follows when specified in a cURL command. The `project_id` and `branch_id` are required parameters. ```bash curl 'https://console.neon.tech/api/v2/projects/hidden-cell-763301/branches/br-blue-tooth-671580/roles' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` Details: Response body For attribute definitions, find the [List roles](https://api-docs.neon.tech/reference/listprojectbranchroles) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "roles": [ { "branch_id": "br-blue-tooth-671580", "name": "daniel", "protected": false, "created_at": "2023-07-09T17:01:34Z", "updated_at": "2023-07-09T17:01:34Z" }, { "branch_id": "br-blue-tooth-671580", "name": "alex", "protected": false, "created_at": "2023-07-13T06:42:55Z", "updated_at": "2023-07-13T14:48:29Z" } ] } ``` ### Reset a password with the API The following Neon API method resets the password for the specified role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/resetprojectbranchrolepassword). ```http POST /projects/{project_id}/branches/{branch_id}/roles/{role_name}/reset_password ``` The API method appears as follows when specified in a cURL command. The `project_id`, `branch_id`, and `role_name` are required parameters. ```bash curl -X 'POST' \ 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-morning-meadow-afu2s1jl/roles/alex/reset_password' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` Details: Response body For attribute definitions, find the [Reset role password](https://api-docs.neon.tech/reference/resetprojectbranchrolepassword) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "role": { "branch_id": "br-morning-meadow-afu2s1jl", "name": "alex", "password": "npg_iDKnwMW7bUg5", "protected": false, "created_at": "2025-08-04T07:47:05Z", "updated_at": "2025-08-04T07:51:10Z" }, "operations": [ { "id": "23b3db33-d36a-45bf-9fda-0e73b5b272e5", "project_id": "dry-heart-13671059", "branch_id": "br-morning-meadow-afu2s1jl", "endpoint_id": "ep-holy-heart-afbmgcfx", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-08-04T07:51:10Z", "updated_at": "2025-08-04T07:51:10Z", "total_duration_ms": 0 } ] } ``` ### Delete a role with the API The following Neon API method deletes the specified role. To view the API documentation for this method, refer to the [Neon API reference](https://api-docs.neon.tech/reference/deleteprojectbranchrole). ```http DELETE /projects/{project_id}/branches/{branch_id}/roles/{role_name} ``` The API method appears as follows when specified in a cURL command. The `project_id`, `branch_id`, and `role_name` are required parameters. 
```bash curl -X 'DELETE' \ 'https://console.neon.tech/api/v2/projects/dry-heart-13671059/branches/br-morning-meadow-afu2s1jl/roles/alex' \ -H 'Accept: application/json' \ -H "Authorization: Bearer $NEON_API_KEY" | jq ``` Details: Response body For attribute definitions, find the [Delete role](https://api-docs.neon.tech/reference/deleteprojectbranchrole) endpoint in the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api). Definitions are provided in the **Responses** section. ```json { "role": { "branch_id": "br-morning-meadow-afu2s1jl", "name": "alex", "protected": false, "created_at": "2025-08-04T07:47:05Z", "updated_at": "2025-08-04T07:51:10Z" }, "operations": [ { "id": "722b9f9b-c50e-424c-845e-78b38151b82f", "project_id": "dry-heart-13671059", "branch_id": "br-morning-meadow-afu2s1jl", "endpoint_id": "ep-holy-heart-afbmgcfx", "action": "apply_config", "status": "running", "failures_count": 0, "created_at": "2025-08-04T07:53:22Z", "updated_at": "2025-08-04T07:53:22Z", "total_duration_ms": 0 } ] } ``` ## Manage roles with SQL Roles created with SQL have the same basic `public` schema privileges as newly created roles in a standalone Postgres installation. These roles are not granted membership in the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role like roles created with the Neon Console, CLI, or API. You must grant these roles the privileges you want them to have. To create a role with SQL, issue a `CREATE ROLE` statement from a client such as [psql](https://neon.com/docs/connect/query-with-psql-editor), [pgAdmin](https://www.pgadmin.org/), or the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor). ```sql CREATE ROLE role_name WITH LOGIN PASSWORD 'password'; ``` - `WITH LOGIN` means that the role will have a login privilege, required for the role to log in to your Neon Postgres instance. If the role is used only for privilege management, the `WITH LOGIN` privilege is unnecessary. - A password must have a minimum entropy of 60 bits. **Info**: To create a password with 60 bits of entropy, you can follow these password composition guidelines: - **Length**: The password should consist of at least 12 characters. - **Character diversity**: To enhance complexity, passwords should include a variety of character types, specifically: - Lowercase letters (a-z) - Uppercase letters (A-Z) - Numbers (0-9) - Special symbols (e.g., !@#$%^&*) - **Avoid predictability**: To maintain a high level of unpredictability, do not use: - Sequential patterns (such as '1234', 'abcd', 'qwerty') - Common words or phrases - Any words found in a dictionary - **Avoid character repetition**: To maximize randomness, do not use the same character more than twice consecutively. Example password: `T3sting!23Ab` (DO NOT USE THIS EXAMPLE PASSWORD) Passwords must be supplied in plain text but are encrypted when stored. Hashed passwords are not supported. The guidelines should help you create a password with approximately 60 bits of entropy. However, depending on the exact characters used, the actual entropy might vary slightly. Always aim for a longer and more complex password if you're uncertain. It's also recommended to use a trusted password manager to create and store your complex passwords safely. Neon also supports the `NOLOGIN` option: `CREATE ROLE role_name NOLOGIN;` This allows you to define roles that cannot authenticate but can be granted privileges.
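Because SQL-created roles start with only basic `public` schema privileges, you typically follow `CREATE ROLE` with explicit grants. Here is a minimal sketch of creating a read-only role; the `app_readonly` role and `neondb` database names are illustrative placeholders, not values from the examples above:

```sql
-- Create a login role; substitute a strong password of your own
CREATE ROLE app_readonly WITH LOGIN PASSWORD 'Gx7pQz2Vw9mT!4s';

-- Grant read-only access to the public schema
GRANT CONNECT ON DATABASE neondb TO app_readonly;
GRANT USAGE ON SCHEMA public TO app_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;

-- Make tables created later in this schema readable as well
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO app_readonly;
```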
For role creation and access management examples, refer to the [Manage database access](https://neon.com/docs/manage/database-access) guide. ## Creating NOLOGIN roles Neon supports creating Postgres roles with the `NOLOGIN` attribute. This allows you to define roles that cannot authenticate but can be granted privileges. ```sql CREATE ROLE my_role NOLOGIN; ``` Roles with `NOLOGIN` are commonly used for permission management. The Neon API and CLI also support creating `NOLOGIN` roles: - The Neon API [Create role](https://api-docs.neon.tech/reference/createprojectbranchrole) endpoint supports a `no_login` attribute. - The Neon CLI [`neon roles create`](https://neon.com/docs/reference/cli-roles#create) command supports a `--no-login` option. ## Reserved role names The following names are reserved and cannot be given to a role: - Any name starting with `pg_` - `neon_superuser` - `cloud_admin` - `zenith_admin` - `public` - `none` --- # Source: https://neon.com/llms/manage-slack-app.txt # Neon App for Slack > The Neon App for Slack documentation outlines the installation and configuration process for integrating Neon's database management capabilities directly within Slack, enabling users to monitor and manage their databases through Slack commands. ## Source - [Neon App for Slack HTML](https://neon.com/docs/manage/slack-app): The original HTML version of this documentation The Neon App for Slack allows you to monitor your Neon usage and manage organization membership directly from Slack. Get quick access to project information and resource usage metrics without leaving your workspace. The app is available to all Neon users on both free and paid plans — check out our [pricing page](https://neon.com/pricing) for more details. ## Setup ## Install the Neon App for Slack Click the **Add to Slack** button and follow the prompts. ## Authenticate with Neon The first thing you need to do is authorize — open a DM with your new app and type `/neon auth`. Follow the login flow that opens in your browser, and you're in. Once authenticated, you're ready to use all available commands. ## Available commands | **Command** | **Description** | | ------------------- | ---------------------------------------------------------------- | | `/neon auth` | Connect Slack to your Neon account | | `/neon projects` | List your Neon projects | | `/neon usage` | Show overall resource usage for your account | | `/neon help` | List all available commands | | `/neon status` | Check the current status of Neon's cloud service | | `/neon feedback` | Share your thoughts and suggestions about the Neon App for Slack | | `/neon invite user` | Invite users to your organization | | `/neon subscribe` | Subscribe to your Neon account updates | | `/neon unsubscribe` | Unsubscribe from your Neon account updates | | `/neon disconnect` | Disconnect your Neon account and subscribed channels | ## Example workflows ### Check your Neon usage statistics Open a DM with the Neon App for Slack and run the following command to instantly view your current data transfer, compute time, and storage usage across all projects: ``` /neon usage ``` ### Usage notifications You can receive automated notifications about your Neon usage in any channel (public or private). First, subscribe to notifications using the steps in the section below. Once subscribed, the channel will receive automatic notifications when you approach or reach your resource limits for compute hours, storage, or data transfer. 
### Subscribe to notifications in a channel To receive Neon notifications in a Slack channel: 1. Go to any channel (public or private) where you want to receive notifications 2. Run `/neon subscribe` - you'll be prompted to run `/invite @Neon (Beta)` if needed 3. After inviting the bot, run `/neon subscribe` again Once subscribed, the channel will start receiving important Neon usage notifications. To stop receiving notifications, use the `/neon unsubscribe` command in the same channel. Use `/neon disconnect` to remove your Neon account connection and unsubscribe from all channels, while keeping the app installed for future use. ## Data deletion Neon respects your right to request deletion of your personal data at any time. To do so, email us at `privacy@neon.tech` or submit a support ticket via the Neon Console. Once we verify your identity, we will delete your data within 30 days, unless a legal or contractual obligation requires us to retain it. For more details, see our [Privacy Policy](https://neon.com/privacy-policy). ## Support If you encounter any issues with the Neon App for Slack, please open a support ticket in the [Neon Console](https://console.neon.tech/app/projects?modal=support). Free plan users can get help through our [Discord community](https://discord.gg/92vNTzKDGp). For more details about our support options, see our [Support documentation](https://neon.com/docs/introduction/support). ## FAQs Details: **What can I do with the Neon App for Slack?** The Neon App for Slack allows you to: - View project information and resource usage - Monitor system status - Manage notifications in channels - Invite users to your organization Details: **Does this app allow me to modify databases or projects?** No, the Neon App for Slack is primarily for viewing usage details and managing organization membership, not for direct database management. Details: **Can I control which notifications I receive?** You can control where notifications are sent using the `/neon subscribe` and `/neon unsubscribe` commands in any channel. However, you cannot customize which types of notifications you receive — all subscribed channels will receive all important Neon updates and usage alerts. --- # Source: https://neon.com/llms/manage-updates.txt # Updates > The "Updates" document outlines the procedures and guidelines for managing and applying updates within the Neon database environment, ensuring users maintain optimal performance and security. ## Source - [Updates HTML](https://neon.com/docs/manage/updates): The original HTML version of this documentation To keep your Neon [computes](https://neon.com/docs/reference/glossary#compute) and Postgres instances up to date with the latest patches and features, Neon applies updates to your project's computes. We notify you of updates in advance so that you can plan for them if necessary. On Neon's paid plans, you can select an update window — a specific day and hour for updates. Neon briefly restarts a compute to apply an update. The entire process takes just a few seconds, minimizing any potential disruption. ## What updates are included? Updates to Neon computes may include some or all of the following: - Postgres minor version upgrades, typically released quarterly - Security patches and updates - Operating system updates - Neon features and enhancements - Updates to other tools and components included in Neon compute images Neon compute updates do not include [Neon platform maintenance](https://neon.com/docs/manage/platform-maintenance). 
## How often are updates applied? Updates are typically released weekly but may occur more or less frequently, as needed. Neon applies updates to computes based on the following rules: - Computes that have been active for 30 days or more receive updates. - Computes that are restarted receive available updates immediately. - Computes in a transition state (e.g., shutting down or restarting) at the time of an update are not updated. - Computes larger than 8 CU or that can scale past 8 CU are not updated automatically. See [Updating large computes](https://neon.com/docs/manage/updates#updating-large-computes). If a compute is excluded from an update, Neon will apply the missed update with the next update, assuming the compute meets the update criteria mentioned above. **Important** updates outside of scheduled update windows: Please be aware that Neon must occasionally perform essential **platform maintenance** outside the scheduled updates performed on Neon computes. This means that you may experience brief disruptions from time to time. To learn more, see [Platform maintenance](https://neon.com/docs/manage/platform-maintenance). ## Updates on the Free plan On the **Free plan**, updates are scheduled and applied automatically. You can check your project's settings for updates. We'll post a notice there at least **1 day** ahead of a planned update, letting you know when it's coming. To view planned updates: 1. Go to the Neon project dashboard. 2. Select **Settings** > **Updates**. If you want to apply an update ahead of the scheduled date, see [Applying updates ahead of schedule](https://neon.com/docs/manage/updates#applying-updates-ahead-of-schedule). ## Updates on paid plans On Neon's paid plans, you can set a preferred update window by specifying the day and hour. Updates will be applied within this window, letting you plan for the required compute restart. You can specify an update window in your Neon project's settings or using the Neon API. Tab: Neon Console In the Neon Console: 1. Go to the Neon project dashboard. 2. Select **Settings** > **Updates**. 3. Choose a day of the week and an hour. Updates will occur within this time window and take only a few seconds. You can check your project's settings for upcoming updates. We'll post a notice there at least **7 days** ahead of a planned update, letting you know when it's coming. > If you're a Scale plan customer, you will also receive an **email notification** 7 days in advance of a planned update. Tab: API On Neon paid plans, the [Create project](https://api-docs.neon.tech/reference/createproject) and [Update project](https://api-docs.neon.tech/reference/updateproject) APIs let you define an update window using the `maintenance_window` object, as shown in the `Update project` example below. - The `weekdays` parameter accepts an integer (`1` for Monday, `2` for Tuesday, and so on) or an array of integers to specify multiple weekdays. - The `start_time` and `end_time` values must be in UTC (`HH:MM` format) and at least one hour apart. Shorter intervals are not supported. Both times must fall on the same day. For example, (`22:00`, `23:00`) and (`23:00`, `00:00`) are valid settings, but (`22:00`, `03:00`) is not, as it would span multiple days. 
```bash curl --request PATCH \ --url https://console.neon.tech/api/v2/projects/fragrant-mode-99795914 \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data ' { "project": { "settings": { "maintenance_window": { "weekdays": [ 7 ], "start_time": "01:00", "end_time": "02:00" } } } } ' ``` ## Check for updates using the Neon API You can retrieve your update window and check for planned updates using the [Retrieve project details](https://api-docs.neon.tech/reference/getproject) endpoint. To get your project details, send the following request, replacing `{project_id}` with your Neon project ID, and `$NEON_API_KEY` with your [Neon API key](https://neon.com/docs/manage/api-keys): ```bash curl --request GET \ --url https://console.neon.tech/api/v2/projects/{project_id} \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' ``` In the response, locate the `maintenance_window` field. It specifies the selected weekday and hour for updates. For Free plan accounts, the update window is set by Neon. Paid plan accounts can [choose a preferred update window](https://neon.com/docs/manage/updates#updates-on-paid-plans). Each `weekdays` value is a number from 1 to 7, representing the day of the week. ```json { ... "settings": { "maintenance_window": { "weekdays": [5], "start_time": "07:00", "end_time": "08:00" } }, "maintenance_scheduled_for": "2025-02-07T07:00" ... } ``` If there's a planned update, you'll also find a `maintenance_scheduled_for` field in the response body. This value matches the `start_time` in your `maintenance_window` but is formatted as a timestamp. If the `maintenance_scheduled_for` field is not present in the response, there is no planned update at this time. ## Applying updates ahead of schedule Computes receive available updates immediately upon restart. For example, if Neon notifies you about an upcoming update, you can apply it right away by restarting the compute. However, the notification won't be cleared in this case. When the planned update time arrives, no further action will be taken since the compute is already updated. If a compute regularly scales to zero, it will receive updates when it starts up again. In such cases, you may not need to pay much attention to update notifications, as updates will be applied naturally through your compute's stop/start cycles. For compute restart instructions, see [Restart a compute](https://neon.com/docs/manage/computes#restart-a-compute). ## Updating large computes Computes larger than 8 CU or set to scale beyond 8 CU are not updated automatically (_scheduled updates do not apply_). To apply updates, you'll need to restart them manually. A restart may occur automatically due to [scale to zero](https://neon.com/docs/introduction/scale-to-zero), but if scale to zero is disabled or your compute runs continuously, please plan for manual restarts. Neon typically releases compute updates weekly, so we recommend scheduling weekly compute restarts. For restart instructions, see [Restart a compute](https://neon.com/docs/manage/computes#restart-a-compute). ## Handling connection disruptions during compute updates Most Postgres connection drivers include built-in retry mechanisms that automatically handle short-lived connection interruptions. This means that for most applications, a brief restart should result in minimal disruption, as the driver will reconnect automatically.
However, if your application has strict availability requirements, you may want to ensure that your connection settings are configured to allow for retries. Check your driver's documentation for options like connection timeouts, retry intervals, and connection pooling strategies. Your configuration should account for the few seconds it takes to apply updates to your Neon compute. For related information, see [Build connection timeout handling into your application](https://neon.com/docs/connect/connection-latency#build-connection-timeout-handling-into-your-application). If your application or integration uses the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) or [SDKs](https://neon.com/docs/reference/sdk) that wrap the Neon API, we recommend building in the same type of retry logic. --- # Source: https://neon.com/llms/manage-user-permissions.txt # User Permissions > The "User Permissions" document outlines the procedures for managing and configuring user access levels within Neon, detailing how to assign roles and set permissions to control database access and operations. ## Source - [User Permissions HTML](https://neon.com/docs/manage/user-permissions): The original HTML version of this documentation In Neon, roles determine what actions you can take within an organization and its projects. This page provides a detailed breakdown of permissions for each role: **Admin**, **Member**, and **Collaborator**. For an overview of organizations, see the [Organizations](https://neon.com/docs/manage/organizations) page. ## Role descriptions - **Admin** — Full control over the organization and all its projects. Can manage permissions, billing, members, and organization settings. Only Admins can delete organization projects. - **Member** — Access to all organization projects and can perform most project operations, but cannot modify organization settings or delete projects. - **Collaborator** — External users invited to specific projects. Collaborators have no organization-level access, but can work on projects they've been invited to. 
## Organization management The following table shows what each role can do at the organization level: | Action | Admin | Member | Collaborator | | ---------------------------------------- | :---: | :----: | :----------: | | Invite organization members | ✅ | ❌ | ❌ | | Set organization permissions | ✅ | ❌ | ❌ | | Manage organization billing | ✅ | ❌ | ❌ | | Rename organization | ✅ | ❌ | ❌ | | Delete organization | ✅ | ❌ | ❌ | | Enable organization Early Access Program | ✅ | ❌ | ❌ | ## Project management The following table shows what each role can do at the project level: | Action | Admin | Member | Collaborator | | ---------------------------- | :---: | :----: | :----------: | | Create new projects | ✅ | ✅ | ❌ | | Rename projects | ✅ | ✅ | ✅ | | Transfer projects into org | ✅ | ✅ | ❌ | | Transfer projects out of org | ✅ | ❌ | ❌ | | Delete projects | ✅ | ❌ | ❌ | | Manage project databases | ✅ | ✅ | ✅ | | Configure project computes | ✅ | ✅ | ✅ | | Manage project roles | ✅ | ✅ | ✅ | | Invite/remove collaborators | ✅ | ✅ | ✅ | ## Integration management The following table shows what each role can do regarding integrations: | Action | Admin | Member | Collaborator | | ---------------------------------------------------- | :---: | :----: | :----------: | | Install GitHub integration | ✅ | ❌ | ❌ | | Install Neon Auth | ✅ | ❌ | ❌ | | Install the Vercel-managed Neon integration\* | ✅ | ❌ | ❌ | | Connect project to GitHub integration | ✅ | ✅ | ❌ | | Connect project to Neon-managed Vercel integration\* | ✅ | ✅ | ❌ | \*Neon's Vercel-managed integration is managed entirely in Vercel and uses Vercel's permission system. For the Neon-managed Vercel integration, projects must first be made available in Vercel before they can be connected to Neon. ## Notes and limitations - **Branch management** — All users are currently able to manage [protected branches](https://neon.com/docs/guides/protected-branches), regardless of their role or permission level. Granular permissions for this feature are not yet implemented. - **Permissions and roles** — The current permissions system may not meet all needs for granular control. Share your feedback and requirements for more detailed permissions settings via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form or our [Discord feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042). --- # Source: https://neon.com/llms/neon-auth-api.txt # Manage Neon Auth using the API > The document "Manage Neon Auth using the API" details how to utilize Neon's API for managing authentication processes, including creating, updating, and deleting authentication tokens and configuring access permissions within the Neon platform. ## Source - [Manage Neon Auth using the API HTML](https://neon.com/docs/neon-auth/api): The original HTML version of this documentation **Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback). Related docs: - [Get started](https://neon.com/docs/guides/neon-auth) - [Tutorial](https://neon.com/docs/guides/neon-auth-demo) - [How it works](https://neon.com/docs/guides/neon-auth-how-it-works) Sample project: - [Neon Auth Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app) Learn how to manage your Neon Auth integration using the Neon API. 
Create a new integration, generate SDK keys, add users, and claim ownership of your Neon-managed auth project to your auth provider. ## Prerequisites - A Neon API key (see [Create an API Key](https://neon.com/docs/manage/api-keys#create-an-organization-api-key)) - A Neon project ## Common parameters Several endpoints require these parameters: - `project_id`: Your Neon project ID. You can find it in the Neon Console on the **Settings** page, or use the [List Projects endpoint](https://api-docs.neon.tech/reference/listprojects). - `auth_provider`: The authentication provider you're using. Currently supported providers: - `stack`: Stack Auth integration ## Create integration Creates a Neon-managed authentication project for your database (currently supporting Stack Auth). This endpoint performs the same action as using Quick Start in the Console, automatically provisioning and configuring a new auth provider project that Neon manages for you. **Note**: To create an integration, you'll need: - Your production branch ID. Get it from the Neon Console on the **Branches** page, or use the [List Branches endpoint](https://api-docs.neon.tech/reference/listprojectbranches) (look for `"default": true`) - Your database name and role name. Get them by clicking on the **Connect** button on your **Project Dashboard** in the Neon Console, or use the [List Databases endpoint](https://api-docs.neon.tech/reference/listprojectbranches) Required parameters: - `project_id`: Your Neon project ID - `branch_id`: Your project's production branch ID - `database_name`: Name of your database (defaults to `"neondb"`) - `role_name`: Database role for authenticated users (defaults to `"neondb_owner"`) ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/auth/create' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "auth_provider": "stack", "project_id": "project-id", "branch_id": "br-example-123", "database_name": "neondb", "role_name": "neondb_owner" }' | jq ``` Example response: ```json { "auth_provider": "stack", "auth_provider_project_id": "proj-example-123", "pub_client_key": "pck_example123", "secret_server_key": "ssk_example123", "jwks_url": "https://api.stack-auth.com/api/v1/projects/proj-example-123/.well-known/jwks.json", "schema_name": "neon_auth", "table_name": "users_sync" } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/createneonauthintegration) ## List integrations Lists all active auth provider integrations for your project. ```bash curl --request GET \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/integrations' \ --header 'authorization: Bearer $NEON_API_KEY' | jq ``` Example response: ```json { "data": [ { "auth_provider": "stack", "auth_provider_project_id": "proj-example-123", "branch_id": "br-example-123", "db_name": "neondb", "created_at": "2024-03-19T12:00:00Z", "owned_by": "neon", "jwks_url": "https://api.stack-auth.com/api/v1/projects/proj-example-123/.well-known/jwks.json" } ] } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/listneonauthintegrations) ## Generate SDK keys Generates SDK keys for your auth provider integration. These keys are used to set up your frontend and backend SDKs. 
Required parameters: - `project_id`: Your Neon project ID - `auth_provider`: The authentication provider (currently `"stack"`) ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/auth/keys' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "project_id": "project-id", "auth_provider": "stack" }' | jq ``` Example response: ```json { "auth_provider": "stack", "auth_provider_project_id": "project-id-123", "pub_client_key": "pck_example...", "secret_server_key": "ssk_example...", "jwks_url": "https://api.stack-auth.com/api/v1/projects/project-id-123/.well-known/jwks.json", "schema_name": "neon_auth", "table_name": "users_sync" } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/createneonauthprovidersdkkeys) ## Create user Creates a new user in your auth provider's system. Required parameters: - `project_id`: Your Neon project ID - `auth_provider`: The authentication provider (currently `"stack"`) - `email`: User's email address Optional parameters: - `name`: User's display name (1-255 characters) ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/auth/user' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "project_id": "project-id", "auth_provider": "stack", "email": "user@example.com", "name": "Example User" }' | jq ``` Example response: ```json { "id": "user-id-123" } ``` You can verify the user was synchronized to your database by connecting to your project and querying the `neon_auth.users_sync` table: ```bash psql postgres://[user]:[password]@[hostname]/[database] ``` ```sql SELECT id, email, name, created_at FROM neon_auth.users_sync; ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/createneonauthnewuser) ## Delete user Deletes an existing user from Neon Auth. Required parameters: - `project_id`: Your Neon project ID - `auth_user_id`: The user ID to delete ```bash curl --request DELETE \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/users/{auth_user_id}' \ --header 'authorization: Bearer $NEON_API_KEY' ``` A successful DELETE returns no response body (`204 No Content`). [Try in API Reference ↗](https://api-docs.neon.tech/reference/deleteneonauthuser) ## Transfer to your auth provider Transfer ownership of your Neon-managed auth project to your own auth provider account. This is a two-step process: 1. Request a transfer URL: ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/auth/transfer_ownership' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "project_id": "project-id", "auth_provider": "stack" }' | jq ``` Example response: ```json { "url": "https://app.stack-auth.com/integrations/neon/projects/transfer/confirm?code=example123" } ``` 2. Open the returned URL in a browser to complete the transfer. You'll be asked to confirm which Stack Auth account should receive ownership of the project. **Note**: After the transfer, you'll still be able to access your project from the Neon dashboard, but you'll also have direct access from the Stack Auth dashboard. ## Delete integration Removes an integration with a specific auth provider. 
```bash curl --request DELETE \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/integration/{auth_provider}' \ --header 'authorization: Bearer $NEON_API_KEY' | jq ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/deleteneonauthintegration) ## Manage OAuth providers via API You can programmatically manage OAuth providers for your Neon Auth project using the Neon API. The following endpoints allow you to add, list, update, and delete OAuth providers for a project. ### List OAuth providers Lists the OAuth providers for the specified project. Required parameters: - `project_id` (string): The Neon project ID ```bash curl --request GET \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/oauth_providers' \ --header 'authorization: Bearer $NEON_API_KEY' ``` Example response: ```json { "providers": [ { "id": "github", "type": "shared" }, { "id": "google", "type": "shared" } ] } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/listneonauthoauthproviders) ### Add an OAuth provider Adds an OAuth provider to the specified project. Required parameters: - `project_id` (string): The Neon project ID - `id` (string): The provider ID (e.g., `google`, `github`, `microsoft`) Optional parameters: - `client_id` (string): The OAuth client ID - `client_secret` (string): The OAuth client secret > If you do not provide `client_id` and `client_secret`, Neon will use shared keys for the provider. For production environments, you should always provide your own `client_id` and `client_secret` to ensure security and control. See [Production OAuth setup best practices](https://neon.com/docs/neon-auth/best-practices#production-oauth-setup) for details. ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/oauth_providers' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "id": "google", "client_id": "your-client-id", "client_secret": "your-client-secret" }' ``` Example response: ```json { "id": "google", "type": "standard", "client_id": "your-client-id", "client_secret": "your-client-secret" } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/addneonauthoauthprovider) ### Update an OAuth provider Updates an OAuth provider for the specified project. Required parameters: - `project_id` (string): The Neon project ID - `oauth_provider_id` (string): The OAuth provider ID (e.g., `google`, `github`, `microsoft`) Optional parameters (request body): - `client_id` (string): The new OAuth client ID - `client_secret` (string): The new OAuth client secret ```bash curl --request PATCH \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/oauth_providers/google' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "client_id": "new-client-id", "client_secret": "new-client-secret" }' ``` Example response: ```json { "id": "google", "type": "standard", "client_id": "new-client-id", "client_secret": "new-client-secret" } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/updateneonauthoauthprovider) ### Delete an OAuth provider Deletes an OAuth provider from the specified project.
Required parameters: - `project_id` (string): The Neon project ID - `oauth_provider_id` (string): The OAuth provider ID (e.g., `google`, `github`, `microsoft`) ```bash curl --request DELETE \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/oauth_providers/google' \ --header 'authorization: Bearer $NEON_API_KEY' ``` A successful DELETE returns no response body (`204 No Content`). You can use the GET endpoint to confirm the provider has been removed. [Try in API Reference ↗](https://api-docs.neon.tech/reference/deleteneonauthoauthprovider) ## Manage redirect URI whitelist You can programmatically manage the redirect URI whitelist for your Neon Auth project using the Neon API. The following endpoints allow you to list, add, and delete domains from the redirect URI whitelist. ### List domains in redirect URI whitelist Lists the domains in the redirect URI whitelist for the specified project. Required parameters: - `project_id`: Your Neon project ID ```bash curl --request GET \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/domains' \ --header 'authorization: Bearer $NEON_API_KEY' | jq ``` Example response: ```json { "domains": [ { "domain": "https://example.com", "auth_provider": "stack" }, { "domain": "https://app.example.com", "auth_provider": "stack" } ] } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/listneonauthredirecturiwhitelistdomains) ### Add domain to redirect URI whitelist Adds a domain to the redirect URI whitelist for the specified project. Required parameters: - `project_id`: Your Neon project ID - `domain`: The domain to add to the whitelist - `auth_provider`: The authentication provider (currently `"stack"`) ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/domains' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "domain": "https://example.com", "auth_provider": "stack" }' | jq ``` A successful POST returns no response body (`201 Created`). [Try in API Reference ↗](https://api-docs.neon.tech/reference/addneonauthdomaintoredirecturiwhitelist) ### Delete domain from redirect URI whitelist Deletes a domain from the redirect URI whitelist for the specified project. Required parameters: - `project_id`: Your Neon project ID - `auth_provider`: The authentication provider (currently `"stack"`) - `domains`: Array of domain objects to remove from the whitelist ```bash curl --request DELETE \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/domains' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "auth_provider": "stack", "domains": [ { "domain": "https://example.com" } ] }' | jq ``` A successful DELETE returns no response body. [Try in API Reference ↗](https://api-docs.neon.tech/reference/deleteneonauthdomainfromredirecturiwhitelist) ## Get email server configuration Gets the email server configuration for the specified project. Required parameters: - `project_id`: Your Neon project ID ```bash curl --request GET \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/email_server' \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' | jq ``` Example response: ```json { "type": "shared" } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/getneonauthemailserver) ## Update email server configuration Updates the email server configuration for the specified project. 
Required parameters: - `project_id`: Your Neon project ID Request body parameters: - `type`: Type of email server, `"shared"` or `"standard"` (standard = custom email server) - `host`: SMTP server hostname (required for custom SMTP) - `port`: SMTP server port (required for custom SMTP) - `username`: SMTP username (required for custom SMTP) - `password`: SMTP password (required for custom SMTP) - `sender_email`: Email address that will appear as the sender (required for custom SMTP) - `sender_name`: Name that will appear as the sender (required for custom SMTP) ```bash curl --request PATCH \ --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/email_server' \ --header 'accept: application/json' \ --header 'authorization: Bearer $NEON_API_KEY' \ --header 'content-type: application/json' \ --data '{ "type": "standard", "host": "smtp.gmail.com", "port": 587, "username": "your-email@gmail.com", "password": "your-app-password", "sender_email": "noreply@yourcompany.com", "sender_name": "Your Company" }' | jq ``` Example response: ```json { "type": "standard", "host": "smtp.gmail.com", "port": 587, "username": "your-email@gmail.com", "password": "your-app-password", "sender_email": "noreply@yourcompany.com", "sender_name": "Your Company" } ``` [Try in API Reference ↗](https://api-docs.neon.tech/reference/updateneonauthemailserver) --- # Source: https://neon.com/llms/neon-auth-best-practices.txt # Neon Auth best practices & FAQ > The document outlines best practices and frequently asked questions regarding authentication in Neon, focusing on secure implementation and management of authentication features within the platform. ## Source - [Neon Auth best practices & FAQ HTML](https://neon.com/docs/neon-auth/best-practices): The original HTML version of this documentation **Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback). Related docs: - [Get started](https://neon.com/docs/guides/neon-auth) - [Tutorial](https://neon.com/docs/guides/neon-auth-demo) - [How it works](https://neon.com/docs/guides/neon-auth-how-it-works) Sample project: - [Neon Auth Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app) ## Foreign keys and the users_sync table Since the `neon_auth.users_sync` table is updated asynchronously, there may be a brief delay (usually less than 1 second) before a user's data appears in the table. Consider this possible delay when deciding whether to use foreign keys in your schema. If you do choose to use foreign keys, make sure to specify an `ON DELETE` behavior that matches your needs: for example, `CASCADE` for personal data like todos or user preferences, and `SET NULL` for content like blog posts or comments that should persist after user deletion. 
```sql -- For personal data that should be removed with the user (e.g., todos) CREATE TABLE todos ( id SERIAL PRIMARY KEY, task TEXT NOT NULL, user_id UUID NOT NULL REFERENCES neon_auth.users_sync(id) ON DELETE CASCADE, created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP ); -- For content that should persist after user deletion (e.g., blog posts) CREATE TABLE posts ( id SERIAL PRIMARY KEY, title TEXT NOT NULL, content TEXT NOT NULL, author_id UUID REFERENCES neon_auth.users_sync(id) ON DELETE SET NULL, created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP ); ``` ## Querying user data When querying data that relates to users: - Use LEFT JOINs instead of INNER JOINs with the `users_sync` table in case of any sync delays. This ensures that all records from the main table (e.g., posts) are returned even if there's no matching user in the `users_sync` table yet. - Filter out deleted users since the table uses soft deletes (users are marked with a `deleted_at` timestamp when deleted). Here's an example of how to handle both in your queries: ```sql SELECT posts.*, neon_auth.users_sync.name as author_name FROM posts LEFT JOIN neon_auth.users_sync ON posts.author_id = neon_auth.users_sync.id WHERE neon_auth.users_sync.deleted_at IS NULL; ``` ## Restricting redirect domains **Warning**: Important: Before going to production, you should restrict authentication redirect URIs to trusted domains only. This prevents malicious actors from hijacking authentication flows and protects your users. For production deployments, you should explicitly whitelist the domains your app will use for authentication redirects (for example, your main website, admin panel). Go to the **Domains** section in the Neon Auth **Configuration** tab for your project and add each trusted domain needed for your app. You can add as many as you need. Only the domains on this list will be allowed for authentication redirects. All others will be blocked. ## Enabling row-level security (RLS) Row-Level Security (RLS) lets you enforce access control directly in your database, providing an extra layer of security for your app's data. To get started adding RLS to your Neon Auth project: 1. Go to the **Configuration** tab in your Neon Auth project. 2. Copy the **JWKS URL** shown in the **Claim project** section. _This JWKS URL allows Neon RLS to validate authentication tokens issued by Neon Auth._ 3. In your Neon project, open **Settings > RLS** and paste the JWKS URL. 4. Continue with the standard RLS setup: - Install the `pg_session_jwt` extension in your database. - Set up the `authenticated` and `anonymous` roles. - Add RLS policies to your tables. For these steps, you can follow the [Stack Auth + Neon RLS guide](https://neon.com/docs/guides/neon-rls-stack-auth) starting from [step 3](https://neon.com/docs/guides/neon-rls-stack-auth#3-install-the-pgsessionjwt-extension-in-your-database). Neon Auth uses Stack Auth under the hood, so the RLS integration process is the same from this point onward. For a full walkthrough, see [About Neon RLS](https://neon.com/docs/guides/neon-rls) and the [Neon RLS Tutorial](https://neon.com/docs/guides/neon-rls-tutorial). ## Production OAuth setup To securely use OAuth in production, you must configure your own OAuth credentials for each provider. Shared keys are for development only and will display "Stack Development" on the provider's consent screen, which is not secure or branded for your app. 
Follow these steps for each provider you use: ### Create an OAuth app On the provider's website, create an OAuth app and set the callback URL to the corresponding Neon Auth callback URL. Copy the client ID and client secret. Tab: Google [Google OAuth Setup Guide](https://developers.google.com/identity/protocols/oauth2#1.-obtain-oauth-2.0-credentials-from-the-dynamic_data.setvar.console_name-) **Callback URL:** ``` https://api.stack-auth.com/api/v1/auth/oauth/callback/google ``` Tab: GitHub [GitHub OAuth Setup Guide](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/creating-an-oauth-app) **Callback URL:** ``` https://api.stack-auth.com/api/v1/auth/oauth/callback/github ``` Tab: Microsoft [Microsoft Azure OAuth Setup Guide](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app) **Callback URL:** ``` https://api.stack-auth.com/api/v1/auth/oauth/callback/microsoft ``` ### Enter OAuth credentials in Neon Auth Go to the **OAuth providers** section in the Neon Auth dashboard. Click **Add OAuth Provider**, choose your provider from the list, and enter the client ID and secret you copied from your provider's developer portal. ## Email server For development, Neon Auth uses a shared email server, which sends emails from `noreply@stackframe.co`. This is not ideal for production as users may not trust emails from an unfamiliar domain. For production, we recommend setting up an email server connected to your own domain. 1. **Setup Email Server** Configure your own email server and connect it to your domain (check your email server docs for details). 2. **Configure Neon Auth's Email Settings** Navigate to the **Auth** page in the Neon Console, go to the **Configuration** tab, find the **Email server** section, switch from "Shared" to "Custom SMTP server", enter your SMTP configurations, and save. For detailed configuration instructions, see [Email Configuration](https://neon.com/docs/neon-auth/email-configuration). ## Limitations **Important**: Neon Auth is not compatible with Private Link (Neon Private Networking). If you have Private Link enabled for your Neon project, Neon Auth will not work. This is because Neon Auth requires internet access to connect to third-party authentication providers, while Private Link restricts connections to private AWS networks. --- # Source: https://neon.com/llms/neon-auth-claim-project.txt # Claiming a Neon Auth project > The document outlines the process for claiming a Neon Auth project, detailing the steps required to authenticate and assume ownership of a project within the Neon platform. ## Source - [Claiming a Neon Auth project HTML](https://neon.com/docs/neon-auth/claim-project): The original HTML version of this documentation Neon Auth is powered by Stack Auth under the hood. By default, Neon manages your authentication for you, so you do not typically need to interact with Stack Auth directly. However, there are cases where you may want to take direct control of your authentication project in the Stack Auth dashboard. ## Why claim a project? Most Neon Auth features can be built using the SDKs, without claiming your project. Right now, you need to claim your project if you want to: - Add or manage OAuth providers (register client IDs/secrets, set callback URLs) - Enable production mode and enforce production security settings - Manage multiple projects or separate production and development environments directly in Stack Auth ## Claim via the Neon Console 1. 
Go to your project's **Auth** page, **Configuration** tab in the Neon Console. 2. Click **Claim project** in the Claim project section. 3. Follow the prompts to select the Stack Auth account that should receive ownership. After claiming, you'll have direct access to manage your project in the Stack Auth dashboard, while maintaining the integration with your Neon database. You can also find your current project ID here, as well as the JWKS URL you need to set up [RLS in your Neon Auth project](https://neon.com/docs/neon-auth/best-practices#enabling-row-level-security-rls). ## Claim via the API You can also claim your project programmatically: ```bash curl --request POST \ --url 'https://console.neon.tech/api/v2/projects/auth/transfer_ownership' \ --header 'authorization: Bearer $NEON_API_KEY' \ --data '{ "project_id": "project-id", "auth_provider": "stack" }' ``` Open the returned URL in your browser to complete the claim process. See [Neon Auth API Reference](https://neon.com/docs/guides/neon-auth-api#transfer-to-your-auth-provider) for more details. **Note**: After claiming, you'll still be able to access your project from the Neon Console, but you'll also have direct access from the Stack Auth dashboard. --- # Source: https://neon.com/llms/neon-auth-components-account-settings.txt # <AccountSettings /> > The "<AccountSettings />" documentation outlines the configuration and management of user account settings within the Neon platform, detailing how to update personal information, manage security settings, and configure notification preferences. ## Source - [<AccountSettings /> HTML](https://neon.com/docs/neon-auth/components/account-settings): The original HTML version of this documentation Renders an `<AccountSettings />` page with customizable sidebar items and optional full-page layout. ## Props - `fullPage` (optional): `boolean` — If true, renders the component in full-page mode. - `extraItems` (optional): `Array` — Additional items to be added to the sidebar. Each item should have the following properties: - `title`: `string` — The title of the item. - `content`: `React.ReactNode` — The content to be rendered for the item. - `subpath`: `string` — The subpath for the item's route. - `icon` (optional): `React.ReactNode` — The icon component for the item. Only used if `iconName` is not provided. - `iconName` (optional): `string` — The name of the Lucide icon to be used for the item. Only used if `icon` is not provided. ## Example ```tsx
import { AccountSettings } from '@stackframe/stack';

export default function MyAccountPage() {
  return (
    <AccountSettings
      fullPage
      extraItems={[
        {
          title: 'Custom Item',
          content: <p>Custom content</p>,
          subpath: '/custom',
        },
      ]}
    />
  );
}
``` --- # Source: https://neon.com/llms/neon-auth-components-components.txt # Neon Auth Components > The "Neon Auth Components" documentation outlines the authentication components used within the Neon platform, detailing their configuration and integration to manage user access and security effectively. ## Source - [Neon Auth Components HTML](https://neon.com/docs/neon-auth/components/components): The original HTML version of this documentation Neon Auth provides a set of components for Next.js and React applications. To get started with Neon Auth in your Next.js application, follow the [setup guide](https://neon.com/docs/guides/neon-auth).
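Before looking at the individual components, it may help to see where they mount. The sketch below shows one common way to wire the utility components into a Next.js App Router root layout; it assumes a `./stack` module exporting a configured `stackServerApp`, as produced by the setup guide (the file path and variable names are illustrative):

```tsx
// app/layout.tsx: minimal sketch; assumes ./stack exports a configured stackServerApp
import type { ReactNode } from 'react';
import { StackProvider, StackTheme } from '@stackframe/stack';
import { stackServerApp } from './stack';

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* StackProvider exposes auth state to the Neon Auth components below */}
        <StackProvider app={stackServerApp}>
          {/* StackTheme applies the default component styling */}
          <StackTheme>{children}</StackTheme>
        </StackProvider>
      </body>
    </html>
  );
}
```

With this in place, the components listed below can be rendered from any page in the app.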
## Sign In and Sign Up

[**`<SignIn />`**](https://neon.com/docs/neon-auth/components/sign-in) [**`<SignUp />`**](https://neon.com/docs/neon-auth/components/sign-up) [**`<CredentialSignIn />`**](https://neon.com/docs/neon-auth/components/credential-sign-in) [**`<CredentialSignUp />`**](https://neon.com/docs/neon-auth/components/credential-sign-up)

## User

[**`<UserButton />`**](https://neon.com/docs/neon-auth/components/user-button) [**`<AccountSettings />`**](https://neon.com/docs/neon-auth/components/account-settings)

## Teams & Organizations

[**`<SelectedTeamSwitcher />`**](https://neon.com/docs/neon-auth/components/selected-team-switcher)

## Utilities

- [StackHandler](https://neon.com/docs/neon-auth/components/stack-handler)
- [StackProvider](https://neon.com/docs/neon-auth/components/stack-provider)
- [StackTheme](https://neon.com/docs/neon-auth/components/stack-theme)

> See each component's page for usage examples and customization options.

---

# Source: https://neon.com/llms/neon-auth-components-credential-sign-in.txt

# `<CredentialSignIn />`

> The document outlines the implementation details for integrating credential-based sign-in functionality within Neon's authentication system.

## Source

- [`<CredentialSignIn />` HTML](https://neon.com/docs/neon-auth/components/credential-sign-in): The original HTML version of this documentation

A component that renders a sign-in form with email and password fields. For more information, see the [custom pages guide](https://neon.com/docs/neon-auth/customization/custom-pages).

## Props

This component does not accept any props.

## Example

```tsx
import { CredentialSignIn } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Sign In</h1>
      <CredentialSignIn />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-credential-sign-up.txt

# `<CredentialSignUp />`

> The document outlines the implementation details for the credential-based sign-up component in Neon's authentication system, detailing its structure and integration within the platform.

## Source

- [`<CredentialSignUp />` HTML](https://neon.com/docs/neon-auth/components/credential-sign-up): The original HTML version of this documentation

A component that renders a sign-up form with email and password fields. For more information, see the [custom pages guide](https://neon.com/docs/neon-auth/customization/custom-pages).

## Props

- `noPasswordRepeat` (optional): `boolean` — If set to `true`, the form will not include a password repeat field.

## Example

```tsx
import { CredentialSignUp } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Sign Up</h1>
      <CredentialSignUp />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-oauth-button-group.txt

# `<OAuthButtonGroup />`

> The document details the implementation of the `<OAuthButtonGroup />` component, which facilitates OAuth authentication integration for Neon applications by providing a standardized button group for various OAuth providers.

## Source

- [`<OAuthButtonGroup />` HTML](https://neon.com/docs/neon-auth/components/oauth-button-group): The original HTML version of this documentation

Renders all the `<OAuthButton />`s enabled for your Neon Auth project.

**Note**: If there are no OAuth providers enabled, this component will be empty.

## Props

- `type`: `'sign-in' | 'sign-up'` — Specifies whether the button text is for sign-in or sign-up (both are the same in terms of functionality).

## Example

```tsx
import { OAuthButtonGroup } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Sign In</h1>
      <OAuthButtonGroup type="sign-in" />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-oauth-button.txt

# `<OAuthButton />`

> The document describes the implementation and usage of the `<OAuthButton />` component in Neon, detailing its role in facilitating OAuth authentication within Neon's platform.

## Source

- [`<OAuthButton />` HTML](https://neon.com/docs/neon-auth/components/oauth-button): The original HTML version of this documentation

Renders a customized OAuth button for a given provider to initiate sign-in or sign-up processes. For more information, see the [custom pages guide](https://neon.com/docs/neon-auth/customization/custom-pages).

## Props

- `provider`: `string` — The name of the OAuth provider (e.g., 'google', 'github', 'facebook').
- `type`: `'sign-in' | 'sign-up'` — Determines whether the button text is for signing in or signing up (both are the same in terms of functionality).

## Example

```tsx
import { OAuthButton } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Sign In</h1>
      <OAuthButton provider="google" type="sign-in" />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-selected-team-switcher.txt

# `<SelectedTeamSwitcher />`

> The document details the component, which facilitates team selection within Neon's authentication system, enabling users to switch between different teams efficiently.

## Source

- [`<SelectedTeamSwitcher />` HTML](https://neon.com/docs/neon-auth/components/selected-team-switcher): The original HTML version of this documentation

A React component for switching between teams. It displays a dropdown of teams and allows the user to select a team. For a comprehensive guide on using this component, see the [Team Selection documentation](https://neon.com/docs/neon-auth/concepts/team-selection).

## Props

- `urlMap` (optional): `(team: Team) => string` — A function that maps a team to a URL. If provided, the component will navigate to this URL when a team is selected.
- `selectedTeam` (optional): `Team` — The initially selected team.
- `noUpdateSelectedTeam` (optional): `boolean` — If true, prevents updating the selected team in the user's settings when a new team is selected. Default is false.

## Example

```tsx
import { SelectedTeamSwitcher } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Team Switcher</h1>
      {/* currentTeam comes from your app's state, e.g. user.selectedTeam */}
      <SelectedTeamSwitcher
        urlMap={(team) => `/team/${team.id}`}
        selectedTeam={currentTeam}
        noUpdateSelectedTeam={false}
      />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-sign-in.txt

# `<SignIn />`

> The document outlines the implementation details of the `<SignIn />` component for Neon, detailing its structure and functionality within the authentication process.

## Source

- [`<SignIn />` HTML](https://neon.com/docs/neon-auth/components/sign-in): The original HTML version of this documentation

Renders a sign-in component with customizable options. For more information, see the [custom pages guide](https://neon.com/docs/neon-auth/customization/custom-pages).

## Props

- `fullPage` (optional): `boolean` — If true, renders the sign-in page in full-page mode.
- `automaticRedirect` (optional): `boolean` — If true, redirects to the afterSignIn/afterSignUp URL when the user is already signed in, without showing the 'You are signed in' message.
- `extraInfo` (optional): `React.ReactNode` — Additional content to be displayed on the sign-in page.
- `firstTab` (optional): `'magic-link' | 'password'` — Determines which tab is initially active. Defaults to 'magic-link' if not specified.

## Example

```tsx
import { SignIn } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Sign In</h1>
      <SignIn
        fullPage={true}
        automaticRedirect={true}
        firstTab="password"
        extraInfo={<>When signing in, you agree to our <a href="/terms">Terms</a></>}
      />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-sign-up.txt

# `<SignUp />`

> The document outlines the process for signing up for a Neon account, detailing the steps and components involved in the user authentication flow specific to Neon's platform.

## Source

- [`<SignUp />` HTML](https://neon.com/docs/neon-auth/components/sign-up): The original HTML version of this documentation

A component that renders a sign-up page with various customization options. For more information, see the [custom pages guide](https://neon.com/docs/neon-auth/customization/custom-pages).

## Props

- `fullPage` (optional): `boolean` — If true, renders the sign-up page in full-page mode.
- `automaticRedirect` (optional): `boolean` — If true, redirects to the afterSignIn/afterSignUp URL when the user is already signed in, without showing the 'You are signed in' message.
- `noPasswordRepeat` (optional): `boolean` — If true, removes the password confirmation field.
- `extraInfo` (optional): `React.ReactNode` — Additional information to display on the sign-up page.
- `firstTab` (optional): `'magic-link' | 'password'` — Determines which tab is initially active. Defaults to 'magic-link' if not specified.

## Example

```tsx
import { SignUp } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>Sign Up</h1>
      <SignUp
        fullPage={true}
        automaticRedirect={true}
        firstTab="password"
        extraInfo={<>By signing up, you agree to our <a href="/terms">Terms</a></>}
      />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-components-stack-handler.txt

# `<StackHandler />`

> The documentation outlines the component's role in managing authentication stacks within Neon's architecture, detailing its configuration and integration processes for efficient user authentication handling.

## Source

- [`<StackHandler />` HTML](https://neon.com/docs/neon-auth/components/stack-handler): The original HTML version of this documentation

Renders the appropriate authentication or account-related component based on the current route. For detailed usage instructions, see the manual section of the [setup guide](https://neon.com/docs/neon-auth).

## Props

- `app`: `StackServerApp` — The Neon Auth server application instance.
- `routeProps`: `NextRouteProps` — The Next.js route props, usually the first argument of the page component (see below).
- `fullPage`: `boolean` — Whether to render the component in full-page mode.
- `componentProps`: `{ [K in keyof Components]?: Partial<ComponentProps<Components[K]>> }` — Props to pass to the rendered components.

## Example

```tsx
import { StackHandler } from '@stackframe/stack';
import { stackServerApp } from '@/stack/server';

export default function Handler(props: { params: any; searchParams: any }) {
  return <StackHandler app={stackServerApp} routeProps={props} fullPage />;
}
```

---

# Source: https://neon.com/llms/neon-auth-components-stack-provider.txt

# `<StackProvider />`

> The `<StackProvider />` documentation outlines the implementation and usage of the StackProvider component in Neon's authentication system, detailing its role in managing and organizing authentication-related components within the application stack.

## Source

- [`<StackProvider />` HTML](https://neon.com/docs/neon-auth/components/stack-provider): The original HTML version of this documentation

A React component that provides Neon Auth context to its children. For detailed usage instructions, see the manual section of the [setup guide](https://neon.com/docs/neon-auth).

## Props

- `children`: `React.ReactNode` — The child components to be wrapped by the StackProvider.
- `app`: `StackClientApp | StackServerApp` — The Neon Auth app instance to be used.
- `lang` (optional): `"en-US" | "de-DE" | "es-419" | "es-ES" | "fr-CA" | "fr-FR" | "it-IT" | "pt-BR" | "pt-PT"` — The language to be used for translations.
- `translationOverrides` (optional): `Record<string, string>` — A mapping of English translations to translated equivalents. These will take priority over the translations from the language specified in the `lang` property. Note that the keys are case-sensitive. You can find a full list of supported strings [on GitHub](https://github.com/stack-auth/stack-auth/blob/dev/packages/template/src/generated/quetzal-translations.ts).

## Example

```tsx
import { StackProvider } from '@stackframe/stack';
import { stackServerApp } from '@/stack/server';

function App() {
  return (
    <StackProvider app={stackServerApp}>
      {/* Your app content */}
    </StackProvider>
  );
}
```

---

# Source: https://neon.com/llms/neon-auth-components-stack-theme.txt

# `<StackTheme />`

> The `<StackTheme />` documentation outlines the implementation details and usage of the StackTheme component within Neon's authentication system, detailing its role in managing theme configurations for consistent UI styling.

## Source

- [`<StackTheme />` HTML](https://neon.com/docs/neon-auth/components/stack-theme): The original HTML version of this documentation

A component that applies a theme to its children. For more information, see the [color and styles guide](https://neon.com/docs/neon-auth/customization/custom-styles).

## Props

- `theme` (optional): `ThemeConfig` — Custom theme configuration to override the default theme.
- `children` (optional): `React.ReactNode` — Child components to be rendered within the themed context.

## Example

```tsx
const theme = {
  light: {
    primary: 'red',
  },
  dark: {
    primary: '#00FF00',
  },
  radius: '8px',
};

// ...

<StackTheme theme={theme}>
  {/* children */}
</StackTheme>
```

---

# Source: https://neon.com/llms/neon-auth-components-user-button.txt

# UserButton Component

> The UserButton Component documentation details the implementation and configuration of a user interface element in Neon that facilitates user authentication and profile management.

## Source

- [UserButton Component HTML](https://neon.com/docs/neon-auth/components/user-button): The original HTML version of this documentation

Renders a `<UserButton />` component with optional user information, color mode toggle, and extra menu items.

## Props

- `showUserInfo`: `boolean` — Whether to display user information (display name and email) or only show the avatar.
- `colorModeToggle`: `() => void | Promise<void>` — Function to be called when the color mode toggle button is clicked. If specified, a color mode toggle button will be shown.
- `extraItems`: `Array<{text: string, icon: React.ReactNode, onClick: Function}>` — Additional menu items to display.

## Example

```tsx
'use client';

import { UserButton } from '@stackframe/stack';

export default function Page() {
  return (
    <div>
      <h1>User Button</h1>
      <UserButton
        showUserInfo={true}
        colorModeToggle={() => {
          console.log('color mode toggle clicked');
        }}
        extraItems={[
          {
            text: 'Custom Action',
            // CustomIcon is a placeholder for your own icon component
            icon: <CustomIcon />,
            onClick: () => console.log('Custom action clicked'),
          },
        ]}
      />
    </div>
  );
}
```
---

# Source: https://neon.com/llms/neon-auth-concepts-backend-integration.txt

# Backend Integration

> The "Backend Integration" document outlines the process for integrating Neon's authentication system with backend services, detailing configuration steps and necessary API interactions specific to Neon's infrastructure.

## Source

- [Backend Integration HTML](https://neon.com/docs/neon-auth/concepts/backend-integration): The original HTML version of this documentation

To authenticate your endpoints, you need to send the user's access token in the headers of the request to your server, and then make a request to Neon Auth's server API to verify the user's identity.

## Sending requests to your server endpoints

To authenticate your own server endpoints using Neon Auth's server API, you need to protect your endpoints by sending the user's access token in the headers of the request.

On the client side, you can retrieve the access token from the `user` object by calling `user.getAuthJson()`. This returns an object containing `accessToken`. You can then call your server endpoint with this token in the headers, like this:

```typescript
const { accessToken } = await user.getAuthJson();

const response = await fetch('/api/users/me', {
  headers: {
    'x-stack-access-token': accessToken,
  },
  // your other options and parameters
});
```

## Authenticating the user on the server endpoints

Neon Auth provides two methods for authenticating users on your server endpoints:

1. **JWT verification**: A fast, lightweight approach that validates the user's token locally without making external requests. While efficient, it provides only the essential user information encoded in the JWT.
2. **REST API verification**: Makes a request to Neon Auth's servers to validate the token and retrieve comprehensive user information. This method provides access to the complete, up-to-date user profile.
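Whichever method you choose, your endpoint first reads the token from the request headers. Here is a minimal sketch, assuming a Next.js route handler for the `/api/users/me` path used above (the route path and response shape are illustrative):

```typescript
// app/api/users/me/route.ts: hypothetical route matching the client call above
import { NextRequest, NextResponse } from 'next/server';

export async function GET(req: NextRequest) {
  // The client sends the token in the x-stack-access-token header
  const accessToken = req.headers.get('x-stack-access-token');
  if (!accessToken) {
    return NextResponse.json({ error: 'Missing access token' }, { status: 401 });
  }

  // Verify accessToken with one of the two methods described below
  // (JWT verification or REST API verification), then respond.
  return NextResponse.json({ status: 'authenticated' });
}
```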
### Using JWT

Tab: Node.js

```javascript
// you need to install the jose library if it's not already installed
import * as jose from 'jose';

// you can cache this and refresh it with a low frequency
// (replace <your-project-id> with your Neon Auth project ID)
const jwks = jose.createRemoteJWKSet(
  new URL('https://api.stack-auth.com/api/v1/projects/<your-project-id>/.well-known/jwks.json')
);

const accessToken = 'access token from the headers';

try {
  const { payload } = await jose.jwtVerify(accessToken, jwks);
  console.log('Authenticated user with ID:', payload.sub);
} catch (error) {
  console.error(error);
  console.log('Invalid user');
}
```

### Using the REST API

Tab: Node.js

```javascript
const url = 'https://api.stack-auth.com/api/v1/users/me';
const headers = {
  'x-stack-access-type': 'server',
  'x-stack-project-id': 'your Neon Auth project ID',
  'x-stack-secret-server-key': 'your Neon Auth server key',
  'x-stack-access-token': 'access token from the headers',
};

const response = await fetch(url, { headers });

if (response.status === 200) {
  console.log('User is authenticated', await response.json());
} else {
  console.log('User is not authenticated', response.status, await response.text());
}
```

Tab: Python

```python
import requests

url = 'https://api.stack-auth.com/api/v1/users/me'
headers = {
    'x-stack-access-type': 'server',
    'x-stack-project-id': 'your Neon Auth project ID',
    'x-stack-secret-server-key': 'your Neon Auth server key',
    'x-stack-access-token': 'access token from the headers',
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    print('User is authenticated', response.json())
else:
    print('User is not authenticated', response.status_code, response.text)
```

---

# Source: https://neon.com/llms/neon-auth-concepts-custom-user-data.txt

# Custom User Data

> The "Custom User Data" documentation explains how Neon users can manage and implement custom user data within their authentication processes, detailing the structure and integration methods specific to Neon's platform.

## Source

- [Custom User Data HTML](https://neon.com/docs/neon-auth/concepts/custom-user-data): The original HTML version of this documentation

> How to store custom user metadata in Neon Auth

Neon Auth allows storing additional user information through three types of metadata fields:

1. **clientMetadata**: Readable and writable from a [client](https://neon.com/docs/neon-auth/concepts/stack-app#client-vs-server).
2. **serverMetadata**: Readable and writable only from a [server](https://neon.com/docs/neon-auth/concepts/stack-app#client-vs-server).
3. **clientReadOnlyMetadata**: Readable from a client, writable only from a server.

## Client metadata

You can use the `clientMetadata` field to store non-sensitive information that both the client and server can read and write.

```tsx
await user.update({
  clientMetadata: {
    mailingAddress: '123 Main St',
  },
});

// On the client:
const user = useUser();
console.log(user.clientMetadata);
```

## Server-side metadata

For sensitive information, use the `serverMetadata` field. This ensures the data is only accessible and modifiable by the server.

```tsx
const user = await stackServerApp.getUser();
await user.update({
  serverMetadata: {
    secretInfo: 'This is a secret',
  },
});

// To read:
const user = await stackServerApp.getUser();
console.log(user.serverMetadata);
```

## Client read-only metadata

Use `clientReadOnlyMetadata` for data that clients need to read but never modify, such as subscription status.
```tsx
// On the server:
const user = await stackServerApp.getUser();
await user.update({
  clientReadOnlyMetadata: {
    subscriptionPlan: 'premium',
  },
});

// On the client:
const user = useUser();
console.log(user.clientReadOnlyMetadata);
```

---

# Source: https://neon.com/llms/neon-auth-concepts-oauth.txt

# OAuth Authentication

> The document outlines the OAuth authentication process for Neon, detailing how to configure and implement OAuth to securely manage user access and authorization within the Neon platform.

## Source

- [OAuth Authentication HTML](https://neon.com/docs/neon-auth/concepts/oauth): The original HTML version of this documentation

> Using OAuth providers for authentication and API access

Neon Auth comes with Google and GitHub OAuth providers pre-configured for authentication. When users sign in with these providers, their accounts are automatically connected, allowing you to access their connected accounts and make API calls on their behalf.

**Info**: You cannot connect a user's accounts with shared OAuth keys. You must set up your own OAuth client ID and client secret in the Neon Auth dashboard. For more details, see [Production OAuth setup](https://neon.com/docs/neon-auth/best-practices#production-oauth-setup).

Neon Auth currently supports Google, GitHub, and Microsoft as OAuth providers.

## Connected accounts

A connected account represents an external OAuth provider account linked to your user. When a user signs in with OAuth, their account is automatically connected to that provider.

You can access a user's connected account using the `useConnectedAccount` hook:

```tsx
'use client';

import { useUser } from '@stackframe/stack';

export default function Page() {
  const user = useUser({ or: 'redirect' });
  // Redirects to provider authorization if not already connected
  const account = user.useConnectedAccount('google', { or: 'redirect' });

  return <div>Google account connected</div>;
}
```
## Providing scopes

Most providers have access control in the form of OAuth scopes. These are the permissions that the user will see on the authorization screen (e.g., "Your App wants access to your calendar"). For instance, to read Google Drive content, you need the `https://www.googleapis.com/auth/drive.readonly` scope:

```tsx
'use client';

import { useUser } from '@stackframe/stack';

export default function Page() {
  const user = useUser({ or: 'redirect' });
  // Redirects to the Google authorization page, requesting access to Google Drive
  const account = user.useConnectedAccount('google', {
    or: 'redirect',
    scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  });

  // Account is always defined because of the redirect
  return <div>Google Drive connected</div>;
}
```
Check your provider's API documentation to find a list of available scopes.

## Retrieving the access token

Once connected with an OAuth provider, obtain the access token with the `account.getAccessToken()` function. Check your provider's API documentation to understand how you can use this token to authorize the user in requests.

```tsx
'use client';

import { useEffect, useState } from 'react';
import { useUser } from '@stackframe/stack';

export default function Page() {
  const user = useUser({ or: 'redirect' });
  const account = user.useConnectedAccount('google', {
    or: 'redirect',
    scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  });
  const { accessToken } = account.useAccessToken();
  const [response, setResponse] = useState<any>();

  useEffect(() => {
    fetch('https://www.googleapis.com/drive/v3/files', {
      headers: { Authorization: `Bearer ${accessToken}` },
    })
      .then((res) => res.json())
      .then((data) => setResponse(data))
      .catch((err) => console.error(err));
  }, [accessToken]);

  return <div>{response ? JSON.stringify(response) : 'Loading...'}</div>;
}
```
## Sign-in default scopes

To avoid showing the authorization page twice, you can already request scopes during the sign-in flow. This approach is optional. Some applications may prefer to request extra permissions only when needed, while others might want to obtain all necessary permissions upfront.

To do this, edit the `oauthScopesOnSignIn` setting of your `stackServerApp`:

```tsx
export const stackServerApp = new StackServerApp({
  // ...your other settings...
  oauthScopesOnSignIn: {
    google: ['https://www.googleapis.com/auth/drive.readonly'],
  },
});
```

## Account merging strategies

When a user attempts to sign in with an OAuth provider that matches an existing account, Neon Auth uses the following behavior:

- If a user signs in with an OAuth provider that matches an existing account, Neon Auth links the OAuth identity to the existing account
- The user is signed into their existing account
- For security, this linking requires the credentials on both sides to be verified

**Note**: The set of OAuth providers Neon Auth supports (Google, GitHub, and Microsoft) and their scopes are pre-configured; see the next section for how to manage which providers are enabled for your project.

## Managing OAuth providers via the UI and API

You can add, update, and remove OAuth providers directly in the Neon Auth dashboard UI. For advanced or automated workflows, you can also manage providers programmatically using the Neon Auth API. See [Manage OAuth providers via API](https://neon.com/docs/neon-auth/api#manage-oauth-providers-via-api) for detailed documentation and examples of all available endpoints.

---

# Source: https://neon.com/llms/neon-auth-concepts-orgs-and-teams.txt

# Organizations and Teams

> The "Organizations and Teams" document outlines the structure and management of organizations and teams within Neon, detailing how users can create, manage, and collaborate within these entities.

## Source

- [Organizations and Teams HTML](https://neon.com/docs/neon-auth/concepts/orgs-and-teams): The original HTML version of this documentation

Teams provide a structured way to group users and manage their permissions. Users can belong to multiple teams simultaneously, allowing them to represent departments, B2B customers, or projects.

The server can perform all operations on a team, but the client can only carry out some actions if the user has the necessary permissions. This applies to all actions that can be performed on a server/client-side `User` object and a `Team` object.

## Concepts

### Team permissions

If you attempt to perform an action without the necessary team permissions, the function will throw an error. Always check if the user has the required permission before performing any action. Learn more about permissions [here](https://neon.com/docs/neon-auth/concepts/permissions).

Here is an example of how to check if a user has a specific permission on the client:

```tsx
const user = useUser({ or: 'redirect' });
const team = user.useTeam('some-team-id');

if (!team) {
  return <div>Team not found</div>;
}

const hasPermission = user.usePermission(team, '$invite_members');
if (!hasPermission) {
  return <div>No permission</div>;
}

// Perform corresponding action like inviting a user
```
### Team profile

A user can have a different profile for each team they belong to (note that this is different from the user's personal profile). This profile contains information like `displayName` and `profileImageUrl`. The team profile can be left empty, in which case it automatically falls back to the user's personal profile information.

The team profile is visible to all other users in the team that have the `$read_members` permission.

## Retrieving a user's teams

You can list all teams a user belongs to using the `listTeams` or `useTeams` functions, or fetch a specific team with `getTeam` or `useTeam`. These functions work on both clients and servers.

Tab: Client Component

```tsx
const user = useUser({ or: 'redirect' });
const allTeams = user.useTeams();
const someTeam = user.useTeam('some-team-id'); // May be null if the user is not a member of this team

return (
  <div>
    {allTeams.map((team) => (
      <div key={team.id}>{team.displayName}</div>
    ))}
    {someTeam ? someTeam.displayName : 'Not a member of this team'}
  </div>
);
```
Tab: Server Component

```tsx
const user = await stackServerApp.getUser({ or: 'redirect' });
const allTeams = await user.listTeams();
const someTeam = await user.getTeam('some-team-id'); // May be null if the user is not a member of this team

return (
  <div>
    {allTeams.map((team) => (
      <div key={team.id}>{team.displayName}</div>
    ))}
    {someTeam ? someTeam.displayName : 'Not a member of this team'}
  </div>
);
```
## Creating a team

To create a team, use the `createTeam` function on the `User` object. The user will be added to the team with the default team creator permissions.

```tsx
const team = await user.createTeam({
  displayName: 'New Team',
});
```

To create a team on the server without adding a specific user, use the `createTeam` function on the `ServerApp` object:

```tsx
const team = await stackServerApp.createTeam({
  displayName: 'New Team',
});
```

## Updating a team

You can update a team with the `update` function on the `Team` object. On the client, the user must have the `$update_team` permission to perform this action.

```tsx
await team.update({
  displayName: 'New Name',
});
```

## Custom team metadata

You can store custom metadata on a team object, similar to the user object. The metadata can be any JSON object.

- `clientMetadata`: Can be read and updated on both the client and server sides.
- `serverMetadata`: Can only be read and updated on the server side.
- `clientReadOnlyMetadata`: Can be read on both the client and server sides, but can only be updated on the server side.

```tsx
await team.update({
  clientMetadata: {
    customField: 'value',
  },
});

console.log(team.clientMetadata.customField); // 'value'
```

## List users in a team

You can list all users in a team with the `listUsers` function or the `useUsers` hook on the `Team` object. Note that if you want to get the team profile, you need to get it with `user.teamProfile`. On the client, the current user must have the `$read_members` permission in the team to perform this action.

Tab: Client Component

```tsx
// ... retrieve the team and ensure the user has the necessary permissions
const users = team.useUsers();

return (
  <div>
    {users.map((user) => (
      <div key={user.id}>{user.teamProfile.displayName}</div>
    ))}
  </div>
);
```
Tab: Server Component

```tsx
// ... retrieve the team
const users = await team.listUsers();

return (
  <div>
    {users.map((user) => (
      <div key={user.id}>{user.teamProfile.displayName}</div>
    ))}
  </div>
);
```
## Get current user's team profile

You can get the current user's team profile with the `getTeamProfile` or `useTeamProfile` function on the `User` object. This function returns the team profile for the team with the given ID.

Tab: Client Component

```tsx
const teamProfile = user.useTeamProfile(team);
```

Tab: Server Component

```tsx
const teamProfile = await user.getTeamProfile(team);
```

## Invite a user to a team

You can invite a user to a team using the `inviteUser` function on the `Team` object. The user will receive an email with a link to join the team. On the client side, the current user must have the `$invite_members` permission to perform this action.

```tsx
await team.inviteUser(email);
```

## Adding a user to a team

If you want to add a user to a team without sending an email, use the `addUser` function on the `ServerTeam` object. This function can only be called on the server side.

```tsx
await team.addUser(user.id);
```

## Removing a user from a team

You can remove a user from a team with the `removeUser` function on the `Team` object. On the client side, the current user must have the `$remove_members` permission to perform this action.

```tsx
await team.removeUser(user.id);
```

## Leaving a team

All users can leave a team without any permissions required.

```tsx
const team = await user.getTeam('some-team-id');
await user.leaveTeam(team);
```

## Deleting a team

You can delete a team with the `delete` function on the `Team` object. On the client side, the current user must have the `$delete_team` permission to perform this action.

```tsx
await team.delete();
```

**Note**: Team creation and management is handled through the Neon Auth API. All team operations are performed programmatically using the provided functions and hooks.

---

# Source: https://neon.com/llms/neon-auth-concepts-permissions.txt

# App/User RBAC Permissions

> The "Permissions & RBAC" document outlines the role-based access control (RBAC) system in Neon, detailing how permissions are assigned and managed to control user access to resources within the platform.

## Source

- [App/User RBAC Permissions HTML](https://neon.com/docs/neon-auth/concepts/permissions): The original HTML version of this documentation

> If you're looking for information about who can add or manage Neon Auth in your Neon project, see [Permissions overview](https://neon.com/docs/neon-auth/permissions-roles).

Neon Auth supports two types of permissions for your application's users:

- **Team permissions** control what a user can do within a specific team
- **User permissions** control what a user can do globally, across the entire project

Both permission types can be managed from the dashboard, and both support arbitrary nesting.

## Team Permissions

Team permissions control what a user can do within each team. You can create and assign permissions to team members from the Neon Console. These permissions could include actions like `create_post` or `read_secret_info`, or roles like `admin` or `moderator`.

Within your app, you can verify if a user has a specific permission within a team. Permissions can be nested to create a hierarchical structure. For example, an `admin` permission can include both `moderator` and `user` permissions. We provide tools to help you verify whether a user has a permission directly or indirectly.

### System Permissions

Neon Auth comes with a few predefined team permissions known as system permissions. These permissions start with a dollar sign (`$`).
While you can assign these permissions to members or include them within other permissions, you cannot modify them, as they are integral to the Neon Auth backend system.

### Checking if a User has a Permission

To check whether a user has a specific permission, use the `getPermission` method or the `usePermission` hook on the `User` object. This returns the `Permission` object if the user has it; otherwise, it returns `null`. Always perform permission checks on the server side for business logic, as client-side checks can be bypassed. Here's an example:

Tab: Client Component

```tsx
"use client";

import { useUser } from "@stackframe/stack";

export function CheckUserPermission() {
  const user = useUser({ or: 'redirect' });
  const team = user.useTeam('some-team-id');
  const permission = user.usePermission(team, 'read');

  // Don't rely on client-side permission checks for business logic.
  return (
    <div>{permission ? 'You have the read permission' : 'You shall not pass'}</div>
  );
}
```
Tab: Server Component

```tsx
import { stackServerApp } from '@/stack/server';

export default async function CheckUserPermission() {
  const user = await stackServerApp.getUser({ or: 'redirect' });
  const team = await stackServerApp.getTeam('some-team-id');
  const permission = await user.getPermission(team, 'read');

  // This is a server-side check, so it's secure.
  return (
    <div>{permission ? 'You have the read permission' : 'You shall not pass'}</div>
  );
}
```
### Listing All Permissions of a User

To get a list of all permissions a user has, use the `listPermissions` method or the `usePermissions` hook on the `User` object. This method retrieves both direct and indirect permissions. Here is an example:

Tab: Client Component

```tsx
"use client";

import { useUser } from "@stackframe/stack";

export function DisplayUserPermissions() {
  const user = useUser({ or: 'redirect' });
  const permissions = user.usePermissions();

  return (
    <div>
      {permissions.map((permission) => (
        <div key={permission.id}>{permission.id}</div>
      ))}
    </div>
  );
}
```
Tab: Server Component

```tsx
import { stackServerApp } from '@/stack/server';

export default async function DisplayUserPermissions() {
  const user = await stackServerApp.getUser({ or: 'redirect' });
  const permissions = await user.listPermissions();

  return (
    <div>
      {permissions.map((permission) => (
        <div key={permission.id}>{permission.id}</div>
      ))}
    </div>
  );
}
```
### Granting a Permission to a User

To grant a permission to a user, use the `grantPermission` method on the `ServerUser`. Here's an example:

```tsx
const team = await stackServerApp.getTeam('teamId');
const user = await stackServerApp.getUser();
await user.grantPermission(team, 'read');
```

### Revoking a Permission from a User

To revoke a permission from a user, use the `revokePermission` method on the `ServerUser`. Here's an example:

```tsx
const team = await stackServerApp.getTeam('teamId');
const user = await stackServerApp.getUser();
await user.revokePermission(team, 'read');
```

## Project Permissions

Project permissions are global permissions that apply to a user across the entire project, regardless of team context. These permissions are useful for handling things like premium plan subscriptions or global admin access.

### Checking if a User has a Project Permission

To check whether a user has a specific project permission, use the `getPermission` method or the `usePermission` hook. Here's an example:

Tab: Client Component

```tsx
"use client";

import { useUser } from "@stackframe/stack";

export function CheckGlobalPermission() {
  const user = useUser({ or: 'redirect' });
  const permission = user.usePermission('access_admin_dashboard');

  return (
    <div>{permission ? 'You can access the admin dashboard' : 'Access denied'}</div>
  );
}
```
Tab: Server Component

```tsx
import { stackServerApp } from '@/stack/server';

export default async function CheckGlobalPermission() {
  const user = await stackServerApp.getUser({ or: 'redirect' });
  const permission = await user.getPermission('access_admin_dashboard');

  return (
    <div>{permission ? 'You can access the admin dashboard' : 'Access denied'}</div>
  );
}
```
### Listing All Project Permissions

To get a list of all global permissions a user has, use the `listPermissions` method or the `usePermissions` hook:

Tab: Client Component

```tsx
"use client";

import { useUser } from "@stackframe/stack";

export function DisplayGlobalPermissions() {
  const user = useUser({ or: 'redirect' });
  const permissions = user.usePermissions();

  return (
    <div>
      {permissions.map((permission) => (
        <div key={permission.id}>{permission.id}</div>
      ))}
    </div>
  );
}
```
Tab: Server Component

```tsx
import { stackServerApp } from '@/stack/server';

export default async function DisplayGlobalPermissions() {
  const user = await stackServerApp.getUser({ or: 'redirect' });
  const permissions = await user.listPermissions();

  return (
    <div>
      {permissions.map((permission) => (
        <div key={permission.id}>{permission.id}</div>
      ))}
    </div>
  );
}
```
### Granting a Project Permission

To grant a global permission to a user, use the `grantPermission` method:

```tsx
const user = await stackServerApp.getUser();
await user.grantPermission('access_admin_dashboard');
```

### Revoking a Project Permission

To revoke a global permission from a user, use the `revokePermission` method:

```tsx
const user = await stackServerApp.getUser();
await user.revokePermission('access_admin_dashboard');
```

> System permissions (those prefixed with `$`) are pre-configured and cannot be changed; custom team and project permissions are created and managed from the Neon Console, as described above.

---

# Source: https://neon.com/llms/neon-auth-concepts-stack-app.txt

# The StackApp Object

> The document details the StackApp object within Neon, explaining its structure and role in managing authentication and authorization processes for applications using Neon's platform.

## Source

- [The StackApp Object HTML](https://neon.com/docs/neon-auth/concepts/stack-app): The original HTML version of this documentation

> The most important object in your Neon Auth integration

By now, you may have seen the `useStackApp()` hook and the `stackServerApp` variable. Both return a `StackApp`, of type `StackClientApp` and `StackServerApp` respectively.

Nearly all of Neon Auth's functionality is on your `StackApp` object. Think of this object as the "connection" from your code to Neon Auth's servers. Each app is always associated with one specific project ID (by default the one found in your environment variables).

## `getXyz`/`listXyz` vs. `useXyz`

Most asynchronous functions on `StackApp` come in two flavors: `getXyz`/`listXyz` and `useXyz`. The former are asynchronous fetching functions which return a `Promise`, while the latter are React hooks that [suspend](https://react.dev/reference/react/Suspense) the current component until the data is available.

Normally, you would choose between the two based on whether you are in a React Server Component or a React Client Component. However, there are some scenarios where you use `getXyz` on the client, for example as the callback of an `onClick` handler.

```tsx
// server-component.tsx
async function ServerComponent() {
  const app = stackServerApp;
  // returns a Promise, must be awaited
  const user = await app.getUser();
  return <div>{user.displayName}</div>;
}

// client-component.tsx
'use client';

function ClientComponent() {
  const app = useStackApp();
  // returns the value directly
  const user = app.useUser();
  return <div>{user.displayName}</div>;
}
```

## Client vs. server

`StackClientApp` contains everything needed to build a frontend application, for example the currently authenticated user. It requires a publishable client key in its initialization (usually set by the `NEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY` environment variable).

`StackServerApp` has all the functionality of `StackClientApp`, but also some functions with elevated permissions. This requires a secret server key (usually set by the `STACK_SECRET_SERVER_KEY` environment variable), which **must always be kept secret**.

**Note**: Some of the functions have different return types; for example, `StackClientApp.getUser()` returns a `Promise<CurrentUser>` while `StackServerApp.getUser()` returns a `Promise<CurrentServerUser>`. The `Server` prefix indicates that the object contains server-only functionality.
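For reference, here is a minimal sketch of initializing the server app in a Next.js project, assuming the conventional `stack/server.ts` location used by the examples in these docs (exact options depend on your setup; see the setup guide):

```tsx
// stack/server.ts: a sketch; adjust options to your project
import { StackServerApp } from '@stackframe/stack';

export const stackServerApp = new StackServerApp({
  // Store auth tokens in Next.js cookies
  tokenStore: 'nextjs-cookie',
  // The project ID, publishable client key, and secret server key are read from
  // NEXT_PUBLIC_STACK_PROJECT_ID, NEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY,
  // and STACK_SECRET_SERVER_KEY environment variables by default.
});
```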
---

# Source: https://neon.com/llms/neon-auth-concepts-team-selection.txt

# Selecting a Team

> The document "Selecting a Team" outlines the process for Neon users to choose and manage teams within the Neon platform, detailing steps for team selection and configuration.

## Source

- [Selecting a Team HTML](https://neon.com/docs/neon-auth/concepts/team-selection): The original HTML version of this documentation

A user can be a member of multiple teams, so most websites using teams will need a way to select a "current team" that the user is working on. There are two primary methods to accomplish this:

- **Deep link**: Each team has a unique URL, for example, `your-website.com/team/<team-id>`. When a team is selected, it redirects to a page with that team's URL.
- **Current team**: When a user selects a team, the app stores the team as a global "current team" state. In this way, the URL of the current team might be something like `your-website.com/current-team`, and the URL won't change after switching teams.

## Deep Link Method

The deep link method is generally recommended because it avoids some common issues associated with the current team method. If two users share a link while using deep link URLs, the receiving user will always be directed to the correct team's information based on the link.

## Current Team Method

While the current team method can be simpler to implement, it has a downside. If a user shares a link, the recipient might see information about the wrong team (if their "current team" is set differently). This method can also cause problems when a user has multiple browser tabs open with different teams.

## Selected Team Switcher

To facilitate team selection, Neon Auth provides the `SelectedTeamSwitcher` component, a dropdown that lists the user's teams.

You can import and use the `SelectedTeamSwitcher` component for the "current team" method. It updates the `selectedTeam` when a user selects a team:

```tsx
import { SelectedTeamSwitcher } from '@stackframe/stack';

export function MyPage() {
  return (
    <div>
      <SelectedTeamSwitcher />
    </div>
  );
}
```

To combine the switcher with the deep link method, you can pass in `urlMap` and `selectedTeam`. The `urlMap` is a function to generate a URL based on the team information, and `selectedTeam` is the team that the user is currently working on. This lets you implement "deep link" + "most recent team". The component will update the `user.selectedTeam` with the `selectedTeam` prop:

```tsx
<SelectedTeamSwitcher urlMap={(team) => `/team/${team.id}`} selectedTeam={team} />
```

To implement the "deep link" + "default team" method, where you update the `selectedTeam` only when the user clicks "set to default team" or similar, pass `noUpdateSelectedTeam`:

```tsx
<SelectedTeamSwitcher urlMap={(team) => `/team/${team.id}`} selectedTeam={team} noUpdateSelectedTeam />
```

## Example: Deep Link + Most Recent Team

First, create a page at `/app/team/[teamId]/page.tsx` to display information about a specific team:

```tsx
'use client';

import { useUser, SelectedTeamSwitcher } from '@stackframe/stack';

export default function TeamPage({ params }: { params: { teamId: string } }) {
  const user = useUser({ or: 'redirect' });
  const team = user.useTeam(params.teamId);

  if (!team) {
    return <div>Team not found</div>;
  }

  return (
    <div>
      <SelectedTeamSwitcher urlMap={(team) => `/team/${team.id}`} selectedTeam={team} />
      <h1>Team Name: {team.displayName}</h1>
      <p>You are a member of this team.</p>
    </div>
  );
}
```

Next, create a page to display all teams at `/app/team/page.tsx`:

```tsx
'use client';

import { useRouter } from 'next/navigation';
import { useUser } from '@stackframe/stack';

export default function TeamsPage() {
  const user = useUser({ or: 'redirect' });
  const teams = user.useTeams();
  const router = useRouter();
  const selectedTeam = user.selectedTeam;

  return (
    <div>
      {selectedTeam && (
        <button onClick={() => router.push(`/team/${selectedTeam.id}`)}>
          Open most recent team
        </button>
      )}
      <h1>All Teams</h1>
      {teams.map((team) => (
        <button key={team.id} onClick={() => router.push(`/team/${team.id}`)}>
          Open {team.displayName}
        </button>
      ))}
    </div>
  );
}
```

Now, if you navigate to `http://localhost:3000/team`, you should be able to see and interact with the teams.

---

# Source: https://neon.com/llms/neon-auth-concepts-user-onboarding.txt

# User Onboarding

> The "User Onboarding" document outlines the process for new users to set up and access their Neon accounts, detailing steps for account creation, authentication, and initial configuration within the Neon platform.

## Source

- [User Onboarding HTML](https://neon.com/docs/neon-auth/concepts/user-onboarding): The original HTML version of this documentation

> Implementing a user onboarding page and collecting information on sign-up

By default, Neon Auth collects information such as email addresses from OAuth providers. Sometimes, you may want to collect additional information from users during sign-up, for example a name or address.

The most straightforward approach is to redirect users to an onboarding page right after they sign up. However, this is not recommended for the following reasons:

1. Users can accidentally (or purposefully) close or navigate away from the page before completing the onboarding.
2. Redirect URLs may vary depending on the context. For instance, if a user is redirected to a sign-in page after trying to access a protected page, they'll expect to return to the original protected page post-authentication.

Instead, a more reliable strategy is to store an `onboarded` flag in the user's metadata and redirect users to the onboarding page if they haven't completed it yet.

## Example implementation

Let's say you have an onboarding page that asks for an address and stores it in the user's [metadata](https://neon.com/docs/neon-auth/concepts/custom-user-data):

```jsx
'use client';

import { useState } from 'react';
import { useUser } from '@stackframe/stack';
import { useRouter } from 'next/navigation';

export default function OnboardingPage() {
  const user = useUser();
  const router = useRouter();
  const [address, setAddress] = useState('');

  return (
    <>
      <input value={address} onChange={(e) => setAddress(e.target.value)} />
      <button
        onClick={async () => {
          await user.update({
            clientMetadata: { onboarded: true, address },
          });
          router.push('/');
        }}
      >
        Submit
      </button>
    </>
  );
}
```

**Note**: While the above implementation offers a basic onboarding process, users can still skip onboarding by directly sending an API request to update the `clientMetadata.onboarded` flag. If you want to ensure that onboarding cannot be bypassed on the API level, you should create a server endpoint to validate and store the data, then save the `onboarded` flag in `clientReadOnlyMetadata` on the server side after validation.

Next, we can create a hook/function to check if the user has completed onboarding and redirect them to the onboarding page:

Tab: Client Hook

```jsx
'use client';

import { useEffect } from 'react';
import { useUser } from '@stackframe/stack';
import { useRouter } from 'next/navigation';

export function useOnboarding() {
  const user = useUser();
  const router = useRouter();

  useEffect(() => {
    if (user && !user.clientMetadata?.onboarded) {
      router.push('/onboarding');
    }
  }, [user]);
}
```

Tab: Server Function

```jsx
import { stackServerApp } from '@/stack/server';
import { redirect } from 'next/navigation';

export async function ensureOnboarded() {
  const user = await stackServerApp.getUser();
  if (user && !user.serverMetadata?.onboarded) {
    redirect('/onboarding');
  }
}
```

To add an onboarding page and guarantee users hit it, create a dedicated `/onboarding` page and gate protected pages with the hook/server function above so users are always redirected there until completion. On that page, validate details on your backend and then set the `onboarded` metadata flag. Follow the guide on [Custom User Data](https://neon.com/docs/neon-auth/concepts/custom-user-data) for implementation details.
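Here is a minimal sketch of the server endpoint suggested in the note above, assuming a Next.js route handler (the route path and field names are illustrative):

```typescript
// app/api/onboard/route.ts: hypothetical endpoint; adapt validation to your data
import { NextRequest, NextResponse } from 'next/server';
import { stackServerApp } from '@/stack/server';

export async function POST(req: NextRequest) {
  const user = await stackServerApp.getUser();
  if (!user) {
    return NextResponse.json({ error: 'Not signed in' }, { status: 401 });
  }

  const { address } = await req.json();
  // Validate on the server so the flag cannot be set with bogus data
  if (typeof address !== 'string' || address.trim().length === 0) {
    return NextResponse.json({ error: 'Invalid address' }, { status: 400 });
  }

  await user.update({
    // Store the validated data server-side
    serverMetadata: { address },
    // clientReadOnlyMetadata is writable only from the server,
    // so clients cannot mark themselves as onboarded
    clientReadOnlyMetadata: { onboarded: true },
  });

  return NextResponse.json({ ok: true });
}
```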
Here are examples of how to use the hook and server function in your components:

Tab: Client Component

```jsx
import { useOnboarding } from '@/app/onboarding-hooks';
import { useUser } from '@stackframe/stack';

export default function HomePage() {
  useOnboarding();
  const user = useUser();

  return <div>Welcome to the app, {user.displayName}</div>;
}
```

Tab: Server Component

```jsx
import { ensureOnboarded } from '@/app/onboarding-functions';
import { stackServerApp } from '@/stack/server';

export default async function HomePage() {
  await ensureOnboarded();
  const user = await stackServerApp.getUser();

  return <div>Welcome to the app, {user.displayName}</div>;
}
```

---

# Source: https://neon.com/llms/neon-auth-create-users.txt

# Creating users with Neon Auth

> The document "Creating users with Neon Auth" outlines the process for creating and managing user accounts within the Neon platform using Neon Auth, detailing steps for user registration and authentication setup.

## Source

- [Creating users with Neon Auth HTML](https://neon.com/docs/neon-auth/create-users): The original HTML version of this documentation

You can create users in Neon Auth using either the Neon Console or the API. This is useful for development, testing, or manual onboarding, as it lets you quickly add users and see their profiles appear in your `neon_auth.users_sync` table.

## Creating users in the Console

You can create users directly from the Neon Console — no app integration or API required.

1. Go to your project's **Auth** page in the Neon Console.
2. Click **Create user** and fill in the required details.
3. The new user will appear in your user list and be available in your database.

## Creating users via the API

You can also create users programmatically using the Neon API:

```bash
curl --request POST \
  --url 'https://console.neon.tech/api/v2/projects/auth/user' \
  --header "authorization: Bearer $NEON_API_KEY" \
  --header 'content-type: application/json' \
  --data '{
    "project_id": "project-id",
    "auth_provider": "stack",
    "email": "user@example.com",
    "name": "Example User"
  }'
```

The new user will be created and automatically available in your `neon_auth.users_sync` table. For more details, see [Neon Auth API Reference](https://neon.com/docs/guides/neon-auth-api#create-users).
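The same call from TypeScript, mirroring the curl request above (a sketch; assumes `NEON_API_KEY` is available in the environment):

```typescript
// Create a Neon Auth user via the Neon API; mirrors the curl example above
const res = await fetch('https://console.neon.tech/api/v2/projects/auth/user', {
  method: 'POST',
  headers: {
    authorization: `Bearer ${process.env.NEON_API_KEY}`,
    'content-type': 'application/json',
  },
  body: JSON.stringify({
    project_id: 'project-id', // your Neon project ID
    auth_provider: 'stack',
    email: 'user@example.com',
    name: 'Example User',
  }),
});

if (!res.ok) {
  throw new Error(`Failed to create user: ${res.status} ${await res.text()}`);
}
console.log(await res.json());
```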

---

# Source: https://neon.com/llms/neon-auth-customization-custom-pages.txt

# Custom Pages

> The "Custom Pages" documentation outlines how Neon users can create and manage custom authentication pages, detailing configuration options and integration steps specific to Neon's platform.

## Source

- [Custom Pages HTML](https://neon.com/docs/neon-auth/customization/custom-pages): The original HTML version of this documentation

Custom pages allow you to take full control over the layout and logic flow of authentication pages in your application. Instead of using the default pages provided by Neon Auth, you can build your own using our built-in components or low-level functions. By default, `StackHandler` creates all the authentication pages you need; however, you can replace them with your own custom implementations for a more tailored user experience.

## Simple Example

For example, if you want to create a custom sign-in page with a customized title on the top, you can create a file at `app/signin/page.tsx`:

```tsx
import { SignIn } from '@stackframe/stack';

export default function CustomSignInPage() {
  return (
    <div>
      <h1>My Custom Sign In page</h1>
      <SignIn />
    </div>
  );
}
```

Then you can instruct the Stack app in `stack.ts` to use your custom sign-in page:

```tsx
export const stackServerApp = new StackServerApp({
  // ...
  // add these three lines
  urls: {
    signIn: '/signin',
  },
});
```

You are now all set! If you visit the `/signin` page, you should see your custom sign-in page. When users attempt to access a protected page or navigate to the default `/handler/sign-in` URL, they will automatically be redirected to your new custom sign-in page.

## Building From Scratch

While the simple approach above lets you customize the layout while using Stack's pre-built components, sometimes you need complete control over both the UI and authentication logic. We also provide the low-level functions powering our components, so that you can build your own logic. For example, to build a custom OAuth sign-in button, create a file at `app/signin/page.tsx`:

```tsx
'use client';

import { useStackApp } from '@stackframe/stack';

export default function CustomOAuthSignIn() {
  const app = useStackApp();

  return (
    <div>
      <h1>My Custom Sign In page</h1>
      <button onClick={async () => await app.signInWithOAuth('google')}>
        Sign in with Google
      </button>
    </div>
  );
}
```

Again, edit the Stack app in `stack.ts` to use your custom sign-in page:

```tsx
export const stackServerApp = new StackServerApp({
  // ...
  // add these three lines
  urls: {
    signIn: '/signin',
  },
});
```

As above, visit the `/signin` page to see your newly created custom OAuth page.

---

# Source: https://neon.com/llms/neon-auth-customization-custom-styles.txt

# Colors and styles

> The "Colors and Styles" documentation outlines how Neon users can customize the appearance of their authentication pages by modifying color schemes and styles through CSS, enabling tailored visual integration with their applications.

## Source

- [Colors and styles HTML](https://neon.com/docs/neon-auth/customization/custom-styles): The original HTML version of this documentation

Customizing the styles of your Neon Auth components allows you to maintain your brand identity while leveraging the pre-built functionality. This approach is ideal when you want to quickly align the authentication UI with your application's design system without building custom components from scratch.

Neon Auth's theming system uses a React context to store colors and styling variables that can be easily overridden. You can customize the following color variables to match your brand:

- `background`: Main background color of the application
- `foreground`: Main text color on the background
- `card`: Background color for card elements
- `cardForeground`: Text color for card elements
- `popover`: Background color for popover elements like dropdowns
- `popoverForeground`: Text color for popover elements
- `primary`: Primary brand color, used for buttons and important elements
- `primaryForeground`: Text color on primary-colored elements
- `secondary`: Secondary color for less prominent elements
- `secondaryForeground`: Text color on secondary-colored elements
- `muted`: Color for muted or disabled elements
- `mutedForeground`: Text color for muted elements
- `accent`: Accent color for highlights and emphasis
- `accentForeground`: Text color on accent-colored elements
- `destructive`: Color for destructive actions like delete buttons
- `destructiveForeground`: Text color on destructive elements
- `border`: Color used for borders
- `input`: Border color for input fields
- `ring`: Focus ring color for interactive elements

And some other variables:

- `radius`: Border radius of components like buttons, inputs, etc.

These variables are CSS variables, so you can use any valid CSS color syntax like `hsl(0, 0%, 0%)`, `black`, `#fff`, `rgb(255, 0, 0)`, etc. The colors can be different for light and dark mode, allowing you to create a cohesive experience across both themes.

You can pass these into the `StackTheme` component (in your `layout.tsx` file if you followed the Getting Started guide) as follows:

```jsx
const theme = {
  light: {
    primary: 'red',
  },
  dark: {
    primary: '#00FF00',
  },
  radius: '8px',
};

// ...

<StackTheme theme={theme}>
  {/* children */}
</StackTheme>
```

---

# Source: https://neon.com/llms/neon-auth-customization-dark-mode.txt

# Dark/light mode

> The document outlines how to customize the Neon interface by switching between dark and light modes, detailing the steps and configurations necessary for users to implement these visual themes.

## Source

- [Dark/light mode HTML](https://neon.com/docs/neon-auth/customization/dark-mode): The original HTML version of this documentation

Neon Auth components support light and dark mode out of the box. All UI components automatically adapt their colors, shadows, and contrast levels based on the selected theme.
You can switch between light and dark mode using [next-themes](https://github.com/pacocoursey/next-themes) (or any other library that changes the `data-theme` or `class` attribute of the `html` element to `dark` or `light`). Here is an example of how to set up next-themes with Neon Auth (find more details in the [next-themes documentation](https://github.com/pacocoursey/next-themes)):

## Install next-themes:

```bash
npm install next-themes
```

## Add the `ThemeProvider` to your `layout.tsx` file:

```jsx
import { ThemeProvider } from 'next-themes';
import { StackTheme } from '@stackframe/stack';

export default function Layout({ children }) {
  return (
    <html suppressHydrationWarning>
      <body>
        {/* ThemeProvider enables theme switching throughout the application.
            defaultTheme="system" uses the user's system preference as the default.
            attribute="class" applies the theme by changing the class on the html element. */}
        <ThemeProvider defaultTheme="system" attribute="class">
          {/* StackTheme ensures Neon Auth components adapt to the current theme */}
          <StackTheme>{children}</StackTheme>
        </ThemeProvider>
      </body>
    </html>
  );
}
```

## Build a color mode switcher component:

```jsx
'use client';

import { useTheme } from 'next-themes';

export default function ColorModeSwitcher() {
  // useTheme hook provides the current theme and a function to change it
  const { theme, setTheme } = useTheme();

  return (
    <button onClick={() => setTheme(theme === 'dark' ? 'light' : 'dark')}>
      Toggle theme
    </button>
  );
}
```

Now, if you put the `ColorModeSwitcher` component in your app, you should be able to switch between light and dark mode. There should be no flickering or re-rendering of the page after reloading.

---

# Source: https://neon.com/llms/neon-auth-customization-internationalization.txt

# Internationalization

> The "Internationalization" document outlines the process for customizing language settings in Neon applications, detailing how to implement and manage multilingual support within the platform.

## Source

- [Internationalization HTML](https://neon.com/docs/neon-auth/customization/internationalization): The original HTML version of this documentation

Internationalization (i18n) allows your application to support multiple languages, making it accessible to users worldwide. Neon Auth provides built-in internationalization support for its components, enabling you to offer a localized authentication experience with minimal effort.

## Setup

Internationalization with Neon Auth is very straightforward. Simply pass the `lang` prop to the `StackProvider` component, and all the pages will be translated to the specified language:

```jsx
<StackProvider app={stackServerApp} lang="de-DE">
  {/* ... */}
</StackProvider>
```

By default, if no language is provided, it will be set to `en-US`. You can choose which languages to use by employing your own methods, such as storing the language in `localStorage` or using the user's browser language (see the sketch after the list below).

## Supported languages

- `en-US`: English (United States)
- `de-DE`: German (Germany)
- `es-419`: Spanish (Latin America)
- `es-ES`: Spanish (Spain)
- `fr-CA`: French (Canada)
- `fr-FR`: French (France)
- `it-IT`: Italian (Italy)
- `pt-BR`: Portuguese (Brazil)
- `pt-PT`: Portuguese (Portugal)
- `zh-CN`: Chinese (China)
- `zh-TW`: Chinese (Taiwan)
- `ja-JP`: Japanese (Japan)
- `ko-KR`: Korean (South Korea)
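A minimal sketch of picking the language from the browser (the helper name and fallback choice are illustrative):

```tsx
// Pick the first browser-preferred language that Neon Auth supports,
// falling back to en-US. The list mirrors the supported languages above.
const SUPPORTED_LANGS = [
  'en-US', 'de-DE', 'es-419', 'es-ES', 'fr-CA', 'fr-FR', 'it-IT',
  'pt-BR', 'pt-PT', 'zh-CN', 'zh-TW', 'ja-JP', 'ko-KR',
];

export function detectLang() {
  const preferred = typeof navigator === 'undefined' ? [] : navigator.languages ?? [];
  return preferred.find((lang) => SUPPORTED_LANGS.includes(lang)) ?? 'en-US';
}
```

You could then pass `lang={detectLang()}` to `StackProvider`.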
---

# Source: https://neon.com/llms/neon-auth-demo.txt

# Neon Auth Demo

> The "Neon Auth Demo" documentation outlines the process for setting up and demonstrating authentication features within the Neon platform, guiding users through configuration steps and example scenarios to implement secure access controls.

## Source

- [Neon Auth Demo HTML](https://neon.com/docs/neon-auth/demo): The original HTML version of this documentation

Related docs:

- [Get started](https://neon.com/docs/guides/neon-auth)

Sample project:

- [Neon Auth Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app)

In this tutorial, we'll walk through some user authentication flows using our [demo todos](https://github.com/neondatabase-labs/neon-auth-demo-app) application, showing how Neon Auth automatically syncs user profiles to your database, and how that can simplify your code.

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

## Prerequisites

Follow the readme to set up the [Neon Auth Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app): Next.js + Drizzle + Stack Auth

> _Use the keys provided by Neon Auth in your project's **Auth** page rather than creating a separate Stack Auth project._

```bash
git clone https://github.com/neondatabase-labs/neon-auth-demo-app.git
```

## Instant user sync

Sign up as Bob, then as Doug in a private window. Open the Neon Console to see their profiles automatically synced:

_No custom sync logic required; profiles are always up-to-date in Postgres._

## Easy user-data joins

Add a few todos as each user. Here's the code that builds the todo list:

```ts
// app/actions.tsx
export async function getTodos() {
  return fetchWithDrizzle(async (db) => {
    return db
      .select({
        id: schema.todos.id,
        task: schema.todos.task,
        isComplete: schema.todos.isComplete,
        insertedAt: schema.todos.insertedAt,
        owner: {
          id: users.id, // [!code highlight]
          email: users.email, // [!code highlight]
        },
      })
      .from(schema.todos)
      .leftJoin(users, eq(schema.todos.ownerId, users.id)) // [!code highlight]
      .orderBy(asc(schema.todos.insertedAt));
  });
}
```

Highlighted code shows:

- User email and ID included in each todo response
- Automatic join between todos and the `users_sync` table

_User data is always available for joins and queries; no extra API calls or sync logic needed._

## Collaboration and analytics

Switch between Bob and Doug's accounts to mark some todos complete - the dashboard updates in real-time. Here's the code that populates this live dashboard:

```ts
// app/users-stats.tsx
async function getUserStats() {
  const stats = await fetchWithDrizzle((db) =>
    db
      .select({
        email: users.email, // [!code highlight]
        name: users.name, // [!code highlight]
        complete: db.$count(todos, and(eq(todos.isComplete, true), eq(todos.ownerId, users.id))),
        total: db.$count(todos, eq(todos.ownerId, users.id)),
      })
      .from(users) // [!code highlight]
      .innerJoin(todos, eq(todos.ownerId, users.id)) // [!code highlight]
      .where(isNull(users.deletedAt))
      .groupBy(users.email, users.name, users.id)
  );
  return stats;
}
```

Highlighted code shows:

- Direct access to synced user profiles
- Simple joins between app data and user data

_Build multi-user features without writing complex sync code; user data is always available and up-to-date in your database._

## Safe user deletion and data cleanup

_Let's simulate what happens when an admin deletes a user account._

To test this, delete Doug's profile directly from the database:

```sql
DELETE FROM neon_auth.users_sync WHERE email LIKE '%doug%';
```

Refresh the todo list, and...
ugh, _ghost todos!_ 👻👻 _(Doug may be gone, but his todos aren't.)_

_In production, this could happen automatically when a user is deleted from your auth provider. Either way, their todos become orphaned - no owner, but still in your database._

**Why?** The starter schema does not include `ON DELETE CASCADE`, so when a user profile is deleted (whether by admin action or auth sync), their todos are left behind. This can clutter your app and confuse your users.

## Safe user deletion and data cleanup: FIXED

_Let's prevent ghost todos with proper database constraints._

Adding foreign key constraints is a best practice we explain in more detail [here](https://neon.com/docs/guides/neon-auth-best-practices#foreign-keys-and-the-users_sync-table).

**Step 1: Clean up your demo**

Orphaned todos will block adding a foreign key. Use Neon's instant restore to roll back your branch: go to the **Restore** page in the Neon Console and roll back to a few minutes ago, before we deleted Doug.

> If you have the Neon CLI installed, you can also use:
>
> ```bash
> neon branches restore production ^self@ --preserve-under-name production_backup
> ```

**Step 2: Add the foreign key constraint**

```sql
ALTER TABLE todos
  ADD CONSTRAINT todos_owner_id_fk
  FOREIGN KEY (owner_id)
  REFERENCES neon_auth.users_sync(id)
  ON DELETE CASCADE;
```

**Step 3: Test it**

Delete Doug's profile again:

```sql
DELETE FROM neon_auth.users_sync WHERE email LIKE '%doug%';
```

Refresh the todo list. This time Doug's todos are automatically cleaned up!

_With this constraint in place, when Neon Auth syncs a user deletion, all their todos will be cleaned up automatically._

## Recap

With Neon Auth, you get:

- ✅ Synchronized user profiles
- ✅ Efficient data queries
- ✅ Automated data cleanup (with foreign key constraints)
- ✅ Simple user data integration

Neon Auth handles user-profile synchronization, and a single foreign key takes care of cleanup.

Read more about Neon Auth in:

- [How it works](https://neon.com/docs/guides/neon-auth-how-it-works)
- [Get started](https://neon.com/docs/guides/neon-auth)

---

# Source: https://neon.com/llms/neon-auth-email-configuration.txt

# Email configuration

> The "Email configuration" document details the setup process for configuring email notifications within the Neon platform, including necessary parameters and settings to ensure proper email delivery functionality.

## Source

- [Email configuration HTML](https://neon.com/docs/neon-auth/email-configuration): The original HTML version of this documentation

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

Related docs:

- [Best practices](https://neon.com/docs/neon-auth/best-practices)
- [Admin API](https://neon.com/docs/neon-auth/api)

## Overview

Neon Auth sends transactional emails for features like user invites, password resets, email verification, and security notifications. To get you started quickly, every Neon project comes with a **Shared Email Server**, which sends emails from `noreply@stackframe.co`. However, for any production application, you must configure a **Custom SMTP Server** to ensure emails are sent reliably from your own domain.

This guide explains how to set up your custom SMTP server and why it's essential for production use.

## Shared vs. Custom SMTP
Understanding the difference between the shared and custom email servers is crucial for moving from development to production.

### Shared email server (for development)

The shared server is enabled by default and is intended for development and testing only. Emails are sent from `noreply@stackframe.co`, which is not ideal for production since users may not trust emails from an unfamiliar domain and it offers no branding for your application.

### Custom SMTP server (for production)

Connecting your own SMTP provider is the recommended approach for all production applications.

- **Professional branding:** Emails are sent from your own domain (e.g., `noreply@yourcompany.com`), building user trust and reinforcing your brand identity.
- **Improved deliverability:** By controlling your own sender reputation with properly configured domains (SPF, DKIM, DMARC), your emails are far more likely to land in the user's inbox.

## How to set up a custom SMTP server

Connecting your own SMTP server is a straightforward process.

### Choose an SMTP provider

First, you need an account with an email service that provides SMTP credentials. If you don't already have one, here are a few popular providers:

- [Amazon SES](https://docs.aws.amazon.com/ses/latest/dg/send-email-smtp.html)
- [Resend](https://resend.com/docs/send-with-smtp)
- [Postmark](https://postmarkapp.com/smtp-service)
- [Twilio SendGrid](https://sendgrid.com/en-us/solutions/email-api/smtp-service)
- [Mailgun](https://www.mailgun.com/features/smtp-server/)
- [Brevo](https://www.brevo.com/free-smtp-server/)

## SMTP configuration details

Once you have an account, find the SMTP credentials in your provider's dashboard; you'll need these details to complete the setup.

| Field | Description | Example |
| ---------------- | ------------------------------------------------------------ | -------------------------- |
| **Host** | Your email server address. | `smtp-relay.brevo.com` |
| **Port** | The SMTP port. `587` is the standard for secure submission. | `587` |
| **Username** | The username for authenticating with your SMTP server. | `your-smtp-username` |
| **Password** | The password or API key provided by your email service. | `your-api-key-or-password` |
| **Sender Email** | The "from" address users will see. | `noreply@yourcompany.com` |
| **Sender Name** | The display name that appears alongside the sender email. | `Your Company` |

### Configure Neon Auth

Navigate to your Neon project and enter your SMTP credentials:

1. Go to your project's **Auth** page and select the **Configuration** tab.
2. Find the **Email server** section.
3. Switch the option from "Shared" to **Custom SMTP server**.
4. Enter the SMTP credentials you obtained from your email provider.
5. Click **Save** to apply the changes.
6. Use the **Send test email** feature to verify that your configuration is correct.

## API configuration

You can also configure your email server settings programmatically using the Neon API. This is useful for automated setups or managing multiple projects.
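If you're scripting this, here is a minimal TypeScript sketch of the same `PATCH` request shown in the curl example below. `NEON_API_KEY` and `PROJECT_ID` are assumed to come from your environment; the SMTP values are the placeholders from the table above:

```typescript
// Hedged sketch: update a project's Neon Auth email server via the Neon API.
async function configureEmailServer() {
  const res = await fetch(
    `https://console.neon.tech/api/v2/projects/${process.env.PROJECT_ID}/auth/email_server`,
    {
      method: 'PATCH',
      headers: {
        Authorization: `Bearer ${process.env.NEON_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        type: 'standard',
        host: 'smtp-relay.brevo.com',
        port: 587,
        username: 'your-smtp-username',
        password: 'your-app-password',
        sender_email: 'noreply@yourcompany.com',
        sender_name: 'Your Company',
      }),
    }
  );
  if (!res.ok) throw new Error(`Failed to update email server: ${res.status}`);
}

configureEmailServer().catch(console.error);
```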
```bash
curl --request PATCH \
  --url 'https://console.neon.tech/api/v2/projects/{project_id}/auth/email_server' \
  --header 'authorization: Bearer YOUR_NEON_API_KEY' \
  --header 'content-type: application/json' \
  --data '{
    "type": "standard",
    "host": "smtp-relay.brevo.com",
    "port": 587,
    "username": "your-smtp-username",
    "password": "your-app-password",
    "sender_email": "noreply@yourcompany.com",
    "sender_name": "Your Company"
  }'
```

## Production best practices

To ensure your transactional emails are reliable, secure, and professional, follow these best practices when configuring your production environment.

- **Configure DKIM, SPF, and DMARC**

  These email authentication standards are essential for proving to inbox providers that your emails are legitimate. Properly configuring them is the most important step you can take to prevent your emails from being marked as spam. Your email provider will have guides on how to set these up for your domain.

- **Implement CAPTCHA on authentication forms**

  Protect your sign-up and password reset forms with a CAPTCHA service (e.g., hCaptcha, Cloudflare Turnstile, Vercel BotID). This is the most effective way to prevent bots from creating fake accounts or spamming your account verification and password reset flows, which can harm your sender reputation and lead to your domain being blocklisted.

  Whenever possible, use Neon Auth components such as `<SignIn />` and `<SignUp />` within your authentication pages, rather than relying on automatically generated pages. These components allow you to add custom logic, including CAPTCHA integration, for enhanced security and flexibility. See [Neon Auth Components](https://neon.com/docs/neon-auth/components/components) for all available options.

- **Encourage social logins (OAuth)**

  Whenever possible, prioritize social sign-ins (e.g., Google, GitHub). This reduces your application's reliance on email-based flows (like verification and password resets).

- **Separate transactional and marketing emails**

  Never use the same domain or IP address for both transactional emails (password resets, invites) and marketing emails (newsletters, promotions). The high volume and potential spam complaints associated with marketing can damage your sender reputation, preventing transactional emails from being delivered.

- **Keep email templates clean and focused**

  If you are customizing the default Neon Auth template in the Stack Auth dashboard, ensure your changes maintain a clear, transactional focus.

  - **Avoid promotional content:** Do not include marketing calls-to-action, sales language, or unnecessary links.
  - **Be direct:** Get straight to the point (e.g., "Please click on the following button to verify your email: [verification button]").
  - **Minimize images and complex styling:** Heavy HTML and multiple images can increase your spam score.
  - **Sanitize user data:** If you include user-provided data (like a name) in an email, ensure it is properly sanitized to prevent security vulnerabilities.

- **Plan for high volume events**

  If you anticipate a large number of sign-ups at once (e.g., from a product launch or marketing campaign), contact your SMTP provider beforehand. Many providers have systems that automatically flag and penalize sudden, unexpected spikes in email volume. Working with them can ensure your sending limits are temporarily raised and your account remains in good standing.
---

# Source: https://neon.com/llms/neon-auth-get-started-accessing-user-data.txt

# Accessing User Data

> The "Accessing User Data" documentation outlines the procedures for Neon users to securely retrieve and manage user data within the Neon database environment.

## Source

- [Accessing User Data HTML](https://neon.com/docs/neon-auth/get-started/accessing-user-data): The original HTML version of this documentation

> Reading and writing user information, and protecting pages

You can build custom components that access the current user in your app. This guide covers the functions and hooks that let you do this.

## Client Component basics

The `useUser()` hook returns the current user in a Client Component. By default, it will return `null` if the user is not signed in.

```tsx
'use client';

import { useUser } from '@stackframe/stack';

export function MyClientComponent() {
  const user = useUser();
  return <div>{user ? `Hello, ${user.displayName ?? 'anon'}` : 'You are not logged in'}</div>;
}
```

You can also use `useUser({ or: "redirect" })` to automatically redirect to the sign-in page if the user is not signed in.

## Server Component basics

Since `useUser()` is a stateful hook, you can't use it in Server Components. Instead, import `stackServerApp` and call `getUser()`:

```tsx
import { stackServerApp } from '@/stack/server';

export default async function MyServerComponent() {
  const user = await stackServerApp.getUser();
  return <div>{user ? `Hello, ${user.displayName ?? 'anon'}` : 'You are not logged in'}</div>;
}
```

## Protecting a page

You can protect a page in three ways:

- In Client Components with `useUser({ or: "redirect" })`
- In Server Components with `await getUser({ or: "redirect" })`
- With middleware

**Client Component:**

```tsx
'use client';

import { useUser } from '@stackframe/stack';

export default function MyProtectedClientComponent() {
  useUser({ or: 'redirect' });
  return <div>You can only see this if you are logged in</div>;
}
```

**Server Component:**

```tsx
import { stackServerApp } from '@/stack/server';

export default async function MyProtectedServerComponent() {
  await stackServerApp.getUser({ or: 'redirect' });
  return <div>You can only see this if you are logged in</div>;
}
```

**Middleware:**

```tsx
import { NextResponse } from 'next/server';
import { stackServerApp } from '@/stack/server';

export async function middleware(request) {
  const user = await stackServerApp.getUser();
  if (!user) {
    // Not signed in: send the user to the built-in sign-in page
    return NextResponse.redirect(new URL('/handler/sign-in', request.url));
  }
  return NextResponse.next();
}
```
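In Next.js, middleware runs on every route by default, so you would usually scope it with a `matcher` export. A minimal sketch (the `/protected/:path*` pattern is illustrative):

```tsx
// Only run the middleware above for routes that actually need protection.
export const config = {
  matcher: '/protected/:path*',
};
```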
## User data

You can update attributes on a user object with the `user.update()` function (if your white-labeled setup allows it):

```tsx
'use client';

import { useUser } from '@stackframe/stack';

export default function MyClientComponent() {
  const user = useUser();
  return (
    // A minimal example control; the new displayName value is illustrative
    <button onClick={async () => await user?.update({ displayName: 'New display name' })}>
      Update display name
    </button>
  );
}
```

You can also store custom user data in the `clientMetadata`, `serverMetadata`, or `clientReadonlyMetadata` fields.

## Signing out

You can sign out the user by redirecting them to `/handler/sign-out` or by calling `user.signOut()`:

```tsx
'use client';

import { useUser } from '@stackframe/stack';

export default function SignOutButton() {
  const user = useUser();
  return user ? <button onClick={() => user.signOut()}>Sign out</button> : 'Not signed in';
}
```

## Example: Custom profile page

Stack automatically creates a user profile on sign-up. Here's an example page that displays this information:

```tsx
'use client';

import { useUser, useStackApp, UserButton } from '@stackframe/stack';

export default function PageClient() {
  const user = useUser();
  const app = useStackApp();
  return (
    <div>
      {user ? (
        <div>
          <UserButton />
          <p>Welcome, {user.displayName ?? 'unnamed user'}</p>
          <p>Your e-mail: {user.primaryEmail}</p>
          <button onClick={() => user.signOut()}>Sign out</button>
        </div>
      ) : (
        <div>
          <p>You are not logged in</p>
          <button onClick={() => app.redirectToSignIn()}>Sign in</button>
        </div>
      )}
    </div>
  );
}
```

---

# Source: https://neon.com/llms/neon-auth-get-started-components-overview.txt

# Neon Auth Components

> The "Neon Auth Components" documentation outlines the various authentication components within the Neon platform, detailing their roles and interactions to facilitate secure user authentication and authorization processes.

## Source

- [Neon Auth Components HTML](https://neon.com/docs/neon-auth/get-started/components-overview): The original HTML version of this documentation

> Pre-built Next.js components to make your life easier

After setup, you can use these pre-built components to quickly add authentication features to your app. For the full documentation of all available components, see the [components reference](https://neon.com/docs/neon-auth/components/components).

## UserButton

The `UserButton` component shows the user's avatar and opens a dropdown with various user settings on click.

```tsx
import { UserButton } from '@stackframe/stack';

export default function Page() {
  return <UserButton />;
}
```

## SignIn and SignUp

These components show a sign-in and sign-up form, respectively.

```tsx
import { SignIn } from '@stackframe/stack';

export default function Page() {
  return <SignIn />;
}
```

All Neon Auth components are modular and built from smaller primitives. For example, the `SignIn` component is composed of:

- An `OAuthButtonGroup`, which itself is composed of multiple `OAuthButton` components
- A `MagicLinkSignIn`, which has a text field and calls `signInWithMagicLink()`
- A `CredentialSignIn`, which has two text fields and calls `signInWithCredential()`

You can use these components individually to build a custom sign-in experience, as in the sketch below. To change the default sign-in URL to your own, see the documentation on [custom pages](https://neon.com/docs/neon-auth/customization/custom-pages).
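A hedged sketch of that composition (the page layout is illustrative, and the `type` prop on `OAuthButtonGroup` is an assumption):

```tsx
import { OAuthButtonGroup, MagicLinkSignIn, CredentialSignIn } from '@stackframe/stack';

// Hypothetical custom sign-in page built from the primitives listed above.
export default function CustomSignIn() {
  return (
    <div>
      <h1>Sign in</h1>
      <OAuthButtonGroup type="sign-in" /> {/* `type` prop is an assumption */}
      <MagicLinkSignIn />
      <CredentialSignIn />
    </div>
  );
}
```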
## More Components

Neon Auth has many more components available. For a comprehensive list, see the [components reference](https://neon.com/docs/neon-auth/components).

---

# Source: https://neon.com/llms/neon-auth-how-it-works.txt

# How Neon Auth works

> The document "How Neon Auth works" explains the authentication mechanisms used by Neon, detailing the processes and protocols involved in securing user access to Neon's database services.

## Source

- [How Neon Auth works HTML](https://neon.com/docs/neon-auth/how-it-works): The original HTML version of this documentation

Related docs:

- [Get started](https://neon.com/docs/guides/neon-auth)
- [Tutorial](https://neon.com/docs/guides/neon-auth-demo)

Sample project:

- [Neon Auth Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app)

**Neon Auth** simplifies user management by bundling auth with your database, so your user data is always available right from Postgres. No custom integration required.

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

## How it works

When you set up Neon Auth, we create a `neon_auth` schema in your database. As users authenticate and manage their profiles in Neon Auth, you'll see them appear in your list of users on the **Auth** page.

**User data is immediately available in your database**

User data is available in the `neon_auth.users_sync` table shortly after Neon Auth processes the updates. Here's an example query to inspect the synchronized data:

```sql
SELECT * FROM neon_auth.users_sync;
```

| id | name | email | created_at | updated_at | deleted_at | raw_json |
| ----------- | ------------- | ----------------- | ------------------- | ------------------- | ---------- | ------------------------------ |
| d37b6a30... | Jordan Rivera | jordan@company.co | 2025-05-09 16:15:00 | null | null | `{"id": "d37b6a30...", ...}` |
| 51e491df... | Sam Patel | sam@startup.dev | 2025-02-27 18:36:00 | 2025-02-27 18:36:00 | null | `{"id": "51e491df...", ...}` |

The following columns are included in the `neon_auth.users_sync` table:

- `raw_json`: Complete user profile as JSON
- `id`: The unique ID of the user
- `name`: The user's display name
- `email`: The user's primary email
- `created_at`: When the user signed up
- `deleted_at`: When the user was deleted, if applicable (nullable)
- `updated_at`: When the user was last updated, if applicable (nullable)

Updates to user profiles in Neon Auth are automatically reflected in your database.

**Note**: Do not try to change the `neon_auth.users_sync` table name. It's needed for the synchronization process to work correctly.

Let's take a look at how Neon Auth simplifies database operations in a typical todos application, specifically when associating todos with users.

## Before Neon Auth

Without Neon Auth, you would typically need to:

1. Create and manage your own `users` table to store user information in your database.
2. Implement synchronization logic to keep this `users` table in sync with your authentication provider. This includes handling user creation and, crucially, user updates and deletions.
3. Create a `todos` table that references your `users` table using a foreign key.

Here's how you would structure your database and perform insert operations _without_ Neon Auth:

### 1. Create a `users` table:

```sql
CREATE TABLE users (
  id TEXT PRIMARY KEY, -- User ID from your auth provider (TEXT type)
  email VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(255),
  -- ... other user fields
  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMPTZ
);
```

### 2. Insert a user into the `users` table:

To insert this user into your database when a new user is created in your auth provider, you might set up a webhook endpoint. Here's an example of a simplified webhook handler that would receive a `user.created` event from your auth provider and insert the user into your `users` table:

```typescript
// Webhook handler to insert a user into the 'users' table for a 'user.created' event
import { db } from '@/db';

export async function POST(request: Request) {
  await checkIfRequestIsFromAuthProvider(request); // Validate request authenticity using headers, etc.
  const payload = await request.json(); // Auth provider webhook payload

  // Extract user data from the webhook payload
  const userId = payload.user_id;
  const email = payload.email_address;
  const name = payload.name;

  try {
    await db.query(`INSERT INTO users (id, email, name) VALUES ($1, $2, $3)`, [userId, email, name]);
    return new Response('User added successfully', { status: 200 });
  } catch (error) {
    console.error('Database error inserting user:', error);
    // Retry logic, error handling, etc. as needed
    // Send notification to on-call team, etc. to check why the insert operation failed
    return new Response('Error inserting user into database', { status: 500 });
  }
}
```

**Note**:

- This code snippet only handles the `user.created` event. To achieve complete synchronization, you would need to write separate webhook handlers for `user.updated`, `user.deleted`, and potentially other event types. Each handler adds complexity and requires careful error handling, security considerations, and ongoing maintenance.
- The provided webhook example is a simplified illustration, and a production-ready solution would necessitate more robust error handling, security measures, and potentially queueing mechanisms to ensure reliable synchronization.

### 3. Create a `todos` table with a foreign key to the `users` table:

```sql
CREATE TABLE todos (
  id SERIAL PRIMARY KEY,
  task TEXT NOT NULL,
  user_id TEXT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
```

### 4. Insert a todo, referencing the `users` table:

```sql
INSERT INTO todos (task, user_id) VALUES ('Buy groceries', 'user-id-123');
```

## After Neon Auth

With Neon Auth, Neon automatically creates and manages the `neon_auth.users_sync` table. User profiles are stored automatically in your database, so you can directly rely on this table for up-to-date user data, simplifying your database operations.

Here's how you would structure your `todos` table and perform insert operations _with_ Neon Auth:

### Users table

The `neon_auth.users_sync` table is automatically created and kept in sync by Neon Auth (no action needed from you) and is available for direct use in your schema and queries. Here is the table structure as discussed above:

```sql
-- schema of the neon_auth.users_sync table (automatically created by Neon Auth)
id TEXT PRIMARY KEY,
raw_json JSONB,
name TEXT,
email TEXT,
created_at TIMESTAMPTZ,
deleted_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ
```

#### 1. Create a `todos` table with a foreign key to the `neon_auth.users_sync` table:

```sql
CREATE TABLE todos (
  id SERIAL PRIMARY KEY,
  task TEXT NOT NULL,
  user_id TEXT NOT NULL REFERENCES neon_auth.users_sync(id) ON DELETE CASCADE,
  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
```

#### 2. Insert a todo, referencing the `neon_auth.users_sync` table:

```sql
INSERT INTO todos (task, user_id) VALUES ('Buy groceries', 'user-id-123');
```

---

# Source: https://neon.com/llms/neon-auth-overview.txt

# Neon Auth

> The "Neon Auth Overview" document outlines the authentication mechanisms and configurations available in Neon, detailing how users can securely manage access and permissions within the platform.

## Source

- [Neon Auth HTML](https://neon.com/docs/neon-auth/overview): The original HTML version of this documentation

Neon Auth brings authentication and user management natively to your Neon Postgres database.

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

**Tip** AI Rules available: Working with AI coding assistants? Check out our [AI rules for Neon Auth](https://neon.com/docs/ai/ai-rules-neon-auth) to help your AI assistant generate better code for implementing authentication with Neon.

## Why Neon Auth?

Neon Auth helps you move faster by handling the auth stack for you:

- **Add auth to your app in minutes** — SDKs for Next.js and React
- **No more custom sync code** — user profiles are always up-to-date in your database, ready for SQL joins and analytics
- **Built-in support for teams, roles, and permissions**
## Get started

- [Next.js Quickstart](https://neon.com/docs/neon-auth/quick-start/nextjs): Quickstart for Next.js
- [React Quickstart](https://neon.com/docs/neon-auth/quick-start/react): Quickstart for React
- [JavaScript Quickstart](https://neon.com/docs/neon-auth/quick-start/javascript): Quickstart for JavaScript

## Explore Neon Auth

- [How it Works](https://neon.com/docs/guides/neon-auth-how-it-works): How Neon Auth keeps your user data in sync
- [Demo & Tutorial](https://neon.com/docs/neon-auth/demo): See Neon Auth in action
- [Best Practices](https://neon.com/docs/neon-auth/best-practices): Tips, patterns, and troubleshooting

## Build with Neon Auth

- [Components](https://neon.com/docs/neon-auth/components/components): Components for building with Neon Auth
- [Next.js SDK](https://neon.com/docs/neon-auth/sdk/nextjs/overview): Next.js SDK and API reference
- [React SDK](https://neon.com/docs/neon-auth/sdk/react/overview): React SDK and API reference

## Templates & Demo Apps

- [Next.js Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app): Explore the open-source Next.js demo app
- [React Template](https://github.com/neondatabase-labs/neon-auth-react-template): Starter template for React + Neon Auth
- [Vanilla TS Template](https://github.com/neondatabase-labs/neon-auth-ts-template): Vanilla TypeScript + Neon Auth template

---

# Source: https://neon.com/llms/neon-auth-permissions-roles.txt

# Permissions overview

> The "Permissions & roles in Neon Auth" document outlines the roles and permissions framework within Neon, detailing how to manage user access and security settings effectively.

## Source

- [Permissions overview HTML](https://neon.com/docs/neon-auth/permissions-roles): The original HTML version of this documentation

Neon Auth has two different permission systems:

- [Project permissions](https://neon.com/docs/neon-auth/permissions-roles#project-permissions) for managing Neon Auth as a feature in your Neon projects
- [App/user permissions](https://neon.com/docs/neon-auth/permissions-roles#appuser-permissions) for controlling what your app's users can do within your application

## Project permissions

**Who can add and manage Neon Auth in your Neon project**

These permissions control who can configure Neon Auth itself within your Neon organization. They're based on your Neon organization roles (Admin, Member, Collaborator).

**What they control:**

- Adding or removing Neon Auth from your project
- Claiming ownership of the auth provider project (ejecting the project to Stack Auth)
- Generating SDK keys for your application
- Creating users from the Neon Auth UI

### Permission Matrix

| Action | Admin | Member | Collaborator |
| ----------------- | :---: | :----: | :----------: |
| Install Neon Auth | ✅ | ❌ | ❌ |
| Remove Neon Auth | ✅ | ❌ | ❌ |
| Claim project | ✅ | ❌ | ❌ |
| Generate SDK Keys | ✅ | ❌ | ❌ |
| Create users | ✅ | ✅ | ✅ |

### In a nutshell

- **Admins** can perform all Neon Auth operations, including installation, configuration, and user management
- **Members** can create users but cannot modify Neon Auth settings
- **Collaborators** can create users but cannot modify Neon Auth settings

For more information about organization roles and permissions, see [User roles and permissions](https://neon.com/docs/manage/organizations#user-roles-and-permissions).

## App/user permissions

**What your app's users can do within your application**

These permissions control what your application's end users can do once they're authenticated.
They're managed through your application code using Neon Auth's RBAC system.

**What they control:**

- Team-based permissions (e.g., "moderator", "read_secret_info")
- Global project permissions (e.g., "premium_access", "admin_dashboard")
- Hierarchical permission structures
- Server-side permission checks

For detailed information about implementing role-based access control for your application's users, see [App/User RBAC Permissions](https://neon.com/docs/neon-auth/concepts/permissions).

---

# Source: https://neon.com/llms/neon-auth-quick-start-drizzle.txt

# Using Neon Auth with Drizzle ORM

> The document outlines the process for integrating Neon Auth with Drizzle ORM, detailing configuration steps and code examples to enable authentication within a Neon database environment.

## Source

- [Using Neon Auth with Drizzle ORM HTML](https://neon.com/docs/neon-auth/quick-start/drizzle): The original HTML version of this documentation

Neon Auth simplifies user management by automatically synchronizing user data into a `neon_auth.users_sync` table within your Neon Postgres database. This powerful feature allows you to treat user profiles as regular database rows, enabling you to create foreign key relationships, perform SQL joins, and apply row-level security (RLS) policies directly against your user data.

[Drizzle ORM](https://orm.drizzle.team/) provides first-class support for Neon Auth through a dedicated helper function, making it easy to integrate the `users_sync` table into your application's schema without manual configuration or schema introspection. This guide explains how to use the `usersSync` helper from Drizzle to connect your application's tables to user data.

## The `usersSync` helper

Instead of defining the schema for the `neon_auth.users_sync` table manually, you can import the `usersSync` helper directly from the `drizzle-orm/neon` package. This helper provides a complete, type-safe schema definition for the table.

To use it, simply import it into your schema file:

```typescript
import { usersSync } from 'drizzle-orm/neon';
```

## `users_sync` table schema

The `usersSync` helper exposes the following columns, which are automatically populated by Neon Auth:

| Column | Type | Description |
| :---------- | :------------------------- | :--------------------------------------------------------------- |
| `id` | `text` (Primary Key) | The unique identifier for the user. |
| `name` | `text` (nullable) | The user's full name. |
| `email` | `text` (nullable) | The user's primary email address. |
| `rawJson` | `jsonb` | The complete user object from the auth provider, in JSON format. |
| `createdAt` | `timestamp with time zone` | The timestamp when the user was created. |
| `updatedAt` | `timestamp with time zone` | The timestamp when the user was last updated. |
| `deletedAt` | `timestamp with time zone` | The timestamp when the user was deleted (soft delete). |

## Creating a foreign key relationship

The most common use case for the `usersSync` helper is to establish a foreign key relationship between your application's tables and the user data. This ensures data integrity and allows you to easily associate data with the user who owns it. Let's consider a simple `todos` table where each todo item must belong to a user.

### Define your application schema

In your Drizzle schema file (e.g., `app/db/schema.ts`), define your `todos` table and use the `usersSync` helper to create a reference to the user's `id`.
```typescript
import { pgTable, text, timestamp, bigint, boolean } from 'drizzle-orm/pg-core';
import { usersSync } from 'drizzle-orm/neon';

// Define a `todos` table that links to the `users_sync` table
export const todos = pgTable('todos', {
  id: bigint('id', { mode: 'bigint' }).primaryKey().generatedByDefaultAsIdentity(),
  task: text('task').notNull(),
  isComplete: boolean('is_complete').notNull().default(false),
  insertedAt: timestamp('inserted_at', { withTimezone: true }).defaultNow().notNull(),
  // Create a foreign key to the `users_sync` table
  ownerId: text('owner_id')
    .notNull()
    .references(() => usersSync.id),
});
```

### Understand the relationship

The key part of the schema above is the `ownerId` column:

```typescript
ownerId: text('owner_id')
  .notNull()
  .references(() => usersSync.id),
```

This code does the following:

- Creates a column named `owner_id` of type `text`.
- Ensures the column cannot be null (`.notNull()`).
- Establishes a foreign key constraint that references the `id` column in the `neon_auth.users_sync` table, which is represented by `usersSync.id`.

With this relationship in place, your database will enforce that every todo must be associated with a valid user.

### Querying with joins

Because `users_sync` is a real database table, you can now perform standard SQL `JOIN` operations to fetch user data alongside your application data in a single, efficient query.

For example, you can retrieve all todos along with the email address of the user who owns them:

```typescript
import { db } from '@/app/db';
import { todos } from '@/app/db/schema';
import { usersSync } from 'drizzle-orm/neon';
import { eq } from 'drizzle-orm';

export async function getTodosWithOwners() {
  const results = await db
    .select({
      task: todos.task,
      isComplete: todos.isComplete,
      ownerEmail: usersSync.email,
    })
    .from(todos)
    .leftJoin(usersSync, eq(todos.ownerId, usersSync.id));
  return results;
}
```

## Summary

By using Drizzle ORM's `usersSync` helper, you can seamlessly integrate Neon Auth user data into your application's database schema. This enables you to build powerful, data-consistent features by leveraging standard SQL capabilities like foreign keys and joins, all without writing complex data synchronization logic.
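To round out the example, here is a hedged sketch of the write path: inserting a todo tied to a synced user, using the `todos` schema defined above. The `addTodo` helper name is illustrative:

```typescript
import { db } from '@/app/db';
import { todos } from '@/app/db/schema';

// Hypothetical helper: create a todo owned by a user from neon_auth.users_sync.
export async function addTodo(ownerId: string, task: string) {
  // The foreign key on ownerId means the insert fails if the user doesn't exist.
  await db.insert(todos).values({ task, ownerId });
}
```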
---

# Source: https://neon.com/llms/neon-auth-quick-start-javascript.txt

# Neon Auth for JavaScript

> The "Neon Auth for JavaScript" documentation outlines the steps to integrate Neon's authentication system into JavaScript applications, detailing setup, configuration, and usage for seamless user authentication.

## Source

- [Neon Auth for JavaScript HTML](https://neon.com/docs/neon-auth/quick-start/javascript): The original HTML version of this documentation

Other frameworks:

- [Neon Auth for Next.js](https://neon.com/docs/neon-auth/quick-start/nextjs)
- [Neon Auth for React](https://neon.com/docs/neon-auth/quick-start/react)

Sample project:

- [Vanilla TS Template](https://github.com/neondatabase-labs/neon-auth-ts-template)

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

Neon Auth lets you add authentication to your app in seconds — user data is synced directly to your Neon Postgres database, so you can query and join it just like any other table.

## Add Neon Auth to a project

Go to [pg.new](https://pg.new) to create a new Neon project. Once your project is ready, open your project's **Auth** page and click **Enable Neon Auth** to get started.

## Get your Neon Auth keys

On the **Configuration** tab, select your framework to get the **Environment variables** you need to integrate Neon Auth and connect to your database. You can use these keys right away to get started, or [skip ahead](https://neon.com/docs/neon-auth/quick-start/javascript#create-users-in-the-console-optional) to try out **user creation** in the Neon Console.

```bash
# Neon Auth environment variables for JavaScript/Node
STACK_PROJECT_ID=YOUR_NEON_AUTH_PROJECT_ID
STACK_PUBLISHABLE_CLIENT_KEY=YOUR_NEON_AUTH_PUBLISHABLE_KEY
STACK_SECRET_SERVER_KEY=YOUR_NEON_AUTH_SECRET_KEY

# Your Neon connection string
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

**Note** Are you a Vercel user?: If you're using the [Vercel-Managed Integration](https://vercel.com/marketplace/neon), the integration automatically sets these environment variables for you in Vercel when you connect a Vercel project to a Neon database. [Learn more](https://neon.com/docs/guides/vercel-managed-integration#environment-variables-set-by-the-integration).

## Set up your app

**Clone our template** for the fastest way to see Neon Auth in action.

```bash
git clone https://github.com/neondatabase-labs/neon-auth-ts-template.git
```

Or **add Neon Auth** to an existing project.

#### Install the JavaScript SDK

```bash
npm install @stackframe/js
```

#### Use your environment variables

Paste the Neon Auth environment variables from [Step 2](https://neon.com/docs/neon-auth/quick-start/javascript#get-your-neon-auth-keys) into your `.env` or `.env.local` file.

## Configure Neon Auth client

```js
// src/stack/server.js
import { StackServerApp } from '@stackframe/js';

export const stackServerApp = new StackServerApp({
  projectId: process.env.STACK_PROJECT_ID,
  publishableClientKey: process.env.STACK_PUBLISHABLE_CLIENT_KEY,
  secretServerKey: process.env.STACK_SECRET_SERVER_KEY,
  tokenStore: 'memory',
});
```

## Test your integration

1. Create a test user in the Console (see [Step 4](https://neon.com/docs/neon-auth/quick-start/javascript#create-users-in-the-console-optional)) and copy its ID.
2. Create `src/test.ts`:

```ts
import 'dotenv/config';
import { stackServerApp } from './stack/server.js';

async function main() {
  const user = await stackServerApp.getUser('YOUR_USER_ID_HERE');
  console.log(user);
}

main().catch(console.error);
```

3. Run your test script however you like:

```bash
# if you have a dev/test script in package.json
npm run dev

# or directly:
npx dotenv -e .env.local -- tsx src/test.ts
```

You should see your test user's record printed in the console.

## Create users in the Console (optional)

You can create test users directly from the Neon Console — no app integration required. This is useful for development or testing. Now you can [see your users in the database](https://neon.com/docs/neon-auth/quick-start/javascript#see-your-users-in-the-database).

## See your users in the database

As users sign up or log in — through your app or by creating test users in the Console — their profiles are synced to your Neon database in the `neon_auth.users_sync` table.

Query your users table in the SQL Editor to see your new user:

```sql
SELECT * FROM neon_auth.users_sync;
```

| id | name | email | created_at | updated_at | deleted_at | raw_json |
| ----------- | --------- | --------------- | ------------------- | ------------------- | ---------- | ---------------------------- |
| 51e491df... | Sam Patel | sam@startup.dev | 2025-02-12 19:43... | 2025-02-12 19:46... | null | `{"id": "51e491df...", ...}` |
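You can read the same rows from your own code. A minimal sketch using the [Neon serverless driver](https://github.com/neondatabase/serverless) (`npm install @neondatabase/serverless`), assuming `DATABASE_URL` is set as above:

```typescript
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL!);

// Fetch the synced users, skipping soft-deleted ones.
export async function getActiveUsers() {
  return sql`SELECT id, name, email FROM neon_auth.users_sync WHERE deleted_at IS NULL`;
}
```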
## Next Steps

Want to learn more or go deeper?

- [How Neon Auth works](https://neon.com/docs/guides/neon-auth-how-it-works) — See a before and after showing the benefits of having your user data right in your database
- [Neon Auth tutorial](https://neon.com/docs/guides/neon-auth-demo) — Walk through our demo app for more examples of how Neon Auth can simplify your code
- [Best Practices & FAQ](https://neon.com/docs/guides/neon-auth-best-practices) — Tips, patterns, and troubleshooting
- [Neon Auth API Reference](https://neon.com/docs/guides/neon-auth-api) — Automate and manage Neon Auth via the API

---

# Source: https://neon.com/llms/neon-auth-quick-start-nextjs.txt

# Neon Auth for Next.js

> The document "Neon Auth for Next.js" provides a quick-start guide for integrating Neon authentication into Next.js applications, detailing setup instructions and code examples specific to Neon's authentication services.

## Source

- [Neon Auth for Next.js HTML](https://neon.com/docs/neon-auth/quick-start/nextjs): The original HTML version of this documentation

Other frameworks:

- [Neon Auth for React](https://neon.com/docs/neon-auth/quick-start/react)
- [Neon Auth for JavaScript](https://neon.com/docs/neon-auth/quick-start/javascript)

Sample project:

- [Next.js Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app)

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

Neon Auth lets you add authentication to your app in seconds — user data is synced directly to your Neon Postgres database, so you can query and join it just like any other table.

## Add Neon Auth to a project

Go to [pg.new](https://pg.new) to create a new Neon project. Once your project is ready, open your project's **Auth** page and click **Enable Neon Auth** to get started.

## Get your Neon Auth keys

On the **Configuration** tab, select your framework to get the **Environment variables** you need to integrate Neon Auth and connect to your database. You can use these keys right away to get started, or [skip ahead](https://neon.com/docs/neon-auth/quick-start/nextjs#create-users-in-the-console-optional) to try out **user creation** in the Neon Console.

```bash
# Neon Auth environment variables for Next.js
NEXT_PUBLIC_STACK_PROJECT_ID=YOUR_NEON_AUTH_PROJECT_ID
NEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY=YOUR_NEON_AUTH_PUBLISHABLE_KEY
STACK_SECRET_SERVER_KEY=YOUR_NEON_AUTH_SECRET_KEY

# Your Neon connection string
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

**Note** Are you a Vercel user?: If you're using the [Vercel-Managed Integration](https://vercel.com/marketplace/neon), the integration automatically sets these environment variables for you in Vercel when you connect a Vercel project to a Neon database. [Learn more](https://neon.com/docs/guides/vercel-managed-integration#environment-variables-set-by-the-integration).

## Set up your app

**Clone our template** for the fastest way to see Neon Auth in action.

```bash
git clone https://github.com/neondatabase-labs/neon-auth-nextjs-template.git
```

Or **add Neon Auth** to an existing project.
#### Run the setup wizard

```bash
npx @stackframe/init-stack@latest --no-browser
```

This sets up auth routes, layout wrappers, and handlers automatically for Next.js (App Router).

#### Use your environment variables

Paste the Neon Auth environment variables from [Step 2](https://neon.com/docs/neon-auth/quick-start/nextjs#get-your-neon-auth-keys) into your `.env.local` file. Then run `npm run dev` to start your dev server.

#### Test your integration

Go to [http://localhost:3000/handler/sign-up](http://localhost:3000/handler/sign-up) in your browser. Create a user or two, and you can see them [show up immediately](https://neon.com/docs/neon-auth/quick-start/nextjs#see-your-users-in-the-database) in your database.

## Create users in the Console (optional)

You can create test users directly from the Neon Console — no app integration required. This is useful for development or testing. Now you can [see your users in the database](https://neon.com/docs/neon-auth/quick-start/nextjs#see-your-users-in-the-database).

## See your users in the database

As users sign up or log in — through your app or by creating test users in the Console — their profiles are synced to your Neon database in the `neon_auth.users_sync` table.

Query your users table in the SQL Editor to see your new user:

```sql
SELECT * FROM neon_auth.users_sync;
```

| id | name | email | created_at | updated_at | deleted_at | raw_json |
| ----------- | --------- | --------------- | ------------------- | ------------------- | ---------- | ---------------------------- |
| 51e491df... | Sam Patel | sam@startup.dev | 2025-02-12 19:43... | 2025-02-12 19:46... | null | `{"id": "51e491df...", ...}` |

## Next Steps

Want to learn more or go deeper?

- [How Neon Auth works](https://neon.com/docs/guides/neon-auth-how-it-works) — See a before and after showing the benefits of having your user data right in your database
- [Neon Auth tutorial](https://neon.com/docs/guides/neon-auth-demo) — Walk through our demo app for more examples of how Neon Auth can simplify your code
- [Best Practices & FAQ](https://neon.com/docs/guides/neon-auth-best-practices) — Tips, patterns, and troubleshooting
- [Neon Auth API Reference](https://neon.com/docs/guides/neon-auth-api) — Automate and manage Neon Auth via the API

---

# Source: https://neon.com/llms/neon-auth-quick-start-react.txt

# Neon Auth for React

> The "Neon Auth for React" documentation guides users on integrating Neon's authentication features into React applications, detailing setup, configuration, and implementation steps for seamless user authentication.

## Source

- [Neon Auth for React HTML](https://neon.com/docs/neon-auth/quick-start/react): The original HTML version of this documentation

Other frameworks:

- [Neon Auth for Next.js](https://neon.com/docs/neon-auth/quick-start/nextjs)
- [Neon Auth for JavaScript](https://neon.com/docs/neon-auth/quick-start/javascript)

Sample project:

- [React Template](https://github.com/neondatabase-labs/neon-auth-react-template)

**Note** Beta: **Neon Auth** is in beta and ready to use. We're actively improving it based on feedback from developers like you. Share your experience in our [Discord](https://discord.gg/92vNTzKDGp) or via the [Neon Console](https://console.neon.tech/app/projects?modal=feedback).

Neon Auth lets you add authentication to your app in seconds — user data is synced directly to your Neon Postgres database, so you can query and join it just like any other table.
## Add Neon Auth to a project

Go to [pg.new](https://pg.new) to create a new Neon project. Once your project is ready, open your project's **Auth** page and click **Enable Neon Auth** to get started.

## Get your Neon Auth keys

On the **Configuration** tab, select your framework to get the **Environment variables** you need to integrate Neon Auth and connect to your database. You can use these keys right away to get started, or [skip ahead](https://neon.com/docs/neon-auth/quick-start/react#create-users-in-the-console-optional) to try out **user creation** in the Neon Console.

```bash
# Neon Auth environment variables for React (Vite)
VITE_STACK_PROJECT_ID=YOUR_NEON_AUTH_PROJECT_ID
VITE_STACK_PUBLISHABLE_CLIENT_KEY=YOUR_NEON_AUTH_PUBLISHABLE_KEY
STACK_SECRET_SERVER_KEY=YOUR_NEON_AUTH_SECRET_KEY

# Your Neon connection string
DATABASE_URL=YOUR_NEON_CONNECTION_STRING
```

**Note** Are you a Vercel user?: If you're using the [Vercel-Managed Integration](https://vercel.com/marketplace/neon), the integration automatically sets these environment variables for you in Vercel when you connect a Vercel project to a Neon database. [Learn more](https://neon.com/docs/guides/vercel-managed-integration#environment-variables-set-by-the-integration).

## Set up your app

**Clone our template** for the fastest way to see Neon Auth in action.

```bash
git clone https://github.com/neondatabase-labs/neon-auth-react-template.git
```

Or **add Neon Auth** to an existing project.

### Install the React SDK

Make sure you have a [React project](https://react.dev/learn/creating-a-react-app) set up. We show an example here of a Vite React project with React Router.

```bash
npm install @stackframe/react
```

### Use your environment variables

Paste the Neon Auth environment variables from the [Get your Neon Auth keys](https://neon.com/docs/neon-auth/quick-start/react#get-your-neon-auth-keys) section into your `.env.local` file.

## Configure Neon Auth client

A basic example of how to set up the Neon Auth client in `stack.ts` in your `src` directory:

```tsx
import { StackClientApp } from '@stackframe/react';
import { useNavigate } from 'react-router-dom';

export const stackClientApp = new StackClientApp({
  projectId: import.meta.env.VITE_STACK_PROJECT_ID,
  publishableClientKey: import.meta.env.VITE_STACK_PUBLISHABLE_CLIENT_KEY,
  tokenStore: 'cookie',
  redirectMethod: { useNavigate },
});
```

## Update your app to use the provider and handler

In your `src/App.tsx`:

```tsx
import { StackHandler, StackProvider, StackTheme } from '@stackframe/react';
import { Suspense } from 'react';
import { BrowserRouter, Route, Routes, useLocation } from 'react-router-dom';
import { stackClientApp } from './stack';

function HandlerRoutes() {
  const location = useLocation();
  // StackHandler renders the built-in auth pages (sign-in, sign-up, etc.)
  return <StackHandler app={stackClientApp} location={location.pathname} fullPage />;
}

export default function App() {
  return (
    <Suspense fallback={null}>
      <BrowserRouter>
        <StackProvider app={stackClientApp}>
          <StackTheme>
            <Routes>
              <Route path="/handler/*" element={<HandlerRoutes />} />
              {/* Add your app's own routes here */}
            </Routes>
          </StackTheme>
        </StackProvider>
      </BrowserRouter>
    </Suspense>
  );
}
```

---

# Source: https://neon.com/llms/neon-auth-tutorial.txt

# Neon Auth concepts

> The "Neon Auth concepts" document explains the authentication mechanisms and configurations within the Neon database platform, detailing how users can securely manage access and permissions.
## Source

- [Neon Auth concepts HTML](https://neon.com/docs/neon-auth/tutorial): The original HTML version of this documentation

Related docs:

- [About Neon Auth](https://neon.com/docs/guides/neon-auth)
- [Manage Neon Auth using the API](https://neon.com/docs/guides/neon-auth-api)

Sample project:

- [Neon Auth Demo App](https://github.com/neondatabase-labs/neon-auth-demo-app)

Modern application development is becoming increasingly reliant on third-party authentication providers like [Clerk](https://clerk.com), [Stack Auth](https://stack-auth.com), etc., to handle secure user management. While these platforms excel at streamlining login workflows and protecting sensitive data, developers frequently encounter a hidden challenge: maintaining parity between external identity records and their application's database. Profile updates, role changes, and user deletions in your authentication service don't automatically reflect in your application's data layer.

Today, developers typically address this gap through several approaches:

- **Webhooks**: Many providers offer real-time event notifications (e.g., `user.updated`) to trigger immediate updates in your system.
- **Polling**: Periodically querying the auth provider's API checks for changes, but this approach introduces latency and risks hitting rate limits.
- **Login-time sync**: Fetching fresh profile data during authentication ensures accuracy for active users at the expense of increased latency, while also leaving stale data for inactive accounts.

While these methods partially mitigate the problem, they often require writing custom synchronization scripts, implementing brittle listeners, and manually reconciling data discrepancies – turning a theoretical time-saver into an ongoing maintenance burden.

Neon Auth offers a streamlined solution to this common challenge. Instead of grappling with complex synchronization methods, Neon Auth automatically synchronizes user profiles directly to your Neon Postgres database. This eliminates the need for manual updates, ensuring accurate, real-time data. You gain the benefits of efficient, automated user data management while retaining complete control over your core application information.

## A typical user data synchronization scenario

To illustrate the benefits of Neon Auth, let's consider a common scenario where you need to synchronize user data between your authentication provider and your application's database.

### Scenario overview

_This scenario uses Clerk as an example of a typical third-party auth provider. With Neon Auth, you don't need to worry about manual sync or provider integration — Neon Auth handles it for you._

You are building a social media platform where users can create profiles, post content, and interact with others. You use Clerk as your authentication provider to handle user registration, login, and password management. Your application's database stores user profiles, posts, comments, and other social data.

### Data synchronization requirements

- **User profiles**: When a user registers or updates their profile on Clerk, you need to synchronize their profile data to your application's database. This includes user ID, name, email, profile picture, and other relevant information.
- **User deletion**: If a user deletes their account, you must remove their profile and associated data from your application's database.
- **Data consistency**: Ensure that user data in your application's database remains consistent with the latest information from Clerk.
Any changes to user profiles should reflect immediately in your database.

### Challenges with manual synchronization

Without Neon Auth, you would typically address these requirements using manual synchronization methods like webhooks, polling, or login-time sync. However, these approaches introduce several challenges:

- **Infrastructure and maintenance burden**: Setting up and maintaining a robust synchronization system manually involves significant infrastructure overhead. This includes configuring secure webhook endpoints, managing job queues for retries and background processing, and deploying worker processes – all adding to operational complexity. Consider the example of a webhook handler, demonstrating just a fraction of the code needed for basic user synchronization and validation:

```typescript
// Webhook handler for a `user.created` event
import { WebhookEvent, UserJSON } from '@clerk/nextjs/server';
import { headers } from 'next/headers';
import { Webhook } from 'svix';
import { db } from '@/app/db/server';
import { User, users } from '@/app/schema';

const webhookSecret = process.env.CLERK_WEBHOOK_SECRET || '';

async function validateRequest(request: Request) {
  const payloadString = await request.text();
  const headerPayload = await headers();
  const svixHeaders = {
    'svix-id': headerPayload.get('svix-id')!,
    'svix-timestamp': headerPayload.get('svix-timestamp')!,
    'svix-signature': headerPayload.get('svix-signature')!,
  };
  const wh = new Webhook(webhookSecret);
  return wh.verify(payloadString, svixHeaders) as WebhookEvent;
}

export async function POST(request: Request) {
  const payload = await validateRequest(request);
  const payloadData = payload.data as UserJSON;
  const user = {
    userId: payload.data.id,
    name: `${payloadData.first_name} ${payloadData.last_name}`,
    email: payloadData.email_addresses[0].email_address,
  } as User;
  await db.insert(users).values(user);
  return Response.json({ message: 'User added' });
}
```

**Important** Complexity multiplies with event types: Crucially, this code only handles a single event: `user.created`. _To achieve complete synchronization, you would need to write separate webhook handlers for `user.updated`, `user.deleted`, and potentially other event types_ (like role changes, email updates, profile changes, etc.), depending on your application's needs and the capabilities of your auth provider. Each new webhook handler multiplies the complexity of your synchronization system and introduces more potential points of failure. This quickly becomes a brittle and unwieldy system where, inevitably, **everything that is bound to fail will fail.**

- **Development overhead**: Building custom synchronization logic requires significant development effort. You need to write code for event parsing, data mapping, database updates, and complex error handling. Polling and login-time sync, while alternatives, introduce their own complexities in terms of rate limit management, latency, and data consistency.
- **Query inefficiency**: Without synchronized data, applications often resort to fetching user data from the auth provider API at runtime, leading to increased latency and complex queries. This dependency on external APIs can impact performance and reliability.
- **Data inconsistency risks**: Manual synchronization methods are inherently prone to inconsistencies. Webhook failures, polling delays, or errors in custom logic can lead to your database containing stale or inaccurate user data, potentially causing application errors and data integrity issues.
## Streamlining user data sync

Neon Auth offers a streamlined solution to these challenges by automating user data synchronization. Let's examine how Neon Auth simplifies the process and eliminates the complexities associated with manual methods.

### Simplified architecture

Neon Auth introduces a simplified architecture that removes the need for webhooks, polling, and custom synchronization scripts. Neon Auth acts as an intermediary layer that automatically synchronizes user data to your Neon Postgres database.

With Neon Auth, the architecture is significantly cleaner and more efficient:

- **Automated synchronization**: Neon Auth handles the entire synchronization process automatically in the background. You no longer need to set up and maintain complex synchronization logic.
- **No webhooks or polling**: No need to develop and maintain webhook endpoints for different user events (e.g., `user.created`, `user.updated`, `user.deleted`). Neon Auth automatically syncs user data changes to your database without requiring external triggers.
- **Direct database access**: Your application can directly query user data from the `neon_auth.users_sync` table in your Neon Postgres database. This simplifies data access and improves query performance.
- **Error handling and retries**: Neon Auth includes built-in error handling and retry mechanisms to ensure data consistency without requiring custom code.

### Enhanced data consistency

Neon Auth ensures enhanced data consistency by providing a reliable and automated synchronization mechanism. Neon Auth continuously monitors for user data changes and automatically synchronizes these changes to your Neon Postgres database.

## Get started with Neon Auth

To get started, see [About Neon Auth](https://neon.com/docs/guides/neon-auth).

## Accessing synchronized user data

With Neon Auth, accessing synchronized user data becomes straightforward. You can directly query the `neon_auth.users_sync` table within your Neon Postgres database. Neon Auth automatically creates and manages this table, populating it with user data from your connected authentication provider.

The table schema includes the following columns:

- `id`: The unique user ID from your authentication provider.
- `name`: The user's display name.
- `email`: The user's email address.
- `created_at`: Timestamp of user creation in the auth provider.
- `updated_at`: Timestamp of the last user profile update.
- `deleted_at`: Timestamp indicating user deletion (if applicable).
- `raw_json`: A JSON column containing the full raw user data received from the authentication provider.

You can query this table using standard SQL, just like any other table in your Postgres database.

**Example query:** To retrieve all user information, you can use a simple `SELECT` statement:

```sql
SELECT * FROM neon_auth.users_sync;
```

This query will return a result set similar to the example below:

| id          | name          | email             | created_at          | updated_at          | deleted_at | raw_json                     |
| ----------- | ------------- | ----------------- | ------------------- | ------------------- | ---------- | ---------------------------- |
| d37b6a30... | Jordan Rivera | jordan@company.co | 2025-02-12 19:44... | null                | null       | \{"id": "d37b6a30...", ...\} |
| 0153cc96... | Alex Kumar    | alex@acme.com     | 2025-02-12 19:44... | null                | null       | \{"id": "0153cc96...", ...\} |
| 51e491df... | Sam Patel     | sam@startup.dev   | 2025-02-12 19:43... | 2025-02-12 19:46... | null       | \{"id": "51e491df...", ...\} |

**Efficient queries with JOINs**: You can easily join user data with other application tables to build complex queries. For example, to retrieve posts along with the author's name, you can use a simple `JOIN` statement:

```sql
SELECT posts.*, neon_auth.users_sync.name AS author_name
FROM posts
JOIN neon_auth.users_sync ON posts.author_id = neon_auth.users_sync.id;
```

## Conclusion

Neon Auth streamlines user data synchronization, replacing cumbersome manual methods with an automated, efficient solution. This simplifies development, accelerates query performance, ensures data consistency, and minimizes infrastructure costs. By leveraging Neon Auth, you can focus on building your application's core features while leaving the complexities of user data management to Neon.

---

# Source: https://neon.com/llms/reference-api-reference.txt

# Neon API

> The Neon API documentation outlines the endpoints and methods for interacting programmatically with Neon databases, enabling users to manage database instances, configurations, and operations through HTTP requests.

## Source

- [Neon API HTML](https://neon.com/docs/reference/api-reference): The original HTML version of this documentation

The Neon API allows you to manage your Neon projects programmatically. Refer to the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) for supported methods.

The Neon API is a REST API. It provides resource-oriented URLs, accepts form-encoded request bodies, returns JSON-encoded responses, and supports standard HTTP response codes, authentication, and verbs.

**Tip** AI Rules available: Working with AI coding assistants? Check out our [AI rules for the Neon API](https://neon.com/docs/ai/ai-rules-neon-api) to help your AI assistant understand authentication, rate limiting, and best practices when working with the Neon API.

## Authentication

The Neon API uses API keys to authenticate requests. You can view and manage API keys for your account in the Neon Console. For instructions, refer to [Manage API keys](https://neon.com/docs/manage/api-keys).

The client must send an API key in the Authorization header when making requests, using the bearer authentication scheme. For example:

```bash
curl 'https://console.neon.tech/api/v2/projects' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json'
```

## Neon API base URL

The base URL for a Neon API request is:

```text
https://console.neon.tech/api/v2/
```

Append a Neon API method path to the base URL to construct the full URL for a request. For example:

```text
https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}
```

## Using the Neon API reference to construct and execute requests

You can use the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) to execute Neon API requests. Select an endpoint, enter an API key token in the **Bearer** field in the **Authorization** section, and supply any required parameters and properties. For information about obtaining API keys, see [Manage API keys](https://neon.com/docs/manage/api-keys).

The [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) also provides request and response body examples that you can reference when constructing your own requests.
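Putting the pieces together, the following request appends a branches method path to the base URL and authenticates with the bearer scheme, as described above. This is a minimal sketch; replace `{project_id}` with your own project's ID:

```bash
# List the branches in a project: base URL + method path + bearer auth
curl 'https://console.neon.tech/api/v2/projects/{project_id}/branches' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
```

The response is JSON; see the [Neon API reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) for the exact response schema of each endpoint.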
For additional Neon API examples, refer to the following topics:

- [Manage API keys with the Neon API](https://neon.com/docs/manage/api-keys#manage-api-keys-with-the-neon-api)
- [Manage projects with the Neon API](https://neon.com/docs/manage/projects#manage-projects-with-the-neon-api)
- [Manage branches with the Neon API](https://neon.com/docs/manage/branches#branching-with-the-neon-api)
- [Manage computes with the Neon API](https://neon.com/docs/manage/computes#manage-computes-with-the-neon-api)
- [Manage roles with the Neon API](https://neon.com/docs/manage/users#manage-roles-with-the-neon-api)
- [Manage databases with the Neon API](https://neon.com/docs/manage/databases#manage-databases-with-the-neon-api)
- [View operations with the Neon API](https://neon.com/docs/manage/operations#operations-and-the-neon-api)

**Important**: When using the Neon API programmatically, you can poll the operation `status` to ensure that an operation is finished before proceeding with the next API request. For more information, see [Poll operation status](https://neon.com/docs/manage/operations#poll-operation-status).

## API rate limiting

Neon limits API requests to 700 requests per minute (about 11 per second), with bursts allowed up to 40 requests per second per route, per account. If you exceed this, you'll receive an HTTP 429 Too Many Requests error. These limits apply to all public API requests, including those made by the Neon Console. Limits may change, so make sure your app handles 429 errors and retries appropriately. Contact support if you need higher limits.

---

# Source: https://neon.com/llms/reference-cli-auth.txt

# Neon CLI commands — auth

> The Neon CLI commands documentation for 'auth' details the authentication processes and commands necessary for managing user access and credentials within the Neon database environment.

## Source

- [Neon CLI commands — auth HTML](https://neon.com/docs/reference/cli-auth): The original HTML version of this documentation

## Before you begin

Before running the `auth` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install).

## The `auth` command

Authenticates the user or caller to Neon.

### Usage

```bash
neon auth
```

The command launches a browser window where you can authorize the Neon CLI to access your Neon account. After granting permissions to the Neon CLI, your credentials are saved locally to a configuration file named `credentials.json`, enabling you to manage your account's projects from the command line.

```text
/home//.config/neonctl/credentials.json
```

**Note**: If you use Neon through the [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration), you must authenticate connections from the CLI client using a Neon API key (see below). The `neon auth` command requires an account registered through Neon rather than Vercel.

An alternative to authenticating using `neon auth` is to provide an API key when running a CLI command. You can do this using the global `--api-key` option or by setting the `NEON_API_KEY` variable. See [Global options](https://neon.com/docs/reference/neon-cli#global-options) for instructions.

**Info**: The authentication flow for the Neon CLI follows this order:

- If the `--api-key` option is provided, it takes precedence and is used for authentication.
- If the `--api-key` option is not provided, the `NEON_API_KEY` environment variable is used if it is set.
- If neither the `--api-key` option nor the `NEON_API_KEY` environment variable is set, the CLI falls back to the `credentials.json` file created by the `neon auth` command.
- If the credentials file is not found, the Neon CLI initiates the `neon auth` web authentication process.

#### Options

Only [global options](https://neon.com/docs/reference/neon-cli#global-options) apply.

---

# Source: https://neon.com/llms/reference-cli-branches.txt

# Neon CLI commands — branches

> The document details Neon CLI commands for managing branches, including creating, listing, and deleting branches within the Neon database environment.

## Source

- [Neon CLI commands — branches HTML](https://neon.com/docs/reference/cli-branches): The original HTML version of this documentation

## Before you begin

- Before running the `branches` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install).
- If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect).

## The `branches` command

The `branches` command allows you to list, create, rename, delete, and retrieve information about branches in your Neon project. It also permits setting a branch as the default branch, adding a compute to a branch, adding a [read replica](https://neon.com/docs/introduction/read-replicas), or performing a [schema diff](https://neon.com/docs/guides/schema-diff) between different branches.

## Usage

```bash
neon branches <subcommand> [options]
```

| Subcommand | Description |
| ---------- | ----------- |
| [list](https://neon.com/docs/reference/cli-branches#list) | List branches |
| [create](https://neon.com/docs/reference/cli-branches#create) | Create a branch |
| [reset](https://neon.com/docs/reference/cli-branches#reset) | Reset data to parent |
| [restore](https://neon.com/docs/reference/cli-branches#restore) | Restore a branch to a selected point in time |
| [rename](https://neon.com/docs/reference/cli-branches#rename) | Rename a branch |
| [schema-diff](https://neon.com/docs/reference/cli-branches#schema-diff) | Compare schemas |
| [set-default](https://neon.com/docs/reference/cli-branches#set-default) | Set a default branch |
| [set-expiration](https://neon.com/docs/reference/cli-branches#set-expiration) | Set expiration date for a branch |
| [add-compute](https://neon.com/docs/reference/cli-branches#add-compute) | Add replica to a branch |
| [delete](https://neon.com/docs/reference/cli-branches#delete) | Delete a branch |
| [get](https://neon.com/docs/reference/cli-branches#get) | Get a branch |

## list

This subcommand allows you to list branches in a Neon project.
#### Usage

```bash
neon branches list [options]
```

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `list` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |

#### Examples

- List branches with the default `table` output format. The information provided with this output format is limited compared to other formats, such as `json`.

```bash
neon branches list --project-id solitary-leaf-288182
┌────────────────────────┬─────────────┬──────────────────────┬──────────────────────┐
│ Id                     │ Name        │ Created At           │ Updated At           │
├────────────────────────┼─────────────┼──────────────────────┼──────────────────────┤
│ br-small-meadow-878874 │ production  │ 2023-07-06T13:15:12Z │ 2023-07-06T14:26:32Z │
├────────────────────────┼─────────────┼──────────────────────┼──────────────────────┤
│ br-round-queen-335380  │ development │ 2023-07-06T14:45:50Z │ 2023-07-06T14:45:50Z │
└────────────────────────┴─────────────┴──────────────────────┴──────────────────────┘
```

- List branches with the `json` output format. This format provides more information than the default `table` output format.

```bash
neon branches list --project-id solitary-leaf-288182 --output json
[
  {
    "id": "br-wild-boat-648259",
    "project_id": "solitary-leaf-288182",
    "name": "production",
    "current_state": "ready",
    "logical_size": 29515776,
    "creation_source": "console",
    "default": true,
    "cpu_used_sec": 78,
    "compute_time_seconds": 78,
    "active_time_seconds": 312,
    "written_data_bytes": 107816,
    "data_transfer_bytes": 0,
    "created_at": "2023-07-09T17:01:34Z",
    "updated_at": "2023-07-09T17:15:13Z"
  },
  {
    "id": "br-shy-cake-201321",
    "project_id": "solitary-leaf-288182",
    "parent_id": "br-wild-boat-648259",
    "parent_lsn": "0/1E88838",
    "name": "development",
    "current_state": "ready",
    "creation_source": "console",
    "default": false,
    "cpu_used_sec": 0,
    "compute_time_seconds": 0,
    "active_time_seconds": 0,
    "written_data_bytes": 0,
    "data_transfer_bytes": 0,
    "created_at": "2023-07-09T17:37:10Z",
    "updated_at": "2023-07-09T17:37:10Z"
  }
]
```

## create

This subcommand allows you to create a branch in a Neon project.

#### Usage

```bash
neon branches create [options]
```

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `create` subcommand supports these options:

| Option | Description | Type | Required |
| :----- | :---------- | :--- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--name` | The branch name | string | |
| `--parent` | Parent branch name, id, timestamp, or LSN. Defaults to the default branch | string | |
| `--compute` | Create a branch with or without a compute. By default, the branch is created with a read-write endpoint. The default value is `true`. To create a branch without a compute, use `--no-compute` | boolean | |
| `--type` | Type of compute to add. Choices are `read_write` (the default) or `read_only`. A read-only compute is a [read replica](https://neon.com/docs/introduction/read-replicas). | string | |
| `--suspend-timeout` | Duration of inactivity in seconds after which the compute is automatically suspended. The value `0` means use the global default. The value `-1` means never suspend. The default value is `300` seconds (5 minutes). The maximum value is `604800` seconds (1 week). | number | |
| `--cu` | The number of Compute Units. Could be a fixed size (e.g. "2") or a range delimited by a dash (e.g. "0.5-3"). | string | |
| `--psql` | Connect to a new branch via `psql`. `psql` must be installed to use this option. | boolean | |
| `--schema-only` | Create a schema-only branch. Requires exactly one read-write compute. | boolean | |
| `--expires-at` | Set an expiration timestamp (RFC 3339 format) for automatic branch deletion. The branch and its compute endpoints are permanently deleted at the specified time. | string | |

**Note**: When creating a branch from a protected parent branch, role passwords on the child branch are changed. For more information about this Protected Branches feature, see [New passwords generated for Postgres roles on child branches](https://neon.com/docs/guides/protected-branches#new-passwords-generated-for-postgres-roles-on-child-branches).

#### Examples

- Create a branch:

```bash
neon branches create
┌─────────────────────────┬─────────────────────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                      │ Name                    │ Default │ Created At           │ Updated At           │
├─────────────────────────┼─────────────────────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-mute-sunset-67218628 │ br-mute-sunset-67218628 │ false   │ 2023-08-03T20:07:27Z │ 2023-08-03T20:07:27Z │
└─────────────────────────┴─────────────────────────┴─────────┴──────────────────────┴──────────────────────┘
endpoints
┌───────────────────────────┬──────────────────────┐
│ Id                        │ Created At           │
├───────────────────────────┼──────────────────────┤
│ ep-floral-violet-94096438 │ 2023-08-03T20:07:27Z │
└───────────────────────────┴──────────────────────┘
connection_uris
┌──────────────────────────────────────────────────────────┐
│ Connection Uri                                           │
├──────────────────────────────────────────────────────────┤
│ postgresql://[user]:[password]@[neon_hostname]/[dbname]  │
└──────────────────────────────────────────────────────────┘
```

**Note**: If the parent branch has more than one role or database, the `branches create` command does not output a connection URI. As an alternative, you can use the `connection-string` command to retrieve the connection URI for a branch. This command includes options for specifying the role and database. See [Neon CLI commands — connection-string](https://neon.com/docs/reference/cli-connection-string).

- Create a branch with the `--output` format of the command set to `json`. This output format returns all of the branch response data, whereas the default `table` output format (shown in the preceding example) is limited in the information it can display.
```bash
neon branches create --output json
```

Details: Example output

```json
{
  "branch": {
    "id": "br-frosty-art-30264288",
    "project_id": "polished-shape-60485499",
    "parent_id": "br-polished-fire-02083731",
    "parent_lsn": "0/1E887C8",
    "name": "br-frosty-art-30264288",
    "current_state": "init",
    "pending_state": "ready",
    "creation_source": "neonctl",
    "default": false,
    "cpu_used_sec": 0,
    "compute_time_seconds": 0,
    "active_time_seconds": 0,
    "written_data_bytes": 0,
    "data_transfer_bytes": 0,
    "created_at": "2023-08-03T20:12:24Z",
    "updated_at": "2023-08-03T20:12:24Z"
  },
  "endpoints": [
    {
      "host": "ep-cool-darkness-123456.us-east-2.aws.neon.tech",
      "id": "ep-cool-darkness-123456",
      "project_id": "polished-shape-60485499",
      "branch_id": "br-frosty-art-30264288",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 1,
      "region_id": "aws-us-east-2",
      "type": "read_write",
      "current_state": "init",
      "pending_state": "active",
      "settings": {},
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "creation_source": "neonctl",
      "created_at": "2023-08-03T20:12:24Z",
      "updated_at": "2023-08-03T20:12:24Z",
      "proxy_host": "us-east-2.aws.neon.tech",
      "suspend_timeout_seconds": 0,
      "provisioner": "k8s-pod"
    }
  ],
  "connection_uris": [
    {
      "connection_uri": "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require",
      "connection_parameters": {
        "database": "dbname",
        "password": "AbC123dEf",
        "role": "alex",
        "host": "ep-cool-darkness-123456.us-east-2.aws.neon.tech",
        "pooler_host": "ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech"
      }
    }
  ]
}
```

- Create a branch with a user-defined name:

```bash
neon branches create --name feature/user-auth
```

- Set the compute size when creating a branch:

```bash
neon branches create --name mybranch --cu 2
```

- Set the compute's autoscaling range when creating a branch:

```bash
neon branches create --name mybranch --cu 0.5-3
```

- Create a branch with a [read replica](https://neon.com/docs/introduction/read-replicas) compute:

```bash
neon branches create --name my_read_replica_branch --type read_only
```

- Create a branch from a parent branch other than your `production` branch:

```bash
neon branches create --name feature/payment-api --parent development
```

- Create an instant restore branch by specifying the `--parent` option with a timestamp:

```bash
neon branches create --name data_recovery --parent 2023-07-11T10:00:00Z
```

The timestamp must be provided in RFC 3339 format. You can use this [timestamp converter](https://it-tools.tech/date-converter). For more information about instant restore, see [Instant restore](https://neon.com/docs/guides/branch-restore).

- Create a branch and connect to it with `psql`:

```bash
neon branches create --psql
```

- Create a branch, connect to it with `psql`, and run an `.sql` file:

```bash
neon branches create --psql -- -f dump.sql
```

- Create a branch, connect to it with `psql`, and run a query:

```bash
neon branches create --psql -- -c "SELECT version()"
```

- Create a schema-only branch:

```bash
neon branches create --schema-only
```

## reset

This command resets a child branch to the latest data from its parent.

#### Usage

```bash
neon branches reset <id|name> --parent
```

`<id|name>` refers to the branch ID or branch name. You can use either one for this operation.

`--parent` specifies the type of reset operation. Currently, Neon only supports reset from parent. This parameter is required for the operation to work.
In the future, Neon might add support for other reset types: for example, rewinding a branch to an earlier point in time.

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `reset` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project or context is not set |
| `--parent` | Reset to a parent branch | boolean | |
| `--preserve-under-name` | The name under which to preserve the old branch | string | |

#### Example

```bash
neon branches reset development --parent
┌──────────────────────┬─────────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                   │ Name        │ Default │ Created At           │ Last Reset At        │
├──────────────────────┼─────────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-aged-sun-a5qowy01 │ development │ false   │ 2024-05-07T09:31:59Z │ 2024-05-07T09:36:32Z │
└──────────────────────┴─────────────┴─────────┴──────────────────────┴──────────────────────┘
```

## restore

This command restores a branch to a specified point in time in its own or another branch's history.

#### Usage

```bash
neon branches restore <target-id|name> <source>[@(timestamp|lsn)]
```

`<target-id|name>` specifies the ID or name of the branch that you want to restore.

`<source>` specifies the source branch you want to restore from. Options are:

- `^self` — restores the selected branch to an earlier point in its own history. You must select a timestamp or LSN for this option (restoring to head is not an option). You also need to include a name for the backup branch using the `--preserve-under-name` parameter.
- `^parent` — restores the target branch to its parent. By default the target is restored to the latest (head) of its parent. Append `@timestamp` or `@lsn` to restore to an earlier point in the parent's history.
- `source branch ID` or `source branch name` — restores the target branch to the selected source branch. It restores the latest (head) by default. Append `@timestamp` or `@lsn` to restore to an earlier point in the source branch's history.

#### Options

In addition to the Neon CLI global options, the `restore` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | Context file path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project or context is not set |
| `--preserve-under-name` | Name for the backup created during restore | string | When restoring to `^self` |
#### Examples

Examples of the different kinds of restore operations you can do:

- [Restoring a branch to an earlier point in its history](https://neon.com/docs/reference/cli-branches#restoring-a-branch-to-an-earlier-point-in-its-own-history-with-backup)
- [Restoring to another branch's head](https://neon.com/docs/reference/cli-branches#restoring-a-branch-target-to-the-head-of-another-branch-source)
- [Restoring a branch to its parent](https://neon.com/docs/reference/cli-branches#restoring-a-branch-to-its-parent-at-an-earlier-point-in-time)

#### Restoring a branch to an earlier point in its own history (with backup)

This command restores the branch `production` to an earlier timestamp, saving to a backup branch called `production_restore_backup_2024-05-06`.

```bash
neon branches restore production ^self@2024-05-06T10:00:00.000Z --preserve-under-name production_restore_backup_2024-05-06
```

Results of the operation:

```bash
INFO: Restoring branch br-purple-dust-a5hok5mk to the branch br-purple-dust-a5hok5mk timestamp 2024-05-06T10:00:00.000Z
Restored branch
┌─────────────────────────┬──────┬──────────────────────┐
│ Id                      │ Name │ Last Reset At        │
├─────────────────────────┼──────┼──────────────────────┤
│ br-purple-dust-a5hok5mk │ main │ 2024-05-07T09:45:21Z │
└─────────────────────────┴──────┴──────────────────────┘
Backup branch
┌─────────────────────────┬──────────────────────────────────────┐
│ Id                      │ Name                                 │
├─────────────────────────┼──────────────────────────────────────┤
│ br-flat-forest-a5z016gm │ production_restore_backup_2024-05-06 │
└─────────────────────────┴──────────────────────────────────────┘
```

#### Restoring a branch (target) to the head of another branch (source)

This command restores the target branch `feature/user-auth` to the latest data (head) from the source branch `production`.

```bash
neon branches restore feature/user-auth production
```

Results of the operation:

```bash
INFO: Restoring branch br-restless-frost-69810125 to the branch br-curly-bar-82389180 head
Restored branch
┌────────────────────────────┬───────────────────┬──────────────────────┐
│ Id                         │ Name              │ Last Reset At        │
├────────────────────────────┼───────────────────┼──────────────────────┤
│ br-restless-frost-69810125 │ feature/user-auth │ 2024-02-21T15:42:34Z │
└────────────────────────────┴───────────────────┴──────────────────────┘
```

#### Restoring a branch to its parent at an earlier point in time

This command restores the branch `feature/user-auth` to a selected point in time from its parent branch.

```bash
neon branches restore feature/user-auth ^parent@2024-02-21T10:30:00.000Z
```

Results of the operation:

```bash
INFO: Restoring branch br-restless-frost-69810125 to the branch br-patient-union-a5s838zf timestamp 2024-02-21T10:30:00.000Z
Restored branch
┌────────────────────────────┬───────────────────┬──────────────────────┐
│ Id                         │ Name              │ Last Reset At        │
├────────────────────────────┼───────────────────┼──────────────────────┤
│ br-restless-frost-69810125 │ feature/user-auth │ 2024-02-21T15:55:04Z │
└────────────────────────────┴───────────────────┴──────────────────────┘
```

## rename

This subcommand allows you to rename a branch in a Neon project.

#### Usage

```bash
neon branches rename <id|name> <new-name> [options]
```

`<id|name>` refers to the branch ID or branch name. You can specify one or the other. `<new-name>` is the new name for the branch.
#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `rename` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |

#### Example

```bash
neon branches rename mybranch teambranch
┌───────────────────────┬────────────┬──────────────────────┬──────────────────────┐
│ Id                    │ Name       │ Created At           │ Updated At           │
├───────────────────────┼────────────┼──────────────────────┼──────────────────────┤
│ br-rough-sound-590393 │ teambranch │ 2023-07-09T20:46:58Z │ 2023-07-09T21:02:27Z │
└───────────────────────┴────────────┴──────────────────────┴──────────────────────┘
```

## schema-diff

This command:

- Compares the latest schemas of any two branches
- Compares against a specific point in its own or another branch's history

#### Usage

```bash
neon branches schema-diff [base-branch] [compare-source[@(timestamp|lsn)]]
```

`[base-branch]` specifies the branch you want to compare against. For example, if you want to compare a development branch against the production branch `production`, select `production` as your base. This setting is **optional**. If you leave it out, the operation uses either of the following as the base:

- The branch identified in the `set-context` file
- If no context is configured, it uses your project's default branch

`[compare-source]` specifies the branch or state you want to compare to the base. Options are:

- `^self` — compares the selected branch to an earlier point in its own history. You must specify a timestamp or LSN.
- `^parent` — compares the selected branch to the head of its parent branch. You can append `@timestamp` or `@lsn` to compare to an earlier point in the parent's history.
- `<branch-id|name>` — compares the selected branch to the head of another specified branch. Append `@timestamp` or `@lsn` to compare to an earlier point in the specified branch's history.

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `schema-diff` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project or context is not set |
| `--database`, `--db` | Name of the database for which the schema comparison is performed | string | |

**Note**: The `--no-color` or `--color false` [global option](https://neon.com/docs/reference/neon-cli#global-options) can be used to decolorize the CLI command output when using CLI commands in CI/CD pipelines.
#### Examples

Examples of different kinds of schema diff operations you can do:

- [Compare to another branch's head](https://neon.com/docs/reference/cli-branches#compare-to-another-branchs-head)
- [Compare to an earlier point in a branch's history](https://neon.com/docs/reference/cli-branches#comparing-a-branch-to-an-earlier-point-in-its-history)
- [Compare a branch to its parent](https://neon.com/docs/reference/cli-branches#comparing-a-branch-to-its-parent)
- [Compare to an earlier point in another branch's history](https://neon.com/docs/reference/cli-branches#comparing-a-branch-to-an-earlier-point-in-another-branchs-history)

#### Compare to another branch's head

This command compares the schema of the `production` branch to the head of the branch `development`.

```bash
neon branches schema-diff production development
```

The output indicates that in the table `public.playing_with_neon`, a new column `description character varying(255)` has been added in the `development` branch that is not present in the `production` branch.

```diff
--- Database: neondb (Branch: br-wandering-firefly-a50un462)
+++ Database: neondb (Branch: br-fancy-sky-a5cydw8p)
@@ -26,9 +26,10 @@

 CREATE TABLE public.playing_with_neon (
     id integer NOT NULL,
     name text NOT NULL,
-    value real
+    value real,
+    description character varying(255)
 );
```

#### Comparing a branch to an earlier point in its history

This command compares the schema of `feature/user-auth` to a previous state in its history at LSN 0/123456.

```bash
neon branches schema-diff feature/user-auth ^self@0/123456
```

#### Comparing a branch to its parent

This command compares the schema of `feature/user-auth` to the head of its parent branch.

```bash
neon branches schema-diff feature/user-auth ^parent
```

#### Comparing a branch to an earlier point in another branch's history

This command compares the schema of the `production` branch to the state of the `feature/payment-api` branch at timestamp `2024-06-01T00:00:00.000Z`.

```bash
neon branches schema-diff production feature/payment-api@2024-06-01T00:00:00.000Z
```

## set-default

This subcommand allows you to set a branch as the default branch in your Neon project.

#### Usage

```bash
neon branches set-default <id|name> [options]
```

`<id|name>` refers to the branch ID or branch name. You can specify one or the other.

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `set-default` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |

#### Example

```bash
neon branches set-default mybranch
┌────────────────────┬──────────┬─────────┬──────────────────────┬──────────────────────┐
│ Id                 │ Name     │ Default │ Created At           │ Updated At           │
├────────────────────┼──────────┼─────────┼──────────────────────┼──────────────────────┤
│ br-odd-frog-703504 │ mybranch │ true    │ 2023-07-11T12:22:12Z │ 2023-07-11T12:22:59Z │
└────────────────────┴──────────┴─────────┴──────────────────────┴──────────────────────┘
```

## set-expiration

This subcommand allows you to set or update the expiration date for a branch.
When the expiration time is reached, the branch and its compute endpoints are permanently deleted.

#### Usage

```bash
neon branches set-expiration <id|name> --expires-at <timestamp> [options]
```

`<id|name>` refers to the branch ID or branch name. You can specify one or the other.

`--expires-at <timestamp>` specifies the expiration timestamp in RFC 3339 format (e.g., `2025-08-15T18:00:00Z`).

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `set-expiration` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--expires-at` | Expiration timestamp in RFC 3339 format | string | |

#### Examples

- Set an expiration date for a branch:

```bash
neon branches set-expiration mybranch --expires-at 2025-08-15T18:00:00Z
```

- Remove expiration from a branch (omit the parameter):

```bash
neon branches set-expiration mybranch
```

## add-compute

This subcommand allows you to add a compute to an existing branch in your Neon project.

#### Usage

```bash
neon branches add-compute <id|name>
```

`<id|name>` refers to the branch ID or branch name. You can specify one or the other.

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `add-compute` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--type` | Type of compute to add. Choices are `read_only` (the default) or `read_write`. A read-only compute is a [read replica](https://neon.com/docs/introduction/read-replicas). A branch can have a single primary read-write compute and multiple read replica computes. | string | |
| `--cu` | Sets the compute size in Compute Units. For a fixed size, enter a single number (e.g., "2"). For autoscaling, enter a range with a dash (e.g., "0.5-3"). | string | |

#### Examples

- Add a read-only compute (a read replica) to a branch:

```bash
neon branches add-compute mybranch --type read_only
┌─────────────────────┬──────────────────────────────────────────────────┐
│ Id                  │ Host                                             │
├─────────────────────┼──────────────────────────────────────────────────┤
│ ep-rough-lab-865061 │ ep-rough-lab-865061.ap-southeast-1.aws.neon.tech │
└─────────────────────┴──────────────────────────────────────────────────┘
```

- Set the compute size when adding a compute to a branch:

```bash
neon branches add-compute main --cu 2
```

- Set the compute's autoscaling range when adding a compute to a branch:

```bash
neon branches add-compute main --cu 0.5-3
```

## delete

This subcommand allows you to delete a branch in a Neon project.
#### Usage

```bash
neon branches delete <id|name> [options]
```

`<id|name>` refers to the branch ID or branch name. You can specify one or the other.

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `delete` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |

#### Example

```bash
neon branches delete br-rough-sky-158193
┌─────────────────────┬─────────────────┬──────────────────────┬──────────────────────┐
│ Id                  │ Name            │ Created At           │ Updated At           │
├─────────────────────┼─────────────────┼──────────────────────┼──────────────────────┤
│ br-rough-sky-158193 │ my_child_branch │ 2023-07-09T20:57:39Z │ 2023-07-09T21:06:41Z │
└─────────────────────┴─────────────────┴──────────────────────┴──────────────────────┘
```

## get

This subcommand allows you to retrieve details about a branch.

#### Usage

```bash
neon branches get <id|name> [options]
```

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `get` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |

#### Examples

```bash
neon branches get production
┌────────────────────────┬────────────┬──────────────────────┬──────────────────────┐
│ Id                     │ Name       │ Created At           │ Updated At           │
├────────────────────────┼────────────┼──────────────────────┼──────────────────────┤
│ br-small-meadow-878874 │ production │ 2023-07-06T13:15:12Z │ 2023-07-06T13:32:37Z │
└────────────────────────┴────────────┴──────────────────────┴──────────────────────┘
```

A `get` example with the `--output` format option set to `json`:

```bash
neon branches get production --output json
{
  "id": "br-lingering-bread-896475",
  "project_id": "noisy-rain-039137",
  "name": "production",
  "current_state": "ready",
  "logical_size": 29769728,
  "creation_source": "console",
  "default": false,
  "cpu_used_sec": 522,
  "compute_time_seconds": 522,
  "active_time_seconds": 2088,
  "written_data_bytes": 174433,
  "data_transfer_bytes": 20715,
  "created_at": "2023-06-28T10:17:28Z",
  "updated_at": "2023-07-11T12:22:59Z"
}
```

---

# Source: https://neon.com/llms/reference-cli-completion.txt

# Neon CLI commands — completion

> The document details the Neon CLI command completion feature, explaining how to enable and use shell command completion for efficient command-line operations within the Neon environment.

## Source

- [Neon CLI commands — completion HTML](https://neon.com/docs/reference/cli-completion): The original HTML version of this documentation

## Before you begin

Before running the `completion` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install).
## The `completion` command

This command generates a completion script for the `neonctl` command-line interface (CLI). The completion script, when installed, helps you type `neon` commands faster and more accurately. It does this by presenting the possible commands and options when you press the **tab** key after typing or partially typing a command or option.

### Usage

```bash
neon completion
```

The command outputs a completion script similar to the one shown below.

**Important**: Use the completion script that is output to your terminal or command window, as the script may differ depending on your operating environment.

```text
###-begin-neonctl-completions-###
#
# yargs command completion script
#
# Installation: neonctl completion >> ~/.bashrc
#    or neonctl completion >> ~/.bash_profile on OSX.
#
_neonctl_yargs_completions()
{
    local cur_word args type_list

    cur_word="${COMP_WORDS[COMP_CWORD]}"
    args=("${COMP_WORDS[@]}")

    # ask yargs to generate completions.
    type_list=$(neonctl --get-yargs-completions "${args[@]}")

    COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )

    # if no match was found, fall back to filename completion
    if [ ${#COMPREPLY[@]} -eq 0 ]; then
      COMPREPLY=()
    fi

    return 0
}
complete -o bashdefault -o default -F _neonctl_yargs_completions neonctl
###-end-neonctl-completions-###
```

Use the commands provided below to add the completion script to your shell configuration file, which is typically found in your home directory. Your shell configuration file may differ by platform. For example, on Ubuntu, you should have a `.bashrc` file, and on macOS, you might have a `.bash_profile` or `.zshrc` file. The `source` command causes the changes to take effect immediately in the current shell session.

Tab: bashrc

```bash
neon completion >> ~/.bashrc
source ~/.bashrc
```

Tab: bash_profile

```bash
neon completion >> ~/.bash_profile
source ~/.bash_profile
```

Tab: profile

```bash
neon completion >> ~/.profile
source ~/.profile
```

Tab: zshrc

```bash
neon completion >> ~/.zshrc
source ~/.zshrc
```

---

# Source: https://neon.com/llms/reference-cli-connection-string.txt

# Neon CLI commands — connection-string

> The document details the Neon CLI command for generating a connection string, enabling users to connect to their Neon database instances efficiently.

## Source

- [Neon CLI commands — connection-string HTML](https://neon.com/docs/reference/cli-connection-string): The original HTML version of this documentation

## Before you begin

- Before running the `connection-string` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install).
- If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect).

For information about connecting to Neon, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## The `connection-string` command

This command gets a Postgres connection string for connecting to a database in your Neon project. You can construct a connection string for any database in any branch. The connection string includes the password for the specified role.

### Usage

```bash
neon connection-string [branch[@timestamp|@LSN]] [options]
```

`branch` specifies the branch name or ID.
If a branch name or ID is omitted, the default branch is used. `@timestamp|@LSN` is used to specify a specific point in the branch's history for time travel connections. If omitted, the current state (HEAD) is used.

### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `connection-string` command supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--role-name` | Role name | string | Only if your branch has more than one role |
| `--database-name` | Database name | string | Only if your branch has more than one database |
| `--pooled` | Construct a pooled connection. The default is `false`. | boolean | |
| `--prisma` | Construct a connection string for use with Prisma. The default is `false`. | boolean | |
| `--endpoint-type` | The compute type. The default is `read_write`. The choices are `read_only` and `read_write`. | string | |
| `--extended` | Show extended information. The default is `false`. | boolean | |
| `--psql` | Connect to a database via psql using connection string. `psql` must be installed to use this option. | boolean | |

### Examples

- Get a basic connection string for the current project, branch, and database:

```bash
neon connection-string mybranch
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

- Get a pooled connection string for the current project, branch, and database with the `--pooled` option. This option adds a `-pooler` suffix to the host name, which enables connection pooling for clients that use this connection string.

```bash
neon connection-string --pooled
postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

- Get a connection string for use with Prisma for the current project, branch, and database. The `--prisma` option adds the `connect_timeout=30` option to the connection string to ensure that connections from Prisma Client do not time out.

```bash
neon connection-string --prisma
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require&connect_timeout=30
```

- Get a connection string to a specific point in a branch's history by appending `@timestamp` or `@lsn`. Availability depends on your configured [restore window](https://neon.com/docs/manage/projects#configure-restore-window).

```bash
neon connection-string @2024-04-21T00:00:00Z
```

For additional examples, see [How to use Time Travel](https://neon.com/docs/guides/time-travel-assist#how-to-use-time-travel).

- Get a connection string and connect with `psql`.

```bash
neon connection-string --psql
```

- Get a connection string, connect with `psql`, and run an `.sql` file.

```bash
neon connection-string --psql -- -f dump.sql
```

- Get a connection string, connect with `psql`, and run a query.
```bash
neon connection-string --psql -- -c "SELECT version()"
```

---

# Source: https://neon.com/llms/reference-cli-databases.txt

# Neon CLI commands — databases

> The document details Neon CLI commands for managing databases, including creating, listing, and deleting databases within the Neon environment.

## Source

- [Neon CLI commands — databases HTML](https://neon.com/docs/reference/cli-databases): The original HTML version of this documentation

## Before you begin

- Before running the `databases` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install).
- If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect).

For information about databases in Neon, see [Manage databases](https://neon.com/docs/manage/databases).

## The `databases` command

### Usage

The `databases` command allows you to list, create, and delete databases in a Neon project.

| Subcommand | Description |
| ---------- | ----------- |
| [list](https://neon.com/docs/reference/cli-databases#list) | List databases |
| [create](https://neon.com/docs/reference/cli-databases#create) | Create a database |
| [delete](https://neon.com/docs/reference/cli-databases#delete) | Delete a database |

### list

This subcommand allows you to list databases.

#### Usage

```bash
neon databases list [options]
```

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `list` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--branch` | Branch ID or name | string | |

If a branch ID or name is not provided, the command lists databases for the default branch of the project.

#### Example

```bash
neon databases list --branch br-autumn-dust-190886
┌────────┬────────────┬──────────────────────┐
│ Name   │ Owner Name │ Created At           │
├────────┼────────────┼──────────────────────┤
│ neondb │ daniel     │ 2023-06-19T18:27:19Z │
└────────┴────────────┴──────────────────────┘
```

### create

This subcommand allows you to create a database.
#### Usage

```bash
neon databases create [options]
```

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `create` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--branch` | Branch ID or name | string | |
| `--name` | The name of the database | string | ✓ |
| `--owner-name` | The name of the role that owns the database | string | |

- If a branch ID or name is not provided, the command creates the database in the default branch of the project.
- If the `--owner-name` option is not specified, the current user becomes the database owner.

#### Example

```bash
neon databases create --name mynewdb --owner-name john
┌─────────┬────────────┬──────────────────────┐
│ Name    │ Owner Name │ Created At           │
├─────────┼────────────┼──────────────────────┤
│ mynewdb │ john       │ 2023-06-19T23:45:45Z │
└─────────┴────────────┴──────────────────────┘
```

### delete

This subcommand allows you to delete a database.

#### Usage

```bash
neon databases delete <database> [options]
```

`<database>` is the database name.

#### Options

In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `delete` subcommand supports these options:

| Option | Description | Type | Required |
| ------ | ----------- | ---- | :------: |
| `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | |
| `--project-id` | Project ID | string | Only if your Neon account has more than one project |
| `--branch` | Branch ID or name | string | |

If a branch ID or name is not provided, it is assumed the database resides in the default branch of the project.

#### Example

```bash
neon databases delete mydb
┌──────┬────────────┬──────────────────────┐
│ Name │ Owner Name │ Created At           │
├──────┼────────────┼──────────────────────┤
│ mydb │ daniel     │ 2023-06-19T23:45:45Z │
└──────┴────────────┴──────────────────────┘
```

---

# Source: https://neon.com/llms/reference-cli-init.txt

# Neon CLI commands — init

> The Neon CLI commands documentation for "init" details the process of initializing a new Neon project, including setting up the necessary configuration files and environment for development.

## Source

- [Neon CLI commands — init HTML](https://neon.com/docs/reference/cli-init): The original HTML version of this documentation

## Before you begin

- Before running the `init` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/neon-cli#install-the-neon-cli).
- If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect).
## The `init` command The `init` command installs the Neon MCP (Model Context Protocol) Server and authenticates it to Neon using a Neon API key. ### Usage #### From the CLI You can run the command from the Neon CLI to install the Neon MCP Server and authenticate: ```bash neon init ``` #### npx You can also run the `init` command in the root directory of your app with `npx` instead of installing the Neon CLI locally: ```bash npx neonctl@latest init ``` After running the command, you can ask your Cursor chat to "Get started with Neon using MCP Resource", as shown in the example below. The Neon MCP Server uses AI rules defined in [neon-get-started.mdc](https://github.com/neondatabase-labs/ai-rules/blob/main/neon-get-started.mdc) to help you get started with Neon, including helping you configure a database connection. ### Options This command supports [global options](https://neon.com/docs/reference/neon-cli#global-options) only. ## Example Navigate to the root directory of your application and run the `npx neonctl@latest init` command: ```bash cd /path/to/your/app npx neonctl@latest init ``` The command outputs progress as it completes each step: ```bash npx neonctl@latest init ┌ Adding Neon to your project │ ◒ Authenticating. ┌────────┬──────────────────┬────────┬────────────────┐ │ Login │ Email │ Name │ Projects Limit │ ├────────┼──────────────────┼────────┼────────────────┤ │ alex │ alex@domain.com │ Alex │ 20 │ └────────┴──────────────────┴────────┴────────────────┘ ◇ Authentication successful ✓ │ ◇ Installed Neon MCP server │ ◇ Success! Neon is now ready to use with Cursor. │ │ ◇ What's next? ────────────────────────────────────────────────────────────────────────────╮ │ │ │ Restart Cursor and ask Cursor to "Get started with Neon using MCP Resource" in the chat │ │ │ ├───────────────────────────────────────────────────────────────────────────────────────────╯ │ └ Have feedback? Email us at feedback@neon.tech ``` ## AI Assistant Support This feature is currently in beta for Cursor, with VS Code and Claude Code support coming soon. --- # Source: https://neon.com/llms/reference-cli-install.txt # Neon CLI — Install and connect > The document details the installation and connection process for the Neon CLI, guiding users through setting up and accessing Neon's cloud-native PostgreSQL database services. ## Source - [Neon CLI — Install and connect HTML](https://neon.com/docs/reference/cli-install): The original HTML version of this documentation This section describes how to install the Neon CLI and connect via web authentication or API key. Tab: macOS **Install with [Homebrew](https://formulae.brew.sh/formula/neonctl)** ```bash brew install neonctl ``` **Install via [npm](https://www.npmjs.com/package/neonctl)** ```shell npm i -g neonctl ``` Requires [Node.js 18.0](https://nodejs.org/en/download/) or higher. **Install with bun** ```bash bun install -g neonctl ``` **macOS binary** Download the binary. No installation required. ```bash curl -sL https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-macos -o neonctl ``` Run the CLI from the download directory: ```bash neon <command> [options] ``` Tab: Windows **Install via [npm](https://www.npmjs.com/package/neonctl)** ```shell npm i -g neonctl ``` Requires [Node.js 18.0](https://nodejs.org/en/download/) or higher. **Install with bun** ```bash bun install -g neonctl ``` **Windows binary** Download the binary. No installation required.
```bash curl -sL -O https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-win.exe ``` Run the CLI from the download directory: ```bash neonctl-win.exe <command> [options] ``` Tab: Linux **Install via [npm](https://www.npmjs.com/package/neonctl)** ```shell npm i -g neonctl ``` **Install with bun** ```bash bun install -g neonctl ``` **Linux binary** Download the x64 or ARM64 binary, depending on your processor type. No installation required. x64: ```bash curl -sL https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-linux-x64 -o neonctl ``` ARM64: ```bash curl -sL https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-linux-arm64 -o neonctl ``` Run the CLI from the download directory: ```bash neon <command> [options] ``` **Note** Use the Neon CLI without installing: You can run the Neon CLI without installing it using **npx** (Node Package eXecute) or the `bun` equivalent, **bunx**. For example: ```shell # npx npx neonctl # bunx bunx neonctl ``` ### Upgrade When a new version is released, you can update your Neon CLI using the methods described below, depending on how you installed the CLI initially. To check for the latest version, refer to the **Releases** information on the [Neon CLI GitHub repository](https://github.com/neondatabase/neonctl) page. To check your installed version of the Neon CLI, run the following command: ```bash neon --version ``` Tab: npm To upgrade the Neon CLI via [npm](https://www.npmjs.com/package/neonctl): ```shell npm update -g neonctl ``` Tab: Homebrew To upgrade the Neon CLI with [Homebrew](https://formulae.brew.sh/formula/neonctl): ```bash brew upgrade neonctl ``` Tab: Binary To upgrade a [binary](https://github.com/neondatabase/neonctl/releases) version, download the `latest` binary as described in the install instructions above, and replace your old binary with the new one. If you're using the Neon CLI in CI/CD tools like GitHub Actions, you can safely pin the Neon CLI to `latest`, as we prioritize stability for CI/CD processes. Tab: npm In your GitHub Actions workflow, you can use the `latest` tag with `npm`: ```yaml - name: Install Neon CLI run: npm install -g neonctl@latest ``` Tab: Homebrew Homebrew automatically fetches the latest version when running the `install` or `upgrade` command. You can include the following in your workflow: ```yaml - name: Install Neon CLI run: brew install neonctl || brew upgrade neonctl ``` Tab: Binary If you're downloading a binary, reference the latest release from the [Releases page](https://github.com/neondatabase/neonctl/releases). For example, you can use `curl` or `wget` in your workflow: ```yaml - name: Install Neon CLI run: | curl -L https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-linux-x64 -o /usr/local/bin/neon chmod +x /usr/local/bin/neon ``` ## Connect The Neon CLI supports connecting via web authentication or API key. ### Web authentication Run the following command to connect to Neon via web authentication: ```bash neon auth ``` The [neon auth](https://neon.com/docs/reference/cli-auth) command launches a browser window where you can authorize the Neon CLI to access your Neon account. If you have not authenticated previously, running a Neon CLI command automatically launches the web authentication process unless you have specified an API key.
**Note**: If you use Neon through the [Vercel-Managed Integration](https://neon.com/docs/guides/vercel-managed-integration), you must authenticate connections from the CLI client using a Neon API key (see below). The `neon auth` command requires an account registered through Neon rather than Vercel. ### API key To authenticate with a Neon API key, you can specify the `--api-key` option when running a Neon CLI command. For example, the following `neon projects list` command authenticates to Neon using the `--api-key` option: ```bash neon projects list --api-key <neon_api_key> ``` To avoid including the `--api-key` option with each CLI command, you can export your API key to the `NEON_API_KEY` environment variable. ```bash export NEON_API_KEY=<neon_api_key> ``` For information about obtaining a Neon API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). ## Configure autocompletion The Neon CLI supports autocompletion, which you can configure in a few easy steps. See [Neon CLI commands — completion](https://neon.com/docs/reference/cli-completion) for instructions. --- # Source: https://neon.com/llms/reference-cli-ip-allow.txt # Neon CLI commands — ip-allow > The document details the usage of the `ip-allow` command in the Neon CLI, which manages IP allowlists for controlling access to Neon database instances. ## Source - [Neon CLI commands — ip-allow HTML](https://neon.com/docs/reference/cli-ip-allow): The original HTML version of this documentation ## Before you begin - Before running the `ip-allow` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). For information about Neon's **IP Allow** feature, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). ## The `ip-allow` command The `ip-allow` command allows you to perform `list`, `add`, `remove`, and `reset` actions on the IP allowlist for your Neon project. You can define an allowlist with individual IP addresses, IP ranges, or [CIDR notation](https://neon.com/docs/reference/glossary#cidr-notation). ### Usage ```bash neon ip-allow [options] ``` | Subcommand | Description | | ----------------- | ----------------------------------------- | | [list](https://neon.com/docs/reference/cli-ip-allow#list) | List the IP allowlist | | [add](https://neon.com/docs/reference/cli-ip-allow#add) | Add IP addresses to the IP allowlist | | [remove](https://neon.com/docs/reference/cli-ip-allow#remove) | Remove IP addresses from the IP allowlist | | [reset](https://neon.com/docs/reference/cli-ip-allow#reset) | Reset the IP allowlist | ### list This subcommand allows you to list addresses in the IP allowlist.
#### Usage ```bash neon ip-allow list [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `list` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | #### Examples ```bash neon ip-allow list --project-id cold-grass-40154007 ``` List the IP allowlist with the `--output` format set to `json`: ```bash neon ip-allow list --project-id cold-grass-40154007 --output json ``` ### add This subcommand allows you to add IP addresses to the IP allowlist for your Neon project. #### Usage ```bash neon ip-allow add [ips ...] [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `add` subcommand supports these options: | Option | Description | Type | Required | | ------------------ | ------------------------------------------------------------------------------------------------------------------ | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | | `--protected-only` | If true, the list will be applied only to protected branches. Use `--protected-only false` to remove this setting. | string | | #### Example ```bash neon ip-allow add 192.0.2.3 --project-id cold-grass-40154007 ``` ### remove This subcommand allows you to remove IP addresses from the IP allowlist for your project. #### Usage ```bash neon ip-allow remove [ips ...] [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `remove` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | #### Example ```bash neon ip-allow remove 192.0.2.3 --project-id cold-grass-40154007 ``` ### reset This subcommand allows you to reset the IP allowlist. You can reset it to a different set of IP addresses or, if you specify no addresses, remove all currently defined IP addresses. #### Usage ```bash neon ip-allow reset [ips ...]
[options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `reset` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | #### Example ```bash neon ip-allow reset 192.0.2.1 --project-id cold-grass-40154007 ``` --- # Source: https://neon.com/llms/reference-cli-me.txt # Neon CLI commands — me > The Neon CLI commands document outlines the usage and functionality of the 'me' command, which shows information about the currently authenticated Neon CLI user. ## Source - [Neon CLI commands — me HTML](https://neon.com/docs/reference/cli-me): The original HTML version of this documentation ## Before you begin - Before running the `me` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). ## The `me` command This command shows information about the current Neon CLI user. ### Usage ```bash neon me ``` ### Options Only [global options](https://neon.com/docs/reference/neon-cli#global-options) apply. ### Examples ```bash neon me ┌────────────────┬──────────────────────────┬─────────────┬────────────────┐ │ Login │ Email │ Name │ Projects Limit │ ├────────────────┼──────────────────────────┼─────────────┼────────────────┤ │ sally │ sally@example.com │ Sally Smith │ 1 │ └────────────────┴──────────────────────────┴─────────────┴────────────────┘ ``` This example shows `neon me` with `--output json`, which provides additional data not shown with the default `table` output format.
```json neon me -o json { "active_seconds_limit": 360000, "billing_account": { "payment_source": { "type": "" }, "subscription_type": "free", "quota_reset_at_last": "2023-07-01T00:00:00Z", "email": "sally@example.com", "address_city": "", "address_country": "", "address_line1": "", "address_line2": "", "address_postal_code": "", "address_state": "" }, "auth_accounts": [ { "email": "sally@example.com", "image": "https://lh3.googleusercontent.com/a/AItbvml5rjEQkmt-h_abcdef-MwVtfpek7Aa_xk3cIS_=s96-c", "login": "sally", "name": "Sally Smith", "provider": "google" }, { "email": "sally@example.com", "image": "", "login": "sally", "name": "sally@example.com", "provider": "hasura" } ], "email": "sally@example.com", "id": "8a9f604e-d04e-1234-baf7-e78909a5d123", "image": "https://lh3.googleusercontent.com/a/AItbvml5rjEQkmt-h_abcdef-MwVtfpek7Aa_xk3cIS_=s96-c", "login": "sally", "name": "Sally Smith", "projects_limit": 10, "branches_limit": 10, "max_autoscaling_limit": 0.25, "plan": "free" } ``` --- # Source: https://neon.com/llms/reference-cli-operations.txt # Neon CLI commands — operations > The Neon CLI commands documentation for operations details how to list the system operations performed on a Neon project. ## Source - [Neon CLI commands — operations HTML](https://neon.com/docs/reference/cli-operations): The original HTML version of this documentation ## Before you begin - Before running the `operations` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). For information about operations in Neon, see [System operations](https://neon.com/docs/manage/operations). ## The `operations` command The `operations` command allows you to list operations for a Neon project. ### Usage ```bash neon operations [options] ``` | Subcommand | Description | | ------------- | --------------- | | [list](https://neon.com/docs/reference/cli-operations#list) | List operations | ### list This subcommand allows you to list operations.
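As with the other `list` subcommands, you can pair it with the `--output json` global option to get machine-readable results, which is convenient when checking operation status from a script. A quick sketch (the project ID is illustrative): ```bash neon operations list --project-id cold-grass-40154007 --output json ```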
#### Usage ```bash neon operations list [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `list` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | #### Example ```bash neon operations list ┌──────────────────────────────────────┬────────────────────┬──────────┬──────────────────────┐ │ Id │ Action │ Status │ Created At │ ├──────────────────────────────────────┼────────────────────┼──────────┼──────────────────────┤ │ fce8642e-259e-4662-bdce-518880aee723 │ apply_config │ finished │ 2023-06-20T00:45:19Z │ ├──────────────────────────────────────┼────────────────────┼──────────┼──────────────────────┤ │ dc1dfb0c-b854-474b-be20-2ea1d2172563 │ apply_config │ finished │ 2023-06-20T00:43:17Z │ ├──────────────────────────────────────┼────────────────────┼──────────┼──────────────────────┤ │ 7a83e300-cf5f-4c1a-b9b5-569b6d6feab9 │ suspend_compute │ finished │ 2023-06-19T23:50:56Z │ └──────────────────────────────────────┴────────────────────┴──────────┴──────────────────────┘ ``` --- # Source: https://neon.com/llms/reference-cli-orgs.txt # Neon CLI commands — orgs > The Neon CLI commands documentation for organizations outlines how to list and manage the organizations that you belong to within the Neon platform. ## Source - [Neon CLI commands — orgs HTML](https://neon.com/docs/reference/cli-orgs): The original HTML version of this documentation ## Before you begin - Before running the `orgs` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). ## The `orgs` command Use this command to manage the organizations you belong to within the Neon CLI. ### Usage ```bash neon orgs [options] ``` ### Sub-commands #### `list` This sub-command lists all organizations associated with the authenticated Neon CLI user. ```bash neon orgs list ``` ### Options Only [global options](https://neon.com/docs/reference/neon-cli#global-options) apply. ### Examples Here is the default output in table format. ```bash neon orgs list Organizations ┌────────────────────────┬──────────────────┐ │ Id │ Name │ ├────────────────────────┼──────────────────┤ │ org-xxxxxxxx-xxxxxxxx │ Example Org │ └────────────────────────┴──────────────────┘ ``` This next example shows `neon orgs list` with `--output json`, which also shows the `created_at` and `updated_at` timestamps not shown with the default `table` output format.
```json neon orgs list -o json [ { "id": "org-xxxxxxxx-xxxxxxxx", "name": "Example Org", "handle": "example-org-xxxxxxxx", "created_at": "2024-04-22T16:50:41Z", "updated_at": "2024-06-28T15:38:26Z" } ] ``` --- # Source: https://neon.com/llms/reference-cli-projects.txt # Neon CLI commands — projects > The document details Neon CLI commands related to project management, enabling users to create, manage, and configure Neon projects through the command line interface. ## Source - [Neon CLI commands — projects HTML](https://neon.com/docs/reference/cli-projects): The original HTML version of this documentation ## Before you begin - Before running the `projects` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). For information about projects in Neon, see [Projects](https://neon.com/docs/manage/projects). ## The `projects` command The `projects` command allows you to list, create, update, delete, and retrieve information about Neon projects. ### Usage ```bash neon projects [options] ``` | Subcommand | Description | | ----------------- | ---------------- | | [list](https://neon.com/docs/reference/cli-projects#list) | List projects | | [create](https://neon.com/docs/reference/cli-projects#create) | Create a project | | [update](https://neon.com/docs/reference/cli-projects#update) | Update a project | | [delete](https://neon.com/docs/reference/cli-projects#delete) | Delete a project | | [get](https://neon.com/docs/reference/cli-projects#get) | Get a project | ### list This subcommand allows you to list projects that belong to your Neon account, as well as any projects that were shared with you. #### Usage ```bash neon projects list [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `list` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--org-id` | List all projects belonging to the specified organization.
| string | | #### Examples - List all projects belonging to your personal account: ```bash neon projects list Projects ┌────────────────────────┬────────────────────┬───────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├────────────────────────┼────────────────────┼───────────────┼──────────────────────┤ │ crimson-voice-12345678 │ frontend │ aws-us-east-2 │ 2024-04-15T11:17:30Z │ ├────────────────────────┼────────────────────┼───────────────┼──────────────────────┤ │ calm-thunder-12121212 │ backend │ aws-us-east-2 │ 2024-04-10T15:21:01Z │ ├────────────────────────┼────────────────────┼───────────────┼──────────────────────┤ │ nameless-hall-87654321 │ billing │ aws-us-east-2 │ 2024-04-10T14:35:17Z │ └────────────────────────┴────────────────────┴───────────────┴──────────────────────┘ Shared with you ┌───────────────────┬────────────────────┬──────────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├───────────────────┼────────────────────┼──────────────────┼──────────────────────┤ │ noisy-fire-212121 │ API │ aws-eu-central-1 │ 2023-04-22T18:41:13Z │ └───────────────────┴────────────────────┴──────────────────┴──────────────────────┘ ``` - List all projects belonging to the specified organization: ```bash neon projects list --org-id org-xxxx-xxxx Projects ┌───────────────────────────┬───────────────────────────┬────────────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├───────────────────────────┼───────────────────────────┼────────────────────┼──────────────────────┤ │ bright-moon-12345678 │ dev-backend-api │ aws-us-east-2 │ 2024-07-26T11:43:37Z │ ├───────────────────────────┼───────────────────────────┼────────────────────┼──────────────────────┤ │ silent-forest-87654321 │ test-integration-service │ aws-eu-central-1 │ 2024-05-30T22:14:49Z │ ├───────────────────────────┼───────────────────────────┼────────────────────┼──────────────────────┤ │ crystal-stream-23456789 │ staging-web-app │ aws-us-east-2 │ 2024-05-17T13:47:35Z │ └───────────────────────────┴───────────────────────────┴────────────────────┴──────────────────────┘ ``` ### create This subcommand allows you to create a Neon project. #### Usage ```bash neon projects create [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `create` subcommand supports these options: | Option | Description | Type | Required | | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | :------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name. | string | | | `--block-public-connections` | Blocks public internet connections. See [Private Networking](https://neon.com/docs/guides/neon-private-networking). | boolean | | | `--block-vpc-connections` | Blocks connections using VPC. See [Private Networking](https://neon.com/docs/guides/neon-private-networking). | boolean | | | `--hipaa` | Enable the project for HIPAA. See [HIPAA Compliance](https://neon.com/docs/security/hipaa). | boolean | | | `--name` | The project name. The project ID is used if a name is not specified. | string | | | `--region-id` | The region ID.
Possible values: `aws-us-west-2`, `aws-ap-southeast-1`, `aws-ap-southeast-2`, `aws-eu-central-1`, `aws-us-east-1`, `aws-us-east-2`, `azure-eastus2`. Defaults to `aws-us-east-2` if not specified. | string | | | `--org-id` | The organization ID where you want this project to be created. If unspecified, your [default organization](https://neon.com/docs/reference/glossary#default-organization) will be used. | string | | | `--psql` | Connect to your new project's database via `psql` immediately on project creation. | boolean | | | `--database` | The database name. If not specified, the default database name will be used. | string | | | `--role` | The role name. If not specified, the default role name will be used. | string | | | `--set-context` | Set the current context to the new project. | boolean | | | `--cu` | The compute size for the default branch's primary compute. Can be a fixed size (e.g., "2") or a range delimited by a dash (e.g., "0.5-3"). | string | | **Note**: Neon projects created using the CLI use the default Postgres version, which is Postgres 17. To create a project with a different Postgres version, you can use the [Neon Console](https://neon.com/docs/manage/projects#create-a-project) or [Neon API](https://api-docs.neon.tech/reference/createproject). #### Examples - Create a project with a user-defined name in a specific region: ```bash neon projects create --name mynewproject --region-id aws-us-west-2 ┌───────────────────┬──────────────┬───────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├───────────────────┼──────────────┼───────────────┼──────────────────────┤ │ muddy-wood-859533 │ mynewproject │ aws-us-west-2 │ 2023-07-09T17:04:29Z │ └───────────────────┴──────────────┴───────────────┴──────────────────────┘ ┌──────────────────────────────────────────────────────────────────────────────────────┐ │ Connection Uri │ ├──────────────────────────────────────────────────────────────────────────────────────┤ │ postgresql://[user]:[password]@[neon_hostname]/[dbname] │ └──────────────────────────────────────────────────────────────────────────────────────┘ ``` **Tip**: The Neon CLI provides a `neon connection-string` command you can use to extract a connection URI programmatically. See [Neon CLI commands — connection-string](https://neon.com/docs/reference/cli-connection-string). - Create a project with the `--output` format of the command set to `json`. This output format returns all of the project response data, whereas the default `table` output format (shown in the preceding example) is limited in the information it can display.
```bash neon projects create --output json ``` Details: Example output ```json { "project": { "data_storage_bytes_hour": 0, "data_transfer_bytes": 0, "written_data_bytes": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "cpu_used_sec": 0, "id": "long-wind-77910944", "platform_id": "aws", "region_id": "aws-us-east-2", "name": "long-wind-77910944", "provisioner": "k8s-pod", "default_endpoint_settings": { "autoscaling_limit_min_cu": 1, "autoscaling_limit_max_cu": 1, "suspend_timeout_seconds": 0 }, "pg_version": 17, "proxy_host": "us-east-2.aws.neon.tech", "branch_logical_size_limit": 204800, "branch_logical_size_limit_bytes": 214748364800, "store_passwords": true, "creation_source": "neonctl", "history_retention_seconds": 604800, "created_at": "2023-08-04T16:16:45Z", "updated_at": "2023-08-04T16:16:45Z", "consumption_period_start": "0001-01-01T00:00:00Z", "consumption_period_end": "0001-01-01T00:00:00Z", "owner_id": "e56ad68e-7f2f-4d74-928c-9ea25d7e9864" }, "connection_uris": [ { "connection_uri": "postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require", "connection_parameters": { "database": "dbname", "password": "AbC123dEf", "role": "alex", "host": "ep-cool-darkness-123456.us-east-2.aws.neon.tech", "pooler_host": "ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech" } } ] } ``` - Create a project and connect to it with `psql`. ```bash neon projects create --psql ``` - Create a project, connect to it with `psql`, and run an `.sql` file. ```bash neon projects create --psql -- -f dump.sql ``` - Create a project, connect to it with `psql`, and run a query. ```bash neon projects create --psql -- -c "SELECT version()" ``` - Create a project and set the Neon CLI project context. ```bash neon projects create --psql --set-context ``` ### update This subcommand allows you to update a Neon project. #### Usage ```bash neon projects update <id> [options] ``` The `id` is the project ID, which you can obtain by listing your projects or from the **Settings** page in the Neon Console. #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `update` subcommand supports these options: | Option | Description | Type | Required | | ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ------- | :------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--block-vpc-connections` | When set, connections using VPC endpoints are disallowed. Use `--block-vpc-connections=false` to set the value to false. | boolean | | | `--block-public-connections` | When set, connections from the public internet are disallowed. Use `--block-public-connections=false` to set the value to false. | boolean | | | `--hipaa` | Enable the project for HIPAA. See [HIPAA Compliance](https://neon.com/docs/security/hipaa). | boolean | | | `--cu` | The compute size for the default branch's primary compute. Can be a fixed size (e.g., "2") or a range delimited by a dash (e.g., "0.5-3"). | string | | | `--name` | The project name. The value cannot be empty.
| string | ✓ | #### Examples - Update the project name: ```bash neon projects update muddy-wood-859533 --name dev_project_1 ┌───────────────────┬───────────────┬───────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├───────────────────┼───────────────┼───────────────┼──────────────────────┤ │ muddy-wood-859533 │ dev_project_1 │ aws-us-west-2 │ 2023-07-09T17:04:29Z │ └───────────────────┴───────────────┴───────────────┴──────────────────────┘ ``` - Block connections from the public internet: This option is used with Neon's Private Networking feature to block access from the public internet. See [Private Networking — Restrict public internet access](https://neon.com/docs/guides/neon-private-networking#restrict-public-internet-access). You must specify the ID of your Neon project, as shown below. ```bash neon projects update orange-credit-12345678 --block-public-connections=true ``` ### delete This subcommand allows you to delete a Neon project. #### Usage ```bash neon projects delete <id> [options] ``` The `id` is the project ID, which you can obtain by listing your projects or from the **Settings** page in the Neon Console. #### Options Only [global options](https://neon.com/docs/reference/neon-cli#global-options) apply. #### Example ```bash neon projects delete muddy-wood-859533 ┌───────────────────┬───────────────┬───────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├───────────────────┼───────────────┼───────────────┼──────────────────────┤ │ muddy-wood-859533 │ dev_project_1 │ aws-us-west-2 │ 2023-07-09T17:04:29Z │ └───────────────────┴───────────────┴───────────────┴──────────────────────┘ ``` Information about the deleted project is displayed. You can verify that the project was deleted by running `neon projects list`. ### get This subcommand allows you to retrieve details about a Neon project. #### Usage ```bash neon projects get <id> [options] ``` The `id` is the project ID, which you can obtain by listing your projects or from the **Settings** page in the Neon Console. #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `get` subcommand supports this option: | Option | Description | Type | Required | | ---------------- | ---------------------------------------------------------------------------------------------- | ------ | :------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name. | string | | #### Example ```bash neon projects get muddy-wood-859533 ┌───────────────────┬───────────────┬───────────────┬──────────────────────┐ │ Id │ Name │ Region Id │ Created At │ ├───────────────────┼───────────────┼───────────────┼──────────────────────┤ │ muddy-wood-859533 │ dev_project_1 │ aws-us-west-2 │ 2023-07-09T17:04:29Z │ └───────────────────┴───────────────┴───────────────┴──────────────────────┘ ``` --- # Source: https://neon.com/llms/reference-cli-quickstart.txt # Neon CLI Quickstart > The Neon CLI Quickstart document guides users through the installation and basic usage of the Neon Command Line Interface, enabling efficient management and interaction with Neon databases. ## Source - [Neon CLI Quickstart HTML](https://neon.com/docs/reference/cli-quickstart): The original HTML version of this documentation The Neon CLI is a command-line interface that lets you manage Neon directly from the terminal. This guide will help you quickly set up and start using the Neon CLI.
## Install the CLI Choose your platform and install the Neon CLI: Tab: macOS **Install with Homebrew** ```bash brew install neonctl ``` **Install via npm** ```shell npm i -g neonctl ``` **Install with bun** ```bash bun install -g neonctl ``` Tab: Windows **Install via npm** ```shell npm i -g neonctl ``` **Install with bun** ```bash bun install -g neonctl ``` Tab: Linux **Install via npm** ```shell npm i -g neonctl ``` **Install with bun** ```bash bun install -g neonctl ``` Verify the installation by checking the CLI version: ```bash neon --version ``` For the latest version, refer to the [Neon CLI GitHub repository](https://github.com/neondatabase/neonctl). ## Authenticate Authenticate with your Neon account using one of these methods: **Web Authentication (recommended)** Run the command below to authenticate through your browser: ```bash neon auth ``` This will open a browser window where you can authorize the CLI to access your Neon account. **API Key Authentication** Alternatively, you can use a personal Neon API key. You can create one in the Neon Console. See [Create a personal API key](https://neon.com/docs/manage/api-keys#create-a-personal-api-key). ```bash neon projects list --api-key <neon_api_key> ``` To avoid entering your API key with each command, you can set it as an environment variable: ```bash export NEON_API_KEY=<neon_api_key> ``` For more about authenticating, see [Neon CLI commands — auth](https://neon.com/docs/reference/cli-auth). ## Set up your context file Context files allow you to use CLI commands without specifying your project ID or organization ID with each command. To set the context for your Neon project: ```bash neon set-context --project-id <project_id> ``` To set the context for both your Neon organization and a Neon project: ```bash neon set-context --org-id <org_id> --project-id <project_id> ``` **Info**: You can find your organization ID in the Neon Console by selecting your organization and navigating to **Settings**. You can find your Neon project ID by opening your project in the Neon Console and navigating to **Settings** > **General**. The `set-context` command creates a `.neon` file in your current directory with your project context. ```bash $ cat .neon { "projectId": "broad-surf-52155946", "orgId": "org-solid-base-83603457" } ``` You can also create named context files for different organization and project contexts: ```bash neon set-context --org-id <org_id> --project-id <project_id> --context-file dev_project ``` To switch contexts, add the `--context-file` option to any command, specifying your context file: ```bash neon branches list --context-file Documents/dev_project ``` For more about the `set-context` command, see [Neon CLI commands — set-context](https://neon.com/docs/reference/cli-set-context). ## Enable shell completion Next, you can set up autocompletion to make using the CLI faster: Tab: Bash ```bash neon completion >> ~/.bashrc source ~/.bashrc ``` Tab: Zsh ```bash neon completion >> ~/.zshrc source ~/.zshrc ``` Now you can press **Tab** to complete Neon CLI commands and options. For further details, see [Neon CLI commands — completion](https://neon.com/docs/reference/cli-completion). ## Common operations Here are some common operations you can perform with the Neon CLI: ### List your projects ```bash neon projects list ``` If you want to list projects in your organization, don't forget to set your organization context or specify `--org-id <org_id>`. Otherwise, you'll list the projects in your personal Neon account.
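For example, with an illustrative organization ID: ```bash neon projects list --org-id org-bright-sky-12345678 ```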
For more about the `projects` command, see [Neon CLI commands — projects](https://neon.com/docs/reference/cli-projects). ### Create a branch ```bash neon branches create --name <branch_name> ``` Set your project context or specify `--project-id <project_id>` if you have more than one Neon project. For more about the `branches` command, see [Neon CLI commands — branches](https://neon.com/docs/reference/cli-branches). ### Get a connection string This will give you the connection string for the default branch in your project: ```bash neon connection-string ``` For a specific branch, specify the branch name: ```bash neon connection-string <branch_name> ``` There's lots more you can do with the `connection-string` command. See [Neon CLI commands — connection-string](https://neon.com/docs/reference/cli-connection-string). ## Next steps Now that you're set up with the Neon CLI, you can: - Create more Neon projects with `neon projects create` - Manage your branches with various `neon branches` commands such as `reset`, `restore`, `rename`, `schema-diff`, and more - Create and manage databases with `neon databases` commands - Create and manage roles with `neon roles` commands - View the full set of Neon CLI commands available to you with `neon --help` For more details on all available commands, see the [CLI Reference](https://neon.com/docs/reference/neon-cli). --- # Source: https://neon.com/llms/reference-cli-roles.txt # Neon CLI commands — roles > The Neon CLI commands documentation for roles details how to manage user roles within the Neon database environment, including creating, listing, and deleting roles through command-line instructions. ## Source - [Neon CLI commands — roles HTML](https://neon.com/docs/reference/cli-roles): The original HTML version of this documentation ## Before you begin - Before running the `roles` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). For information about roles in Neon, see [Manage roles](https://neon.com/docs/manage/roles). ## The `roles` command The `roles` command allows you to list, create, and delete roles in a Neon project. ### Usage ```bash neon roles [options] ``` | Subcommand | Description | | ----------------- | ------------- | | [list](https://neon.com/docs/reference/cli-roles#list) | List roles | | [create](https://neon.com/docs/reference/cli-roles#create) | Create a role | | [delete](https://neon.com/docs/reference/cli-roles#delete) | Delete a role | ### list This subcommand allows you to list roles.
#### Usage ```bash neon roles list [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `list` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | | `--branch` | Branch ID or name | string | | If a branch ID or name is not provided, the command lists roles for the default branch of the project. #### Examples ```bash neon roles list ┌────────┬──────────────────────┐ │ Name │ Created At │ ├────────┼──────────────────────┤ │ daniel │ 2023-06-19T18:27:19Z │ └────────┴──────────────────────┘ ``` List roles with the `--output` format set to `json`: ```bash neon roles list --output json [ { "branch_id": "br-odd-frog-703504", "name": "daniel", "protected": false, "created_at": "2023-06-28T10:17:28Z", "updated_at": "2023-06-28T10:17:28Z" } ] ``` ### create This subcommand allows you to create a role. #### Usage ```bash neon roles create [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `create` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------- | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | | `--branch` | Branch ID or name | string | | | `--name` | The role name. Cannot exceed 63 bytes in length. | string | ✓ | | `--no-login` | Create a passwordless role that cannot log in | boolean | | If a branch ID or name is not provided, the command creates a role in the default branch of the project. #### Example ```bash neon roles create --name sally ┌───────┬──────────────────────┐ │ Name │ Created At │ ├───────┼──────────────────────┤ │ sally │ 2023-06-20T00:43:17Z │ └───────┴──────────────────────┘ ``` ### delete This subcommand allows you to delete a role. #### Usage ```bash neon roles delete <role> [options] ``` #### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `delete` subcommand supports these options: | Option | Description | Type | Required | | ---------------- | --------------------------------------------------------------------------------------------- | ------ | :-------------------------------------------------: | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | Project ID | string | Only if your Neon account has more than one project | | `--branch` | Branch ID or name | string | | If a branch ID or name is not provided, the command assumes the role resides in the default branch of the project.
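For example, to delete a role from a non-default branch, you could add the `--branch` option (the role and branch names here are illustrative): ```bash neon roles delete sally --branch development ```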
#### Example ```bash neon roles delete sally ┌───────┬──────────────────────┐ │ Name │ Created At │ ├───────┼──────────────────────┤ │ sally │ 2023-06-20T00:43:17Z │ └───────┴──────────────────────┘ ``` --- # Source: https://neon.com/llms/reference-cli-set-context.txt # Neon CLI commands — set-context > The Neon CLI commands documentation for "set-context" explains how to configure and switch between different project contexts within the Neon environment, facilitating efficient project management and workflow organization. ## Source - [Neon CLI commands — set-context HTML](https://neon.com/docs/reference/cli-set-context): The original HTML version of this documentation ## Before you begin - Before running the `set-context` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/neon-cli#install-the-neon-cli). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). ## The `set-context` command This command sets a background context for your CLI sessions, letting you perform project or branch-specific actions without having to specify the project ID in every command. Using the `--context-file` option, you can save the context to a file of your choice. If you don't specify a file, a default `.neon` file is saved to the current directory. You can switch contexts by providing different files. The context remains in place until you reset to a new context or remove the `context-file`. ### Usage #### set-context (hidden file) ```bash neon set-context [option] ``` #### set-context to context-file ```bash neon set-context [option] --context-file <file_path> ``` #### set-context during project creation You can also set context for a new project during project creation. ```bash neon projects create --name <project_name> --set-context ``` ### Options The `set-context` command requires you to set at least one of these options: | Option | Description | Type | Required | | ---------------- | ------------------ | ------ | :--------------------------------------------------------------------------------------------------: | | `--project-id` | Project ID | string | Sets the identified project as the context until you reset or remove context-file | | `--org-id` | Organization ID | string | Sets the organization context, which allows you to perform actions in the context of an organization | | `--context-file` | Path and file name | string | Creates a file that holds organization-id, project-id, and branch context | [Global options](https://neon.com/docs/reference/neon-cli#global-options) are also supported. ## Examples of setting and using a context Here are some examples of setting contexts to specific projects, then using them in an example command. ### Using the default file Set the context to the default `.neon` file: ```bash neon set-context --project-id patient-frost-50125040 --org-id org-bright-sky-12345678 ``` List all branches for this project using `branches list`.
There's no need to include `--project-id` or `--org-id`, even if you belong to multiple organizations or have multiple projects: ```bash neon branches list ``` The results show details for all branches in the `patient-frost-50125040` project within the `org-bright-sky-12345678` organization: ```bash ┌──────────────────────────┬─────────────┬─────────┬──────────────────────┬──────────────────────┐ │ Id │ Name │ Default │ Created At │ Updated At │ ├──────────────────────────┼─────────────┼─────────┼──────────────────────┼──────────────────────┤ │ br-raspy-meadow-26349337 │ development │ false │ 2023-11-28T19:19:11Z │ 2023-12-01T00:18:21Z │ ├──────────────────────────┼─────────────┼─────────┼──────────────────────┼──────────────────────┤ │ br-curly-bar-82389180 │ main │ true │ 2023-10-23T12:49:41Z │ 2023-12-01T00:18:21Z │ └──────────────────────────┴─────────────┴─────────┴──────────────────────┴──────────────────────┘ ``` ### Using a named `context-file` Set the context to the `context-file` of your choice: ```bash neon set-context --project-id plain-waterfall-84865553 --context-file Documents/MyContext ``` List all branches using the `branches list` command. No need to specify the project since the context file provides it. ```bash neon branches list --context-file Documents/MyContext ``` The results show details for all branches in the `plain-waterfall-84865553` project: ```bash ┌─────────────────────────────┬─────────────┬─────────┬──────────────────────┬──────────────────────┐ │ Id │ Name │ Default │ Created At │ Updated At │ ├─────────────────────────────┼─────────────┼─────────┼──────────────────────┼──────────────────────┤ │ br-soft-base-86343042 │ development │ false │ 2023-11-21T18:41:47Z │ 2023-12-01T00:00:14Z │ ├─────────────────────────────┼─────────────┼─────────┼──────────────────────┼──────────────────────┤ │ br-young-bush-89857627 │ main │ true │ 2023-11-21T18:00:10Z │ 2023-12-01T03:33:53Z │ ├─────────────────────────────┼─────────────┼─────────┼──────────────────────┼──────────────────────┤ │ br-billowing-union-41102466 │ staging │ false │ 2023-11-21T18:44:22Z │ 2023-12-01T08:32:40Z │ └─────────────────────────────┴─────────────┴─────────┴──────────────────────┴──────────────────────┘ ``` **Note**: These two `branches list` commands demonstrate the use of different contexts in the same account. The default `.neon` context is set to `patient-frost-50125040` while the named `context-file` is set to `plain-waterfall-84865553`. These contexts operate independently. You can set as many `context-files` as you'd like, using unique names or in different directories, depending on your needs. ### Setting context when creating a new project Let's say you want to create a new project called `MyLatest`. You can automatically set the project ID at the same time as you create the project. ```bash neon projects create --name MyLatest --set-context ``` This creates a hidden `.neon` file by default with the following context: ```json { "projectId": "quiet-water-76237589" } ``` You can now use any command that would normally require an additional `--project-id` parameter and the command will default to this context. ## Reset or remove context To reset or clear the current context, you have two options: 1. Run the `set-context` command with no options: ```bash neon set-context ``` 2.
Delete the `.neon` file (or your custom `--context-file`): ```bash rm .neon # Or for a custom context file: rm your_context_file ``` **Note**: Neon does not save any confidential information to the context file (for example, auth tokens). You can safely commit this file to your repository or share it with others. --- # Source: https://neon.com/llms/reference-cli-vpc.txt # Neon CLI commands — vpc > The Neon CLI commands documentation for vpc outlines how to manage VPC endpoints and project-level VPC endpoint restrictions for Private Networking within the Neon environment. ## Source - [Neon CLI commands — vpc HTML](https://neon.com/docs/reference/cli-vpc): The original HTML version of this documentation ## Before you begin - Before running a `vpc` command, ensure that you have [installed the Neon CLI](https://neon.com/docs/reference/cli-install). - If you have not authenticated with the [neon auth](https://neon.com/docs/reference/cli-auth) command, running a Neon CLI command automatically launches the Neon CLI browser authentication process. Alternatively, you can specify a Neon API key using the `--api-key` option when running a command. See [Connect](https://neon.com/docs/reference/neon-cli#connect). ## The `vpc` command You can use the `vpc` CLI command to manage [Private Networking](https://neon.com/docs/guides/neon-private-networking) configurations in Neon. The `vpc` command includes subcommands for managing VPC endpoints and project-level VPC endpoint restrictions. | Subcommand | Description | | :--------------------------------------- | :--------------------------------------------- | | [endpoint](https://neon.com/docs/reference/cli-vpc#the-vpc-endpoint-subcommand) | Manage VPC endpoints | | [project](https://neon.com/docs/reference/cli-vpc#the-vpc-project-subcommand) | Manage project-level VPC endpoint restrictions | ## The `vpc endpoint` subcommand The `vpc endpoint` subcommand lets you list, assign, remove, and get the status of VPC endpoints for a Neon organization. ### Usage | Subcommand | Description | | :------------ | :------------------------------------------------------------------------------------------------------------------------------------ | | `list` | List configured VPC endpoints for the Neon organization. | | `assign <id>` | Add or update a VPC endpoint in the Neon organization. The ID is the VPC endpoint ID. Aliases for this command are `add` and `update` | | `remove <id>` | Remove a VPC endpoint from the Neon organization. The ID is the VPC endpoint ID. A removed VPC endpoint cannot be added back. | | `status <id>` | Get the status of a VPC endpoint for the Neon organization. The ID is the VPC endpoint ID. | ### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `vpc endpoint` subcommand supports these options: | Option | Description | Type | Required | | :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----- | :-------------------------------------------------------------------------------------------------------------------------------- | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--org-id` | Organization ID | string | Only if the user has more than one organization. If not specified, and the user has only one organization, that `org_id` is used.
| | `--region-id` | The region ID. Possible values: `aws-us-west-2`, `aws-ap-southeast-1`, `aws-ap-southeast-2`, `aws-eu-central-1`, `aws-us-east-2`, `aws-us-east-1`, `azure-eastus2` | string | yes | ### Examples - **List VPC endpoints** Retrieve a list of all configured VPC endpoints for a specific Neon organization. ```bash neon vpc endpoint list --org-id org-bold-bonus-12345678 ``` - **Assign a VPC endpoint** Add or update a VPC endpoint for a specific Neon organization and region. ```bash neon vpc endpoint assign vpce-1234567890abcdef0 --org-id org-bold-bonus-12345678 --region-id aws-us-east-1 ``` After assigning a VPC endpoint to a Neon organization, client connections will be accepted from the corresponding VPC for all projects in the Neon organization, unless restricted. Aliases for this command are `add` and `update`. - **Remove a VPC endpoint** Delete an existing VPC endpoint from a specific Neon organization. ```bash neon vpc endpoint remove vpce-1234567890abcdef0 --org-id org-bold-bonus-12345678 ``` **Note**: A removed VPC endpoint cannot be added back to the Neon organization. - **Get the status of a VPC endpoint** Check the status of a specific VPC endpoint in a Neon organization. ```bash neon vpc endpoint status vpce-1234567890abcdef0 --org-id org-bold-bonus-12345678 ``` ## The `vpc project` subcommand The `vpc project` subcommand lets you list, configure, or remove VPC endpoint restrictions, which limit access to specific projects in your Neon organization to designated VPC endpoints. ### Usage | Subcommand | Description | | :-------------- | :------------------------------------------------------------------------------------------------------------- | | `list` | List all VPC endpoint restrictions for a specific project. | | `restrict <id>` | Configure or update a VPC endpoint restriction for a project. The ID is the VPC endpoint ID. [Alias: `update`] | | `remove <id>` | Remove a VPC endpoint restriction from a project. The ID is the VPC endpoint ID. | ### Options In addition to the Neon CLI [global options](https://neon.com/docs/reference/neon-cli#global-options), the `vpc project` subcommand supports these options: | Option | Description | Type | Required | | :--------------- | :-------------------------------------------------------------------------------------------- | :----- | :------- | | `--context-file` | [Context file](https://neon.com/docs/reference/cli-set-context#using-a-named-context-file) path and file name | string | | | `--project-id` | The Project ID. | string | yes | ### Examples - **List project-level VPC endpoint restrictions** List all VPC endpoint restrictions for the specified Neon project. ```bash neon vpc project list --project-id orange-credit-12345678 ``` - **Restrict connections to a specific VPC** Configure or update a VPC endpoint restriction for a Neon project. When a VPC endpoint ID is assigned as a restriction, the specified project only accepts connections from the specified VPC. ```bash neon vpc project restrict vpce-1234567890abcdef0 --project-id orange-credit-12345678 ``` - **Remove a VPC endpoint restriction** Remove a VPC endpoint restriction from a specific Neon project.
```bash
neon vpc project remove vpce-1234567890abcdef0 --project-id orange-credit-12345678
```

---

# Source: https://neon.com/llms/reference-compatibility.txt

# Postgres compatibility

> The document outlines Neon's compatibility with PostgreSQL, detailing supported features, extensions, and any limitations or differences to ensure seamless integration and functionality for users transitioning from PostgreSQL to Neon.

## Source

- [Postgres compatibility HTML](https://neon.com/docs/reference/compatibility): The original HTML version of this documentation

**Neon is Postgres**. However, as a managed Postgres service, there are some differences you should be aware of.

## Postgres versions

Neon supports Postgres 14, 15, 16, 17, and 18 (preview), as per the [Neon version support policy](https://neon.com/docs/postgresql/postgres-version-policy). You can select the Postgres version you want to use when creating a Neon project. For information about creating a Neon project, see [Manage projects](https://neon.com/docs/manage/projects). Minor Postgres point releases are rolled out by Neon after extensive validation as part of regular platform maintenance.

## Postgres extensions

Neon supports numerous Postgres extensions, and we regularly add support for more. For the extensions that Neon supports, see [Postgres Extensions](https://neon.com/docs/extensions/pg-extensions). To request support for additional extensions, please reach out to us on our [Discord Server](https://discord.gg/92vNTzKDGp). Please keep in mind that privilege requirements, local file system access, and functionality that is incompatible with Neon features such as Autoscaling and Scale to Zero may prevent Neon from supporting certain extensions.

## Roles and permissions

Neon is a managed Postgres service, so you cannot access the host operating system, and you can't connect using the Postgres `superuser` account. In place of the Postgres superuser role, Neon provides a `neon_superuser` role. Roles created in the Neon Console, CLI, or API, including the default role created with a Neon project, are granted membership in the `neon_superuser` role. For information about the privileges associated with this role, see [The neon_superuser role](https://neon.com/docs/manage/roles#the-neonsuperuser-role).

Roles created in Neon with SQL syntax, from a command-line tool like `psql` or the [Neon SQL Editor](https://neon.com/docs/connect/query-with-psql-editor), have the same privileges as newly created roles in a standalone Postgres installation. These roles are not granted membership in the `neon_superuser` role. You must grant these roles the privileges you want them to have. For more information, see [Manage roles with SQL](https://neon.com/docs/manage/roles#manage-roles-with-sql).

Neon roles cannot install Postgres extensions other than those supported by Neon.

## Postgres parameter settings

The following table shows parameter settings that are set explicitly for your Neon Postgres instance. These values may differ from standard Postgres defaults, and a few settings differ based on your Neon compute size.

**Note**: Because Neon is a managed Postgres service, Postgres parameters are not user-configurable outside of a [session, database, or role context](https://neon.com/docs/reference/compatibility#configuring-postgres-parameters-for-a-session-database-or-role).
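You can confirm the values currently in effect on your compute with standard Postgres introspection; for example:

```sql
-- Check a single parameter
SHOW max_connections;

-- Or inspect several parameters at once
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'maintenance_work_mem');
```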
If you are a Neon [Scale plan](https://neon.com/docs/introduction/plans) user and require a different Postgres instance-level setting, you can contact [Neon Support](https://neon.com/docs/introduction/support) to see if the desired setting can be supported. Please keep in mind that it may not be possible to support some parameters due to platform limitations and constraints.

| Parameter | Value | Note |
| --------- | ----- | ---- |
| `client_connection_check_interval` | 60000 | |
| `dynamic_shared_memory_type` | mmap | |
| `effective_io_concurrency` | 20 | |
| `effective_cache_size` | | Set based on the [Local File Cache (LFC)](https://neon.com/docs/reference/glossary#local-file-cache) size of your maximum Neon compute size |
| `fsync` | off | Neon syncs data to the Neon Storage Engine to store your data safely and reliably |
| `hot_standby` | off | |
| `idle_in_transaction_session_timeout` | 300000 | |
| `listen_addresses` | '\*' | |
| `log_connections` | on | |
| `log_disconnections` | on | |
| `log_min_error_statement` | panic | |
| `log_temp_files` | 1048576 | |
| `maintenance_work_mem` | 65536 | The value differs by compute size. See [below](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). |
| `max_connections` | 112 | The value differs by compute size. See [below](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). |
| `max_parallel_workers` | 8 | |
| `max_replication_flush_lag` | 10240 | |
| `max_replication_slots` | 10 | |
| `max_replication_write_lag` | 500 | |
| `max_wal_senders` | 10 | |
| `max_wal_size` | 1024 | |
| `max_worker_processes` | 26 | The value differs by compute size. See [below](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). |
| `password_encryption` | scram-sha-256 | |
| `restart_after_crash` | off | |
| `shared_buffers` | 128MB | Neon uses a [Local File Cache (LFC)](https://neon.com/docs/extensions/neon#what-is-the-local-file-cache) in addition to `shared_buffers` to extend cache memory to 75% of your compute's RAM. The value differs by compute size. See [below](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). |
| `superuser_reserved_connections` | 4 | |
| `synchronous_standby_names` | 'walproposer' | |
| `wal_level` | replica | Support for `wal_level=logical` is coming soon. See [logical replication](https://neon.com/docs/introduction/logical-replication). |
| `wal_log_hints` | off | |
| `wal_sender_timeout` | 10000 | |

### Parameter settings that differ by compute size

Of the parameter settings listed above, `max_connections`, `maintenance_work_mem`, `shared_buffers`, `max_worker_processes`, and `effective_cache_size` differ by your compute size—defined in [Compute Units (CU)](https://neon.com/docs/reference/glossary#compute-unit-cu)—or by your autoscaling configuration, which has a minimum and maximum compute size. To understand how values are set, see the formulas below.
- The formula for `max_connections` is:

  ```go
  compute_size = min(max_compute_size, 8 * min_compute_size)
  max_connections = max(100, min(4000, 450.5 * compute_size))
  ```

  For example, if you have a fixed compute size of 4 CU, that size is both your `max_compute_size` and `min_compute_size`. Inputting that value into the formula gives you a `max_connections` setting of 1802. For an autoscaling configuration with a `min_compute_size` of 0.25 CU and a `max_compute_size` of 2 CU, the `max_connections` setting would be 901.

  **Note**: `max_connections` does not scale dynamically in an autoscaling configuration. It's a static setting determined by your minimum and maximum compute size.

  You can also check your `max_connections` setting in the Neon Console. Go to **Branches**, select your branch, then go to the **Compute** tab and select **Edit**. Your `max_connections` setting is the "direct connections" value. You can adjust the compute configuration to see how it impacts the number of direct connections.

  _You can use connection pooling in Neon to increase the number of supported connections. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling)._

- The `maintenance_work_mem` value is set according to the RAM of your minimum compute size. The formula, which yields a value in KB (65536 KB = 64 MB), is:

  ```go
  maintenance_work_mem = max(min_compute_size_ram_in_bytes * 1024 / 63963136, 65536)
  ```

  However, you can increase the setting for the current session; for example:

  ```sql
  SET maintenance_work_mem='10 GB';
  ```

  If you do increase `maintenance_work_mem`, your setting should not exceed 60 percent of your compute's available RAM.

  | Compute Units (CU) | vCPU | RAM | maintenance_work_mem |
  | :--- | :--- | :--- | :--- |
  | 0.25 | 0.25 | 1 GB | 64 MB |
  | 0.50 | 0.50 | 2 GB | 64 MB |
  | 1 | 1 | 4 GB | 67 MB |
  | 2 | 2 | 8 GB | 134 MB |
  | 3 | 3 | 12 GB | 201 MB |
  | 4 | 4 | 16 GB | 268 MB |
  | 5 | 5 | 20 GB | 335 MB |
  | 6 | 6 | 24 GB | 402 MB |
  | 7 | 7 | 28 GB | 470 MB |
  | 8 | 8 | 32 GB | 537 MB |
  | 9 | 9 | 36 GB | 604 MB |
  | 10 | 10 | 40 GB | 671 MB |
  | 11 | 11 | 44 GB | 738 MB |
  | 12 | 12 | 48 GB | 805 MB |
  | 13 | 13 | 52 GB | 872 MB |
  | 14 | 14 | 56 GB | 939 MB |
  | 15 | 15 | 60 GB | 1007 MB |
  | 16 | 16 | 64 GB | 1074 MB |
  | 18 | 18 | 72 GB | 1208 MB |
  | 20 | 20 | 80 GB | 1342 MB |
  | 22 | 22 | 88 GB | 1476 MB |
  | 24 | 24 | 96 GB | 1610 MB |
  | 26 | 26 | 104 GB | 1744 MB |
  | 28 | 28 | 112 GB | 1878 MB |
  | 30 | 30 | 120 GB | 2012 MB |
  | 32 | 32 | 128 GB | 2146 MB |
  | 34 | 34 | 136 GB | 2280 MB |
  | 36 | 36 | 144 GB | 2414 MB |
  | 38 | 38 | 152 GB | 2548 MB |
  | 40 | 40 | 160 GB | 2682 MB |
  | 42 | 42 | 168 GB | 2816 MB |
  | 44 | 44 | 176 GB | 2950 MB |
  | 46 | 46 | 184 GB | 3084 MB |
  | 48 | 48 | 192 GB | 3218 MB |
  | 50 | 50 | 200 GB | 3352 MB |
  | 52 | 52 | 208 GB | 3486 MB |
  | 54 | 54 | 216 GB | 3620 MB |
  | 56 | 56 | 224 GB | 3754 MB |

- The formula for `max_worker_processes` is:

  ```go
  max_worker_processes = 12 + floor(2 * max_compute_size)
  ```

  For example, if your `max_compute_size` is 4 CU, your `max_worker_processes` setting would be 20.

- The formula for `shared_buffers` is:

  ```go
  backends = 1 + max_connections + max_worker_processes
  shared_buffers_mb = max(128, (1023 + backends * 256) / 1024)
  ```

- The `effective_cache_size` parameter is set based on the [Local File Cache (LFC)](https://neon.com/docs/reference/glossary#local-file-cache) size of your maximum Neon compute size. This helps the Postgres query planner make smarter decisions, which can improve query performance. For details on LFC size by compute size, see the table in [How to size your compute](https://neon.com/docs/manage/computes#how-to-size-your-compute).
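These formulas are easy to sanity-check with plain SQL. The following worked example is purely illustrative (it is not how Neon computes the values internally) and plugs in a hypothetical autoscaling configuration with a 0.25 CU minimum and a 2 CU maximum:

```sql
-- Worked example for min_compute_size = 0.25 CU, max_compute_size = 2 CU
SELECT
  least(2, 8 * 0.25)                                   AS compute_size,         -- 2
  greatest(100, least(4000, 450.5 * least(2, 8 * 0.25))) AS max_connections,    -- 901
  12 + floor(2 * 2)                                    AS max_worker_processes, -- 16
  greatest(128, (1023 + (1 + 901 + 16) * 256) / 1024)  AS shared_buffers_mb;    -- ~230 (integer division)
```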
### Configuring Postgres parameters for a session, database, or role

Neon permits configuring parameters that have a `user` context, meaning that these parameters can be set for a session, database, or role. You can identify Postgres parameters with a `user` context by running the following query:

```sql
SELECT name
FROM pg_settings
WHERE context = 'user';
```

To set a parameter for a specific session, use a [SET](https://www.postgresql.org/docs/current/sql-set.html) command. For example, the `maintenance_work_mem` parameter supports a `user` context, which lets you set it for the current session with a `SET` command:

```sql
SET maintenance_work_mem='1 GB';
```

To set parameters for a database or role:

```sql
ALTER DATABASE neondb SET maintenance_work_mem='1 GB';
```

```sql
ALTER USER neondb_owner SET maintenance_work_mem='1 GB';
```

## Tablespaces

Neon does not support PostgreSQL [tablespaces](https://www.postgresql.org/docs/current/manage-ag-tablespaces.html). Attempting to create a tablespace with the `CREATE TABLESPACE` command will result in an error. This is due to Neon's managed cloud architecture, which does not permit direct file system access for custom storage locations. If you have existing applications or scripts that use tablespaces for organizing database objects across different storage devices, you'll need to remove or modify these references when migrating to Neon.

## Postgres logs

Postgres logs can be accessed through the [Datadog](https://neon.com/docs/guides/datadog) or [OpenTelemetry](https://neon.com/docs/guides/opentelemetry) integration on the Scale plan. The integration forwards logs including error messages, database connection events, system notifications, and general PostgreSQL logs. For other plans or if you need specific log information for troubleshooting purposes, please contact [Neon Support](https://neon.com/docs/introduction/support).

## Unlogged tables

Unlogged tables are tables that do not write to the Postgres write-ahead log (WAL). In Neon, these tables are stored on compute local storage and are not persisted across compute restarts or when a compute scales to zero. This is unlike standard Postgres, where unlogged tables are only truncated in the event of abnormal process termination. Additionally, unlogged tables are limited by compute local disk space. Computes allocate 20 GiB of local disk space or 15 GiB x the maximum compute size (whichever is higher) for temporary files used by Postgres.

## Temporary tables

Temporary tables are tied to a session (or optionally a transaction). They exist only for the lifetime of the session or transaction and are automatically dropped when it ends. Like unlogged tables, they are stored on compute local storage and limited by compute local disk space.

## Memory

SQL queries and index builds can generate large volumes of data that may not fit in memory. In Neon, the size of your compute determines the amount of memory that is available. For information about compute size and available memory, see [How to size your compute](https://neon.com/docs/manage/endpoints#how-to-size-your-compute).
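A large index build is a common case where extra session memory helps. As a hedged illustration (the `events` table and the index name are placeholders), you might raise `maintenance_work_mem` for the current session before creating the index:

```sql
-- Raise the limit for this session only; stay under ~60% of your compute's RAM
SET maintenance_work_mem='1 GB';

-- A hypothetical large index build that benefits from the extra memory
CREATE INDEX CONCURRENTLY idx_events_created_at ON events (created_at);
```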
## Session context The Neon cloud service automatically closes idle connections after a period of inactivity, as described in [Compute lifecycle](https://neon.com/docs/conceptual-guides/compute-lifecycle/). When connections are closed, anything that exists within a session context is forgotten and must be recreated before being used again. For example, parameters set for a specific session, in-memory statistics, temporary tables, prepared statements, advisory locks, and notifications and listeners defined using [NOTIFY](https://www.postgresql.org/docs/current/sql-notify.html)/[LISTEN](https://www.postgresql.org/docs/current/sql-listen.html) commands only exist for the duration of the current session and are lost when the session ends. To avoid losing session-level contexts in Neon, you can disable Neon's [Scale to Zero](https://neon.com/docs/guides/scale-to-zero-guide) feature, which is possible on any of Neon's paid plans. However, disabling scale to zero also means that your compute will run 24/7. You can't disable scale to zero on Neon's Free plan, where your compute always suspends after 5 minutes of inactivity. ## Statistics collection Statistics collected by the Postgres [cumulative statistics system](https://www.postgresql.org/docs/current/monitoring-stats.html) are not saved when a Neon compute (where Postgres runs) is suspended due to inactivity or restarted. For information about the lifecycle of a Neon compute, see [Compute lifecycle](https://neon.com/docs/conceptual-guides/compute-lifecycle/). For information about configuring Neon's scale to zero behavior, see [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero). ## Database encoding Neon supports UTF8 encoding (Unicode, 8-bit variable-width encoding). This is the most widely used and recommended encoding for Postgres. To view the encoding and collation for your database, you can run the following query: ```sql SELECT pg_database.datname AS database_name, pg_encoding_to_char(pg_database.encoding) AS encoding, pg_database.datcollate AS collation, pg_database.datctype AS ctype FROM pg_database WHERE pg_database.datname = 'your_database_name'; ``` You can also issue this command from [psql](https://neon.com/docs/connect/query-with-psql-editor) or the Neon SQL Editor: ```bash \l ``` **Note**: In Postgres, you cannot change a database's encoding or collation after it has been created. ## Collation support A collation is an SQL schema object that maps an SQL name to locales provided by libraries installed in the operating system. A collation has a provider that specifies which library supplies the locale data. For example, a common standard provider, `libc`, uses locales provided by the operating system C library. By default, Neon uses the `C.UTF-8` collation. `C.UTF-8` supports the full range of UTF-8 encoded characters. Another provider supported by Neon is `icu`, which uses the external [ICU](https://icu.unicode.org/) library. In Neon, support for standard `libc` locales is limited compared to what you might find in a locally installed Postgres instance where there's typically a wider range of locales provided by libraries installed on your operating system. For this reason, Neon provides a full series of [predefined icu locales](https://www.postgresql.org/docs/current/collation.html#COLLATION-MANAGING-PREDEFINED-ICU) in case you require locale-specific sorting or case conversions. 
To view all of the predefined locales available to you, use the query `SELECT * FROM pg_collation`, or the command `\dOS+` from the [Neon SQL Editor](https://neon.com/docs/connect/query-with-psql-editor) or an SQL client like [psql](https://neon.com/docs/connect/query-with-psql-editor).

To create a database with a predefined `icu` locale, you can issue a query similar to this one with your preferred locale:

```sql
CREATE DATABASE my_arabic_db LOCALE_PROVIDER icu icu_locale 'ar-x-icu' template template0;
```

To specify the locale for individual columns, you can use this syntax:

```sql
CREATE TABLE my_ru_table (
  id serial PRIMARY KEY,
  russian_text_column text COLLATE "ru-x-icu",
  description text
);
```

ICU also supports creating custom collations. For more information, see [ICU Custom Collations](https://www.postgresql.org/docs/current/collation.html#ICU-CUSTOM-COLLATIONS). For more about collations in Postgres, see [Collation Support](https://www.postgresql.org/docs/current/collation.html#COLLATION).

## track_commit_timestamp parameter

The `track_commit_timestamp` Postgres parameter is currently not supported in Neon due to platform constraints.

---

# Source: https://neon.com/llms/reference-feeds.txt

# Neon RSS feeds

> The Neon RSS feeds documentation outlines how users can subscribe to and manage RSS feeds for updates on Neon's database services, enabling efficient monitoring of changes and new features.

## Source

- [Neon RSS feeds HTML](https://neon.com/docs/reference/feeds): The original HTML version of this documentation

Stay updated with the latest information and announcements from Neon by subscribing to our RSS feeds. You can monitor the Neon Changelog, blog posts, and Neon status updates through your preferred RSS reader or [Slack channel](https://neon.com/docs/reference/feeds#subscribe-to-feeds-in-slack).

## Changelog

Keep track of new features, improvements, and fixes by subscribing to the [Neon Changelog](https://neon.com/docs/changelog) RSS feed.

```bash
https://neon.com/docs/changelog/rss.xml
```

## Blog

Stay informed on the latest articles and news by following the [Neon Blog](https://neon.com/blog) RSS feed.

```bash
https://neon.com/blog/rss.xml
```

## Community Guides

Get the latest tips, tutorials, and best practices by subscribing to the [Neon Community Guides](https://neon.com/guides) RSS feed.

```bash
https://neon.com/guides/rss.xml
```

## Status

Monitor the operational status of Neon across different regions by subscribing to the appropriate [Neon Status](https://neonstatus.com/) RSS feed.

- **AWS US East (N. Virginia)**

  ```bash
  https://neonstatus.com/aws-us-east-n-virginia/feed.rss
  ```

- **AWS US East (Ohio)**

  ```bash
  https://neonstatus.com/aws-us-east-ohio/feed.rss
  ```

- **AWS US West (Oregon)**

  ```bash
  https://neonstatus.com/aws-us-west-oregon/feed.rss
  ```

- **AWS Europe (Frankfurt)**

  ```bash
  https://neonstatus.com/aws-europe-frankfurt/feed.rss
  ```

- **AWS Asia Pacific (Singapore)**

  ```bash
  https://neonstatus.com/aws-asia-pacific-singapore/feed.rss
  ```

- **AWS Asia Pacific (Sydney)**

  ```bash
  https://neonstatus.com/aws-asia-pacific-sydney/feed.rss
  ```

## Subscribe to feeds in Slack

To receive updates in Slack, enter the `/feed subscribe` command with the desired RSS feed into your Slack channel:

```bash
/feed subscribe https://neon.com/docs/changelog/rss.xml
```

## Remove feeds from Slack

To remove feeds from Slack, enter the `/feed list` command and note the feed ID number. Enter `/feed remove [ID number]` to remove the feed.
--- # Source: https://neon.com/llms/reference-glossary.txt # Glossary > The "Glossary" document defines key terms and concepts relevant to Neon's database platform, aiding users in understanding specific terminology used within Neon's technical documentation. ## Source - [Glossary HTML](https://neon.com/docs/reference/glossary): The original HTML version of this documentation ## access token See [Token](https://neon.com/docs/reference/glossary#token). ## active hours A usage metric that tracks the amount of time a compute is active, rather than idle when suspended due to inactivity. The time that your compute is idle is not counted toward compute usage. Also see [Compute hours](https://neon.com/docs/reference/glossary#compute-hours). ## Activity Monitor A process that monitors a Neon compute for activity. During periods of inactivity, the Activity Monitor gracefully places the compute into an idle state to save energy and resources. The Activity Monitor closes idle connections after 5 minutes of inactivity. When a connection is made to an idle compute, the Activity Monitor reactivates the compute. ## Admin An [Organizations](https://neon.com/docs/reference/glossary#organization) role in Neon with full access to all projects, permissions, invitations, and billing for an organization. Admins can manage members, assign roles, set permissions, and delete the organization. ## API See [Neon API](https://neon.com/docs/reference/glossary#neon-api). ## API Key A unique identifier used to authenticate a user or a calling program to an API. An API key is required to authenticate to the Neon API. For more information, see [Manage API keys](https://neon.com/docs/manage/api-keys). ## apply_config A Neon Control Plane operation that applies a new configuration to a Neon object or resource. For example, creating, deleting, or updating Postgres users and databases initiates this operation. See [System operations](https://neon.com/docs/manage/operations) for more information. ## Archive storage Cost-efficient storage where Neon archives inactive branches after a defined threshold. For Neon projects created in AWS regions, inactive branches are archived in Amazon S3 storage. For Neon projects created in Azure regions, branches are archived in Azure Blob storage. ## autoscaler-agent A control mechanism in the Neon autoscaling system that collects metrics from VMs, makes scaling decisions, and performs checks and requests to implement those decisions. ## Autoscaling A feature that automatically adjusts the allocation of vCPU and RAM for compute within specified minimum and maximum compute size boundaries, optimizing for performance and cost-efficiency. For information about how Neon implements the _Autoscaling_ feature, see [Autoscaling](https://neon.com/docs/introduction/autoscaling). ## Availability Checker A periodic load generated by the Control Plane to determine if a compute can start and read and write data. The Availability Checker queries a system database without accessing user data. You can monitor these checks, how long they take, and how often they occur, on the **Systems operations** tab on the **Monitoring** page in the Neon Console. ## backpressure A mechanism that manages the lag between the Pageserver and compute node or the Pageserver and Write-Ahead Log (WAL) service. If the WAL service runs ahead of the Pageserver, the time to serve page requests increases, which could result in increased query times or timeout errors. 
The backpressure mechanism manages lag using a stop-and-wait backend throttling strategy.

## backup branch

A branch created by an [instant restore](https://neon.com/docs/reference/glossary#branch-restore) operation. When you restore a branch from a particular point in time, the current branch is saved as a backup branch.

## branch

An isolated copy of data, similar to a Git branch. Data includes databases, schemas, tables, records, indexes, roles — everything that comprises data in a Postgres instance. Just as a Git branch allows developers to work on separate features or fixes without impacting their main line of code, a Neon branch enables users to modify a copy of their data in isolation from their main line of data. This approach facilitates parallel database development, testing, and other features, similar to Git's code branching system.

Each Neon project is created with two branches by default:

- **production** - The default branch. This main line of data is referred to as the [root branch](https://neon.com/docs/reference/glossary#root-branch).
- **development** - A child branch of production.

A branch created from the root branch or another branch is a [copy-on-write](https://neon.com/docs/reference/glossary#copy-on-write) clone. You can create a branch from the current or past state of another branch. A branch created from the current state of another branch includes the data that existed on that branch at the time of branch creation. A branch created from a past state of another branch includes the data that existed in the past state.

Connecting to a database on a branch requires connecting via a compute attached to the branch. See [Connect to a branch](https://neon.com/docs/manage/branches#connect-to-a-branch).

## branch archiving

The automatic archiving of inactive branches in cost-efficient archive storage after a defined threshold. For more, see [Branch archiving](https://neon.com/docs/guides/branch-archiving).

## Branching

A Neon feature that allows you to create an isolated copy of your data for parallel database development, testing, and other purposes, similar to branching in Git. See [Branch](https://neon.com/docs/reference/glossary#branch).

## Business plan

A [legacy paid plan](https://neon.com/docs/introduction/legacy-plans) designed for mid-to-large enterprises that require higher compute capacity and advanced security and compliance features. See [Neon plans](https://neon.com/docs/introduction/plans).

## check_availability

A Neon Control Plane operation that checks the availability of data in a branch and that a compute can start on a branch. Branches without a compute are not checked. This operation, performed by the availability checker, is a periodic load generated by the Control Plane. You can monitor these checks, how long they take, and how often they occur, on the **Systems operations** tab on the **Monitoring** page in the Neon Console.

## child branch

A [branch](https://neon.com/docs/reference/glossary#branch) in a Neon project that is created from a [root branch](https://neon.com/docs/reference/glossary#root-branch) or another branch. The source branch is considered the parent branch.

## CI/CD

Continuous integration and continuous delivery or continuous deployment.

## CIDR notation

CIDR (Classless Inter-Domain Routing) notation is a method used to define ranges of IP addresses in network management. It is presented in the format of an IP address, followed by a slash, and then a number (e.g., 203.0.113.0/24).
The number after the slash represents the size of the address block, providing a compact way to specify a large range of IP addresses. In Neon's IP Allow feature, CIDR notation allows for efficiently specifying a block of IP addresses, especially useful for larger networks or subnets. This can be advantageous when managing access to branches with numerous potential users, such as in a large development team or a company-wide network. For related information, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow). ## cgroups Control groups, a Linux kernel feature that allows the organization, prioritization, and accounting of system resources for groups of processes. ## Collaborator A role in Neon with limited access to specific projects shared with them. Shared projects appear under the "Shared with you" section in their personal account. ## Compute A service that provides virtualized computing resources, equipped with an operating system, a specified number of virtual CPUs (vCPUs), and a defined amount of RAM. It provides the processing power and resources for running applications. In the context of Neon, a compute runs Postgres and includes supporting components and extensions. A [compute endpoint](https://neon.com/docs/reference/glossary#compute-endpoint) is the access point for connecting to a Neon compute. Neon creates a primary read-write compute for the project's default branch. Neon supports both read-write and [read replica](https://neon.com/docs/introduction/read-replicas) computes. A branch can have a single primary (read-write) compute but supports multiple read replica computes. The compute hostname is required to connect to a Neon Postgres database from a client or application. ## compute endpoint The network access point for connecting to a [Neon compute](https://neon.com/docs/reference/glossary#compute). In Neon, a compute endpoint is represented by a hostname, such as `ep-aged-math-668285.us-east-2.aws.neon.tech`, which directs traffic to the appropriate Neon compute. Additional attributes further define a compute endpoint, including `project_id`, `region_id`, `branch_id`, and `type`. These attributes specify the associated Neon project, branch, cloud service region, and whether the endpoint is read-write or read-only. For additional endpoint attributes, refer to the [Neon API](https://api-docs.neon.tech/reference/createprojectendpoint). ## compute size The Compute Units (CU) that are allocated to a Neon compute. A Neon compute can have anywhere from .25 to 56 CU. The number of units determines the processing capacity of the compute. ## Compute Unit (CU) A unit that measures the processing power or "size" of a Neon compute. A Compute Unit (CU) includes vCPU and RAM. A Neon compute can have anywhere from .25 to 56 CUs. See [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration). ## compute hours A usage metric for tracking compute usage. 1 compute hour is equal to 1 [active hour](https://neon.com/docs/reference/glossary#active-hours) for a compute with 1 vCPU. If you have a compute with .25 vCPU, as you would on the Neon Free plan, it would require 4 _active hours_ to use 1 compute hour. On the other hand, if you have a compute with 4 vCPU, it would only take 15 minutes to use 1 compute hour. 
To calculate compute hour usage, you would use the following formula:

```
compute hours = compute size * active hours
```

Also see [Active hours](https://neon.com/docs/reference/glossary#active-hours).

## connection pooling

A method of creating a pool of connections and caching those connections for reuse. Neon supports `PgBouncer` in `transaction mode` for connection pooling. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling).

## connection string

A string containing details for connecting to a Neon Postgres database. The details include a user name (role), compute hostname, and database name; for example:

```bash
postgresql://alex:AbC123dEf@ep-cool-darkness-123456.c-2.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require
```

The compute hostname includes an `endpoint_id` (`ep-cool-darkness-123456`), a region slug (`c-2.us-east-2`), the cloud platform (`aws`), and the Neon domain (`neon.tech`). Connection strings in some AWS regions may include a cell identifier (e.g., `c-2`) in the region slug to support scalability in Neon's high-demand regions.

Connection strings for Neon databases can be obtained by clicking the **Connect** button on your **Project Dashboard**. For information about connecting to Neon, see [Connect from any application](https://neon.com/docs/connect/connect-from-any-app).

## console

See [Neon Console](https://neon.com/docs/reference/glossary#neon-console).

## Control Plane

The part of the Neon architecture that manages cloud storage and compute resources.

## copy-on-write

A technique used to copy data efficiently. Neon uses the copy-on-write technique when creating [branches](https://neon.com/docs/reference/glossary#branch). When a branch is created, data is marked as shared rather than physically duplicated. Parent and child branches refer to the same physical data resource. Data is only physically copied when a write occurs. The affected portion of data is copied and the write is performed on the copied data.

## create_branch

A Neon Control Plane operation that creates a branch in a Neon project. For related information, see [Manage branches](https://neon.com/docs/manage/branches). See [System operations](https://neon.com/docs/manage/operations) for more information.

## create_timeline

Sets up storage and creates the default branch when a Neon [project](https://neon.com/docs/reference/glossary#project) is created. See [System operations](https://neon.com/docs/manage/operations) for more information.

## data-at-rest encryption

A method of storing inactive data that converts plaintext data into a coded form or cipher text, making it unreadable without an encryption key. Neon stores inactive data in [NVMe SSD volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html#nvme-ssd-volumes). The data on NVMe instance storage is encrypted using an XTS-AES-256 block cipher implemented in a hardware module on the instance.

## Data transfer

A usage metric that measures the total volume of data transferred out of Neon (egress) during a billing period. Egress also includes data transferred from Neon via Postgres logical replication to any destination, including Neon itself. Free plan projects are limited to 5 GB per month.

## Database

A named collection of database objects. A Neon project is created with a database that resides in the default `public` schema. If you do not specify a name for the database when creating a Neon project, it's created with the name `neondb`. A Neon project can contain multiple databases.
Users cannot manipulate system databases, such as the `postgres`, `template0`, or `template1` databases.

## database branching

See [Branching](https://neon.com/docs/reference/glossary#branching).

## database fleet

A collection of database instances, typically managed as a single entity.

## decoder plugin

Utilized in PostgreSQL replication architecture to decode WAL entries into a format understandable by the subscriber. The `pgoutput` decoder plugin is the default decoder, with alternatives like `wal2json` for specific use cases. Neon supports `pgoutput` and `wal2json`. See [Postgres logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts).

## dedicated resources

Resources including compute and storage dedicated to a single Neon account.

## delete_tenant

A Neon Control Plane operation that deletes stored data when a Neon project is deleted. See [System operations](https://neon.com/docs/manage/operations) for more information.

## Endpoint ID

A string that identifies a Neon compute endpoint. Neon Endpoint IDs are generated as memorable, Heroku-like random names, similar to `ep-calm-flower-a5b75h79`. These names are always prefixed by `ep` for "endpoint". You can find your Endpoint ID by navigating to your project in the Neon Console, selecting **Branches** from the sidebar, and clicking on a branch. The **Endpoint ID** is shown in the table under the **Computes** heading.

## Egress

The data transferred out of the Neon service to an external destination. See [Data transfer](https://neon.com/docs/reference/glossary#data-transfer).

## Enterprise plan

A legacy paid plan offered by Neon. See [Neon plans](https://neon.com/docs/introduction/plans).

## Free plan

See [Neon plans](https://neon.com/docs/introduction/plans) for details about the Free plan.

## GB-month

In Neon, **GB-month** is a unit of measure representing the storage of 1 gigabyte (GB) of data for one month. A gigabyte is defined as 10^9 bytes (1,000,000,000 bytes). Storage usage is measured periodically and accumulated over the billing period. At the start of each billing period, GB-month usage resets to zero.

GB-month usage reflects both the amount of storage used and how long it was used. For example, storing 10 GB for an entire month results in **10 GB-months**, while storing 10 GB for half a month results in **5 GB-months**. Deleting data will reduce the rate at which GB-month usage increases from that point forward, but it does not decrease the GB-month usage accrued up to that point.

## History

The history of data changes for all branches in your Neon project. A history is maintained to support _instant restore_.

## Instant restore

Restoration of data to a state that existed at an earlier time. Neon retains a history of changes in the form of Write-Ahead-Log (WAL) records, which allows you to restore data to an earlier point. For more information about this feature, see [Branching — Instant restore](https://neon.com/docs/introduction/branch-restore).

## IP Allow

A Neon feature used to control which IP addresses can access databases in a Neon project, often utilized to restrict public internet access. See [IP Allow](https://neon.com/docs/introduction/ip-allow).

## IP allowlist

An IP allowlist is a security measure used in network and database management. It specifies a list of IP addresses that are permitted to access a certain resource. Any IP address not on the list is automatically blocked, ensuring that only authorized users or systems can gain access.
For more information, see [Configure the IP Allow list](https://neon.com/docs/manage/projects#configure-ip-allow).

## Kubernetes

An open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.

## Kubernetes cluster

A set of interconnected nodes that run containerized applications and services using Kubernetes, an open-source orchestration platform for automating deployment, scaling, and management of containerized applications. The cluster consists of at least one control plane node, which manages the overall state of the cluster, and multiple worker nodes, where the actual application containers are deployed and executed. The worker nodes communicate with the control plane node to ensure the desired state of the applications is maintained.

## Kubernetes node

A worker machine in a Kubernetes cluster, which runs containerized applications.

## Kubernetes scheduler

A component of Kubernetes that assigns newly created pods to nodes based on resource availability and other constraints.

## KVM

Kernel-based Virtual Machine, a virtualization infrastructure built into the Linux kernel that allows it to act as a hypervisor for virtual machines.

## Launch plan

A Neon plan designed for startups and growing teams that need more resources, features, and flexibility. It offers usage-based pricing, starting at $5/month. See [Neon plans](https://neon.com/docs/introduction/plans).

## live migration

A feature provided by some hypervisors, such as QEMU, that allows the transfer of a running virtual machine from one host to another with minimal interruption.

## Local File Cache

The Local File Cache (LFC) is a layer of caching that stores frequently accessed data from the storage layer in the local memory of the compute. This cache helps to reduce latency and improve query performance by minimizing the need to fetch data from the storage layer repeatedly. The LFC acts as an add-on or extension of Postgres [shared buffers](https://neon.com/docs/reference/glossary#shared-buffers). In Neon, the `shared_buffers` parameter [scales with compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). The LFC extends cache memory up to 75% of your compute's RAM.

## logical data size

For a Postgres database, it is the size of the database, including all tables, indexes, views, and stored procedures. In Neon, a branch can have multiple databases. The logical data size for a branch is therefore equal to the total logical size of all databases on the branch.

## logical replication

A method of replicating data between databases or platforms, focusing on replicating transactional changes (like `INSERT`, `UPDATE`, `DELETE`) rather than the entire database, enabling selective replication of specific tables or rows. See [Logical replication](https://neon.com/docs/guides/logical-replication-guide).

## LSN

Log Sequence Number. A byte offset to a location in the [WAL stream](https://neon.com/docs/reference/glossary#wal-stream). The Neon branching feature supports creating branches with data up to a specified LSN.

## LRU policy

Least Recently Used policy, an algorithm for cache replacement that evicts the least recently accessed items first.

## Monitoring Dashboard

A feature of the Neon Console that provides several graphs to help you monitor system and database metrics, updated in real time based on your usage data.
## Member An [Organizations](https://neon.com/docs/reference/glossary#organization) role in Neon with access to all projects within the organization. Members cannot manage billing, members, or permissions. They must be invited to the organization by an [Admin](https://neon.com/docs/reference/glossary#admin). ## Neon A serverless Postgres platform designed to help developers build reliable and scalable applications faster. We separate compute and storage to offer modern developer features such as autoscaling, branching, instant restore, and more. For more information, see [Why Neon?](https://neon.com/docs/introduction). ## Neon API The Neon RESTful Application Programming Interface. Any operation performed in the Neon Console can also be performed using the Neon API. ## Neon Console A browser-based graphical interface for managing Neon projects and resources. ## Neon Free plan A Neon plan for which there are no usage charges. For information about the Neon Free plan and associated limits, see [Neon plans](https://neon.com/docs/introduction/plans). ## Neon Proxy A component of the Neon platform that acts as an intermediary between connecting clients and compute nodes where Postgres runs. The Neon Proxy is responsible for tasks such as connection routing, authentication, and metrics collection. From a security perspective, it helps protect the integrity of the Neon platform through a combination of authentication, authorization, and other security measures. ## Neon user The user account that registers and authenticates with Neon using an email, GitHub, Google, or partner account. After authenticating, a Neon user account can create and manage projects, branches, users, databases, and other project resources. ## Neon Org A named organization entity in Neon that groups multiple Neon users under a shared account. See [Organization](https://neon.com/docs/reference/glossary#organization) for details. ## NeonVM A QEMU-based tool used by Neon to create and manage VMs within a Kubernetes cluster, allowing for the allocation and deallocation of vCPU and RAM. For more information, refer to the NeonVM source in the [neondatabase/autoscaling](https://github.com/neondatabase/autoscaling/tree/main/neonvm) repository. ## non-default branch Any branch in a Neon project that is not designated as the [default branch](https://neon.com/docs/reference/glossary#default-branch). For more information, see [Non-default branch](https://neon.com/docs/manage/branches#non-default-branch). ## Organization A feature in Neon that enables teams to collaborate on projects under a shared account. Organizations provide centralized management for billing, user roles, and project collaboration. Members can be invited to join, and roles such as Admin, Member, and Collaborator determine access and permissions within the organization. Admins oversee all aspects of the organization, including managing members, permissions, billing, and projects. Members have access to all organizational projects but cannot manage billing or members. Collaborators have limited access to specific projects shared with them and do not have access to the organization dashboard. You get one Org with a Free plan account. Additional organizations are available on paid plans and can be created from scratch or by converting a personal account into an organization. For more, see [Organizations](https://neon.com/docs/manage/organizations). ## Page An 8KB unit of data, which is the smallest unit that Postgres uses for storing relations and indexes on disk. 
In Neon, a page is also the smallest unit of data that resides on a Pageserver. For information about Postgres page format, see [Database Page Layout](https://www.postgresql.org/docs/14/storage-page-layout.html), in the _PostgreSQL Documentation_. ## Paid plan A paid Neon service plan. See [Neon plans](https://neon.com/docs/introduction/plans). ## Pageserver A Neon architecture component that reads WAL records from Safekeepers to identify modified pages. The Pageserver accumulates and indexes incoming WAL records in memory and writes them to disk in batches. Each batch is written to an immutable file that is never modified after creation. Using these files, the Pageserver can quickly reconstruct any version of a page dating back to the defined restore window. Neon retains a history for all branches. The Pageserver uploads immutable files to cloud storage, which is the final, highly durable destination for data. After a file is successfully uploaded to cloud storage, the corresponding WAL records can be removed from the Safekeepers. ## passwordless authentication The ability to authenticate without providing a password. Neon's [Passwordless auth](https://neon.com/docs/reference/glossary#passwordless-auth) feature supports passwordless authentication. ## peak usage Peak usage is the highest amount of a resource (like storage or projects) you've used during the current billing period. If you go over your plan's limit, extra charges are added in set increments. You're charged for these extra units from the date you went over the limit, with the charges prorated for the rest of the month. ## point-in-time restore (PITR) A database recovery capability that allows restoring data to a specific moment in the past using Write-Ahead Log (WAL) records. In Neon, this capability is implemented through the instant restore feature, which performs point-in-time restores with near-zero delay. ## pooled connection string A pooled connection string in Neon includes a `-pooler` option, which directs your connection to a pooled connection port at the Neon Proxy. This is an example of a pooled connection: ```text postgresql://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` A pooled connection can support a high number of concurrent users and is recommended for use with serverless and edge functions. For more information, see [Connection pooling](https://neon.com/docs/connect/connection-pooling). You can obtain a pooled connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Select the **Connection pooling** option to add the `-pooler` option to the connection string. For further instructions, see [How to use connection pooling](https://neon.com/docs/connect/connection-pooling#how-to-use-connection-pooling). ## PostgreSQL An open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance. ## Postgres role A Postgres role is an entity that can own database objects and has privileges to perform database actions. A Postgres role named `neondb_owner` is created with each Neon project by default. This role owns the ready-to-use `neondb` database, also created by default with each new Neon project. 
This role and any additional role created in the Neon Console, API, or CLI is assigned the [neon_superuser](https://neon.com/docs/manage/roles#the-neonsuperuser-role) role, which allows creating databases and roles, and reading and writing data in all tables, views, and sequences.

Roles created with SQL are created with the same basic [public schema privileges](https://neon.com/docs/manage/database-access#public-schema-privileges) granted to newly created roles in a standalone Postgres installation. These roles are not assigned the `neon_superuser` role. They must be selectively granted permissions for each database object. For more information, see [Manage database access](https://neon.com/docs/manage/database-access).

Older projects may have a `web-access` system role, used by the [SQL Editor](https://neon.com/docs/reference/glossary#sql-editor) and Neon's [Passwordless auth](https://neon.com/docs/reference/glossary#passwordless-auth). The `web-access` role is system-managed. It cannot be modified, removed, or used in other authentication scenarios.

## Private Networking

A feature in Neon that allows secure connections to Neon databases through AWS PrivateLink, bypassing the open internet. This ensures all data traffic remains within AWS's private network for enhanced security and compliance. See [Private Networking](https://neon.com/docs/guides/neon-private-networking).

## default branch

A designation that is given to a [branch](https://neon.com/docs/reference/glossary#branch) in a Neon project. Each Neon project is initially created with a [root branch](https://neon.com/docs/reference/glossary#root-branch) called `production`, which carries the _default branch_ designation by default.

The default branch serves two key purposes:

- For users on paid plans, the compute associated with the default branch is exempt from the [concurrently active compute limit](https://neon.com/docs/reference/glossary#concurrently-active-compute-limit), ensuring that it is always available.
- The [Neon-Managed Vercel integration](https://neon.com/docs/guides/neon-managed-vercel-integration) creates preview deployment branches from your Neon project's default branch.

You can change your default branch, but a branch carrying the default branch designation cannot be deleted. For more information, see [default branch](https://neon.com/docs/manage/branches#default-branch).

## Project

A collection of branches, databases, roles, and other project resources and settings. A project contains a primary [compute](https://neon.com/docs/reference/glossary#compute) that runs Postgres. It may also include [read replicas](https://neon.com/docs/reference/glossary#read-replica). A Neon account may have multiple projects.

## Project ID

A string that identifies your Neon project. Neon Project IDs are generated as memorable, Heroku-like random names, similar to `cool-forest-86753099`. You can find your project ID by navigating to your project in the Neon Console and selecting **Settings** from the sidebar. The project ID is also visible in the Neon Console URL after navigating to a project: `https://console.neon.tech/app/projects/cool-forest-86753099`

## Project Collaboration

A feature that lets you invite other Neon users to work on a project together. Note that organization members don't need to be added as collaborators since they automatically get access to all organization projects. See [Invite collaborators](https://neon.com/docs/manage/projects#invite-collaborators-to-a-project) for more information.
## Project storage The total volume of data stored in your Neon project. ## prorate Adjusting a payment or charge so it corresponds to the actual usage or time period involved, rather than charging a full amount. Neon prorates the cost for extra units of storage when you exceed your plan's allowance. For example, if you purchase an extra unit of storage halfway through the monthly billing period, you are only charged half the unit price. ## Proxy A Neon component that functions as a multitenant service that accepts and handles connections from clients that use the Postgres protocol. ## Protected Branches A feature in Neon you can use to designate a Neon branch as "protected", which enables a series of protections: - Protected branches cannot be deleted. - Protected branches cannot be [reset](https://neon.com/docs/manage/branches#reset-a-branch-from-parent). - Projects with protected branches cannot be deleted. - Computes associated with a protected branch cannot be deleted. - New passwords are automatically generated for Postgres roles on branches created from protected branches. [See below](https://neon.com/docs/reference/glossary#new-passwords-generated-for-postgres-roles-on-child-branches). - With additional configuration steps, you can apply IP Allow restrictions to protected branches only. See [below](https://neon.com/docs/reference/glossary#how-to-apply-ip-restrictions-to-protected-branches). - Protected branches are not [archived](https://neon.com/docs/guides/branch-archiving) due to inactivity. The protected branches feature is available on all Neon paid plans. Typically, the protected branch status is given to a branch or branches that hold production data or sensitive data. For information about how to configure a protected branch, refer to our [Protected branches guide](https://neon.com/docs/guides/protected-branches). ## Publisher In the context of logical replication, the publisher is the primary data source where changes occur. It's responsible for sending those changes to one or more subscribers. A Neon database can act as a publisher in a logical replication setup. See [Logical replication](https://neon.com/docs/guides/logical-replication-guide). ## QEMU A free and open-source emulator and virtualizer that performs hardware virtualization. ## RAM Random Access Memory, a type of computer memory used to store data that is being actively processed. ## read replica A read replica in Neon is a read-only compute that connects to the same underlying storage as the primary read-write compute but operates in read-only mode. It lets you offload read queries from your primary compute to improve performance and scalability, especially for analytical or reporting workloads. Read replica computes can be added to a branch or removed without affecting the primary compute. ## region The geographic location where Neon project resources are located. Neon supports creating projects in Amazon Web Services (AWS) and Azure regions. For information about regions supported by Neon, see [Regions](https://neon.com/docs/introduction/regions). ## replication slot On the publisher database in a logical replication setup, replication slots track the progress of replication to ensure no data in the WAL is purged before the subscriber has successfully replicated it, thus preventing data loss or inconsistency. See [Postgres logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts). ## resale Selling the Neon service as part of another service offering. 
## root branch

Each Neon project is created with a root branch, which cannot be deleted and is set as the [default branch](https://neon.com/docs/reference/glossary#default-branch) for the project. A project created in the Neon Console has a root branch named `production`. A root branch has no parent branch. Neon also supports two other types of root branches that have no parent but _can_ be deleted:

- [Backup branches](https://neon.com/docs/reference/glossary#backup-branch), created by instant restore operations on other root branches.
- [Schema-only branches](https://neon.com/docs/reference/glossary#schema-only-branch).

The number of root branches allowed in a project depends on your Neon plan. See [Branch types: Root branch](https://neon.com/docs/manage/branches#root-branch).

## Safekeeper

A Neon architecture component responsible for the durability of database changes. Postgres streams WAL records to Safekeepers. A quorum algorithm based on Paxos ensures that when a transaction is committed, it is stored on a majority of Safekeepers and can be recovered if a node is lost. Safekeepers are deployed in different availability zones to ensure high availability and durability.

## Scale plan

A Neon pricing plan designed for scaling production workloads. See [Neon plans](https://neon.com/docs/introduction/plans).

## Scale to Zero

A Neon feature that suspends a compute after a specified period of inactivity (5 minutes by default) to minimize compute usage. When suspended, a compute is placed into an idle state. Otherwise, the compute is in an `Active` state. Users on paid plans can disable the _Scale to Zero_ feature for an "always-active" compute. For more information, see [Edit a compute](https://neon.com/docs/manage/endpoints#edit-a-compute).

## schema-only branch

A branch that replicates only the database schema from a source branch, without copying any of the actual data. This feature is particularly valuable when working with sensitive information. Rather than creating branches that include confidential data, you can duplicate just the database structure and then populate it with your own data. Schema-only branches are [root branches](https://neon.com/docs/reference/glossary#root-branch), meaning they have no parent. As a root branch, each schema-only branch starts an independent line of data in a Neon project. See [Schema-only branches](https://neon.com/docs/guides/branching-schema-only).

## Schema Diff

A Neon feature that lets you compare database schemas between different branches for better debugging, code review, and team collaboration. See [Schema Diff](https://neon.com/docs/guides/schema-diff).

## Concurrently active compute limit

This limit caps how many computes can run at the same time to prevent resource exhaustion. It protects against accidental surges, such as starting many endpoints at once. The default branch is exempt from this limit. The default limit is 20 concurrently active computes. When you exceed the limit, additional computes beyond the limit will remain suspended and you will see an error when attempting to connect to them. You can suspend other active computes and try again. Alternatively, if you encounter this error often, you can reach out to [Support](https://neon.com/docs/introduction/support) to request a `max_active_endpoints` limit increase.

## serverless

A cloud-based development model that enables developing and running applications without having to manage servers.
## shared buffers A memory area in Postgres for caching blocks of data from storage (disk on standalone Postgres or Pageservers in Neon). This cache enhances the performance of database operations by reducing the need to access the slower storage for frequently accessed data. Neon uses a [Local File Cache (LFC)](https://neon.com/docs/reference/glossary#local-file-cache), which acts as an add-on or extension of shared buffers. In Neon, the `shared_buffers` parameter [scales with compute size](https://neon.com/docs/reference/compatibility#parameter-settings-that-differ-by-compute-size). The LFC extends cache memory up to 75% of your compute's RAM. For additional information about shared buffers in Postgres, see [Resource Consumption](https://www.postgresql.org/docs/current/runtime-config-resource.html) in the Postgres documentation. ## Snapshot A read-only, point-in-time copy of a root branch's complete state, including the schema and all data. A snapshot is created instantly with minimal performance impact. ## SNI Server Name Indication. A TLS protocol extension that allows a client or browser to indicate which hostname it wants to connect to at the beginning of a TLS handshake. ## SQL Editor A feature of the Neon Console that enables running queries on a Neon database. The SQL Editor also enables saving queries, viewing query history, and analyzing or explaining queries. ## start_compute A Neon Control Plane operation that starts a compute when there is an event or action that requires compute resources. For example, connecting to a suspended compute initiates this operation. See [System operations](https://neon.com/docs/manage/operations) for more information. For information about how Neon manages compute resources, see [Compute lifecycle](https://neon.com/docs/introduction/compute-lifecycle). ## Storage Where data is recorded and stored. Neon storage consists of Pageservers, which store hot data, and a cloud object store, such as Amazon S3, that stores cold data for cost optimization and durability. Also, a usage metric that tracks the total volume of data and [history](https://neon.com/docs/reference/glossary#history) stored in Neon. For more information, see [Storage](https://neon.com/docs/reference/glossary#storage). ## subscriber The database or platform receiving changes from the publisher in a logical replication setup. It applies changes received from the publisher to its own data set. Currently, a Neon database can only act as a publisher in a logical replication setup. See [Logical replication](https://neon.com/docs/guides/logical-replication-guide). ## subscription Represents the downstream side of logical replication, establishing a connection to the publisher and subscribing to one or more publications to receive updates. See [Postgres logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts). ## suspend_compute A Neon Control Plane operation that suspends a compute after a period of inactivity. See [System operations](https://neon.com/docs/manage/operations) for more information. For information about how Neon manages compute resources, see [Compute lifecycle](https://neon.com/docs/introduction/compute-lifecycle). ## technical preview An early version of a feature or changes released for testing and feedback purposes. ## tenant_attach A Neon Control Plane operation that attaches a Neon project to storage. For example, this operation occurs when you create a new Neon project.
See [System operations](https://neon.com/docs/manage/operations) for more information. ## tenant_detach A Neon Control Plane operation that detaches a Neon project from storage. For example, this operation occurs after the project has been idle for 30 days. See [System operations](https://neon.com/docs/manage/operations) for more information. ## tenant_reattach A Neon Control Plane operation that reattaches a Neon project to storage. For example, this operation occurs when a detached Neon project receives a request. See [System operations](https://neon.com/docs/manage/operations) for more information. ## token An encrypted access token that enables you to authenticate with Neon using the Neon API. An access token is generated when creating a Neon API key. For more information, see [Manage API keys](https://neon.com/docs/manage/api-keys). ## unpooled connection string An unpooled connection string connects to your Neon database directly. It does not use [connection pooling](https://neon.com/docs/reference/glossary#connection-pooling), and it looks similar to this: ```text postgresql://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/dbname?sslmode=require&channel_binding=require ``` You can obtain an unpooled connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Ensure that the **Connection pooling** option is **not** selected. A direct connection is subject to the `max_connections` limit for your compute. For more information, see [How to size your compute](https://neon.com/docs/manage/endpoints#how-to-size-your-compute). ## Time Travel A Neon feature that lets you connect to any selected point in time within your restore window and run queries against that connection. See [Time Travel](https://neon.com/docs/guides/time-travel-assist). ## user See [Neon user](https://neon.com/docs/reference/glossary#neon-user) and [Postgres role](https://neon.com/docs/reference/glossary#postgresql-role). ## vm-monitor A program that runs inside the VM alongside Postgres, responsible for requesting more resources from the autoscaler-agent and validating proposed downscaling to ensure sufficient memory. ## vCPU Virtual CPU, a unit of processing power allocated to a virtual machine or compute. ## WAL See [Write-Ahead Logging](https://neon.com/docs/reference/glossary#write-ahead-logging-wal). ## WAL receiver In logical replication, on the subscriber side, the WAL receiver is a process that receives the replication stream (decoded WAL data) and applies these changes to the subscriber's database. See [Postgres logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts). ## WAL sender In logical replication, the WAL sender is a process on the publisher database that reads the WAL and sends relevant data to the subscriber. See [Postgres logical replication concepts](https://neon.com/docs/guides/logical-replication-concepts). ## WAL slice Write-ahead logs in a specific LSN range. ## WAL stream The stream of data written to the Write-Ahead Log (WAL) during transactional processing. ## working set A subset of frequently accessed or recently used data and indexes that ideally reside in memory (RAM) for quick access, allowing for better performance. See [how to size your compute](https://neon.com/docs/manage/endpoints#how-to-size-your-compute) to learn how to set your minimum compute to an adequate size to handle your working set. ## Write-Ahead Logging (WAL) A standard mechanism that ensures the durability of your data.
Neon relies on WAL to separate storage and compute, and to support features such as branching and instant restore. In logical replication, the WAL records all changes to the data, serving as the source for data that needs to be replicated. ## Written data A usage metric that measures the total volume of data written from compute to storage within a given billing period, measured in gigabytes (GB). Writing data from compute to storage ensures the durability and integrity of your data. --- # Source: https://neon.com/llms/reference-metrics-logs.txt # Metrics and logs reference > The "Metrics and Logs Reference" document outlines the metrics and logging capabilities within Neon, detailing how to monitor and analyze system performance and operational data effectively. ## Source - [Metrics and logs reference HTML](https://neon.com/docs/reference/metrics-logs): The original HTML version of this documentation This page provides a comprehensive reference for all metrics and log fields that Neon exports to observability platforms through integrations like [Datadog](https://neon.com/docs/guides/datadog), [Grafana Cloud](https://neon.com/docs/guides/grafana-cloud), and [OpenTelemetry](https://neon.com/docs/guides/opentelemetry). ## Available metrics Neon makes the following metrics available for export to third parties through our observability integrations. All metrics include the following labels: - `project_id` - `endpoint_id` - `compute_id` - `job` Here's an example of the metric `neon_db_total_size` with all labels: ```text neon_db_total_size{project_id="square-daffodil-12345678", endpoint_id="ep-aged-art-260862", compute_id="compute-shrill-blaze-b4hry7fg", job="sql-metrics"} 10485760 ``` **Note**: In Datadog, metric labels are referred to as `tags`. See [Getting Started with Tags](https://docs.datadoghq.com/getting_started/tagging/) in the Datadog Docs. | Name | Job | Description | | --------------------------------------------- | -------------------- | ----------- | | neon_connection_counts | sql-metrics | Total number of database connections. The `state` label indicates whether the connection is `active` (executing queries), `idle` (awaiting queries), or in a variety of other states derived from the [pg_stat_activity](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW) Postgres view. | | neon_db_total_size | sql-metrics | Total size of all databases in your project, measured in bytes. | | neon_lfc_approximate_working_set_size_windows | sql-metrics | Approximate [working set size](https://neon.com/docs/manage/endpoints#sizing-your-compute-based-on-the-working-set) in pages of 8192 bytes. The metric is tracked over time windows (5m, 15m, 1h) to gauge access patterns.
Duration values: `duration="5m"`, `duration="15m"`, `duration="1h"`. | | neon_lfc_cache_size_limit | sql-metrics | The limit on the size of the Local File Cache (LFC), measured in bytes. | | neon_lfc_hits | sql-metrics | The number of times requested data was found in the LFC (cache hit). Higher cache hit rates indicate efficient memory use. | | neon_lfc_misses | sql-metrics | The number of times requested data was not found in the LFC (cache miss), forcing a read from slower storage. High miss rates may indicate insufficient compute size. | | neon_lfc_used | sql-metrics | The amount of space currently used in the LFC, measured in 1MB chunks. It reflects how much of the cache limit is being utilized. | | neon_lfc_writes | sql-metrics | The number of write operations to the LFC. | | neon_max_cluster_size | sql-metrics | The `neon.max_cluster_size` setting in MB. | | neon_pg_stats_userdb | sql-metrics | Aggregated metrics from the [pg_stat_database](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-VIEW) Postgres view. We collect stats from the oldest non-system databases based on their creation time, but not for all databases. Only the first X databases (sorted by creation time) are included. **datname**: The name of the database **kind**: The type of value being reported. One of the following: - **db_size**: The size of the database on disk, in bytes (`pg_database_size(datname)`) - **deadlocks**: The number of deadlocks detected - **inserted**: The number of rows inserted (`tup_inserted`) - **updated**: The number of rows updated (`tup_updated`) - **deleted**: The number of rows deleted (`tup_deleted`) | | neon_replication_delay_bytes | sql-metrics | The number of bytes between the last received LSN (`Log Sequence Number`) and the last replayed one. Large values indicate replication lag. | | neon_replication_delay_seconds | sql-metrics | Time since the last `LSN` was replayed. | | host_cpu_seconds_total | compute-host-metrics | The number of CPU seconds accumulated in different operating modes (user, system, idle, etc.). | | host_load1 | compute-host-metrics | System load averaged over the last 1 minute. Example: for 0.25 vCPU, `host_load1` of `0.25` means full utilization, >0.25 indicates waiting processes. | | host_load5 | compute-host-metrics | System load averaged over the last 5 minutes. | | host_load15 | compute-host-metrics | System load averaged over the last 15 minutes. | | host_memory_active_bytes | compute-host-metrics | The number of bytes of active main memory. | | host_memory_available_bytes | compute-host-metrics | The number of bytes of main memory available. | | host_memory_buffers_bytes | compute-host-metrics | The number of bytes of main memory used by buffers. | | host_memory_cached_bytes | compute-host-metrics | The number of bytes of main memory used by cached blocks. | | host_memory_free_bytes | compute-host-metrics | The number of bytes of main memory not used. | | host_memory_shared_bytes | compute-host-metrics | The number of bytes of main memory shared between processes. | | host_memory_swap_free_bytes | compute-host-metrics | The number of free bytes of swap space. | | host_memory_swap_total_bytes | compute-host-metrics | The total number of bytes of swap space. | | host_memory_swap_used_bytes | compute-host-metrics | The number of used bytes of swap space. | | host_memory_swapped_in_bytes_total | compute-host-metrics | The number of bytes that have been swapped into main memory.
| | host_memory_swapped_out_bytes_total | compute-host-metrics | The number of bytes that have been swapped out from main memory. | | host_memory_total_bytes | compute-host-metrics | The total number of bytes of main memory. | | host_memory_used_bytes | compute-host-metrics | The number of bytes of main memory used by programs or caches. | ## Postgres logs Neon can export Postgres logs to observability platforms, providing visibility into database activity, errors, and performance. These logs include: - Error messages and warnings - Connection events - System notifications ### Log fields and metadata Logs include the following labels and metadata for filtering and organization: - `project_id` - `endpoint_id` - Timestamp - Log level - And other standard PostgreSQL log fields **Note**: During the beta phase, you may see some Neon-specific system logs included. These will be filtered out before general availability (GA). ### Performance considerations Enabling log export may result in: - An increase in compute resource usage for log processing - Additional network egress for log transmission, billed on paid plans for usage over 100 GB - Associated costs based on log volume in your observability platform **Note**: Neon computes only send logs and metrics when they are active. If the [Scale to Zero](https://neon.com/docs/introduction/scale-to-zero) feature is enabled and a compute is suspended due to inactivity, no logs or metrics will be sent during the suspension. This may result in gaps in your data. Additionally, if you are setting up an integration for a project with an inactive compute, you'll need to activate the compute before it can send data. To activate it, simply run a query from the [Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor) or any connected client. ### Technical details Neon processes logs directly on each compute instance using [rsyslogd](https://www.rsyslog.com/doc/index.html), an industry-standard open source logging utility. This compute-level processing means that log collection contributes to your compute's resource usage. ## Integration guides For platform-specific setup instructions and examples, see: - [Datadog integration](https://neon.com/docs/guides/datadog) - Setup instructions, dashboard configuration, and Datadog-specific features - [Grafana Cloud integration](https://neon.com/docs/guides/grafana-cloud) - Native OTLP integration with automatic routing to Mimir, Loki, and Tempo - [OpenTelemetry integration](https://neon.com/docs/guides/opentelemetry) - OTLP configuration for any compatible observability platform --- # Source: https://neon.com/llms/reference-neon-cli.txt # Neon CLI > The Neon CLI documentation details command-line tools for managing Neon databases, enabling users to perform tasks such as creating, deleting, and listing databases and branches directly from the terminal. ## Source - [Neon CLI HTML](https://neon.com/docs/reference/neon-cli): The original HTML version of this documentation The Neon CLI is a command-line interface that lets you manage Neon directly from the terminal. This documentation references all commands and options available in the Neon CLI. 🚀 Get set up in just a few steps with the [CLI Quickstart](https://neon.com/docs/reference/cli-quickstart). 
## Install Tab: macOS **Install with [Homebrew](https://formulae.brew.sh/formula/neonctl)** ```bash brew install neonctl ``` **Install via [npm](https://www.npmjs.com/package/neonctl)** ```shell npm i -g neonctl ``` Requires [Node.js 18.0](https://nodejs.org/en/download/) or higher. **Install with bun** ```bash bun install -g neonctl ``` **macOS binary** Download the binary. No installation required. ```bash curl -sL https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-macos -o neonctl ``` Run the CLI from the download directory: ```bash neon [options] ``` Tab: Windows **Install via [npm](https://www.npmjs.com/package/neonctl)** ```shell npm i -g neonctl ``` **Install with bun** ```bash bun install -g neonctl ``` Requires [Node.js 18.0](https://nodejs.org/en/download/) or higher. **Windows binary** Download the binary. No installation required. ```bash curl -sL -O https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-win.exe ``` Run the CLI from the download directory: ```bash neonctl-win.exe [options] ``` Tab: Linux **Install via [npm](https://www.npmjs.com/package/neonctl)** ```shell npm i -g neonctl ``` **Install with bun** ```bash bun install -g neonctl ``` **Linux binary** Download the x64 or ARM64 binary, depending on your processor type. No installation required. x64: ```bash curl -sL https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-linux-x64 -o neonctl ``` ARM64: ```bash curl -sL https://github.com/neondatabase/neonctl/releases/latest/download/neonctl-linux-arm64 -o neonctl ``` Run the CLI from the download directory: ```bash neon [options] ``` For more about installing, upgrading, and connecting, see [Neon CLI — Install and connect](https://neon.com/docs/reference/cli-install). **Note** Use the Neon CLI without installing: You can run the Neon CLI without installing it using **npx** (Node Package eXecute) or the `bun` equivalent, **bunx**. For example: ```shell # npx npx neonctl # bunx bunx neonctl ``` ## Synopsis ```bash neon --help usage: neon [options] [aliases: neonctl] Commands: neon auth Authenticate [aliases: login] neon me Show current user neon orgs Manage organizations [aliases: org] neon projects Manage projects [aliases: project] neon ip-allow Manage IP Allow neon vpc Manage VPC endpoints and project VPC restrictions neon branches Manage branches [aliases: branch] neon databases Manage databases [aliases: database, db] neon roles Manage roles [aliases: role] neon operations Manage operations [aliases: operation] neon connection-string [branch] Get connection string [aliases: cs] neon set-context Set the current context neon init Initialize a new Neon project using your AI coding assistant neon completion generate completion script Global options: -o, --output Set output format [string] [choices: "json", "yaml", "table"] [default: "table"] --config-dir Path to config directory [string] [default: ""] --api-key API key [string] [default: ""] --analytics Manage analytics. 
Example: --no-analytics, --analytics false [boolean] [default: true] -v, --version Show version number [boolean] -h, --help Show help [boolean] Options: --context-file Context file [string] [default: (current-context-file)] ``` ## Commands | Command | Subcommands | Description | | ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------- | | [auth](https://neon.com/docs/reference/cli-auth) | | Authenticate | | [me](https://neon.com/docs/reference/cli-me) | | Show current user | | [orgs](https://neon.com/docs/reference/cli-orgs) | `list` | Manage organizations | | [projects](https://neon.com/docs/reference/cli-projects) | `list`, `create`, `update`, `delete`, `get` | Manage projects | | [ip-allow](https://neon.com/docs/reference/cli-ip-allow) | `list`, `add`, `remove`, `reset` | Manage IP Allow | | [vpc](https://neon.com/docs/reference/cli-vpc) | `endpoint`, `project` | Manage VPC endpoints and project VPC restrictions | | [branches](https://neon.com/docs/reference/cli-branches) | `list`, `create`, `reset`, `restore`, `rename`, `schema-diff`, `set-default`, `add-compute`, `delete`, `get` | Manage branches | | [databases](https://neon.com/docs/reference/cli-databases) | `list`, `create`, `delete` | Manage databases | | [roles](https://neon.com/docs/reference/cli-roles) | `list`, `create`, `delete` | Manage roles | | [operations](https://neon.com/docs/reference/cli-operations) | `list` | Manage operations | | [connection-string](https://neon.com/docs/reference/cli-connection-string) | | Get connection string | | [set-context](https://neon.com/docs/reference/cli-set-context) | | Set context for session | | [init](https://neon.com/docs/reference/cli-init) | | Initialize a Neon project with AI assistant | | [completion](https://neon.com/docs/reference/cli-completion) | | Generate a completion script | ## Global options Global options are supported with any Neon CLI command. | Option | Description | Type | Default | | :-------------------------- | :---------------------------------------------------------- | :------ | :---------------------------------- | | [-o, --output](https://neon.com/docs/reference/neon-cli#output) | Set the Neon CLI output format (`json`, `yaml`, or `table`) | string | table | | [--config-dir](https://neon.com/docs/reference/neon-cli#config-dir) | Path to the Neon CLI configuration directory | string | `/home//.config/neonctl` | | [--api-key](https://neon.com/docs/reference/neon-cli#api-key) | Neon API key | string | `NEON_API_KEY` environment variable | | [--color](https://neon.com/docs/reference/neon-cli#color) | Colorize the output. Example: `--no-color`, `--color false` | boolean | true | | [--analytics](https://neon.com/docs/reference/neon-cli#analytics) | Manage analytics | boolean | true | | [-v, --version](https://neon.com/docs/reference/neon-cli#version) | Show the Neon CLI version number | boolean | - | | [-h, --help](https://neon.com/docs/reference/neon-cli#help) | Show the Neon CLI help | boolean | - | - `-o, --output` Sets the output format. Supported options are `json`, `yaml`, and `table`. The default is `table`. Table output may be limited. The `json` and `yaml` output formats show all data. ```bash neon me --output json ``` - `--config-dir` Specifies the path to the `neonctl` configuration directory. 
To view the default configuration directory containing your `credentials.json` file, run `neon --help`. The credentials file is created when you authenticate using the `neon auth` command. This option is only necessary if you move your `neonctl` configuration file to a location other than the default. ```bash neon projects list --config-dir /home//.config/neonctl ``` - `--api-key` Specifies your Neon API key. You can authenticate using a Neon API key when running a Neon CLI command instead of using `neon auth`. For information about obtaining a Neon API key, see [Create an API key](https://neon.com/docs/manage/api-keys#create-an-api-key). ```bash neon --api-key <neon_api_key> ``` To avoid including the `--api-key` option with each CLI command, you can export your API key to the `NEON_API_KEY` environment variable. ```bash export NEON_API_KEY=<neon_api_key> ``` The authentication flow for the Neon CLI follows this order: - If the `--api-key` option is provided, it takes precedence and is used for authentication. - If the `--api-key` option is not provided, the `NEON_API_KEY` environment variable is used if it is set. - If neither the `--api-key` option nor the `NEON_API_KEY` environment variable is set, the CLI falls back to the `credentials.json` file created by the `neon auth` command. - If the credentials file is not found, the Neon CLI initiates the `neon auth` web authentication process. - `--color` Colorize the output. This option is enabled by default, but you can disable it by specifying `--no-color` or `--color false`, which is useful when using Neon CLI commands in your automation pipelines. - `--analytics` Analytics are enabled by default to gather information about the CLI commands and options that are used by our customers. This data collection assists in offering support, and allows for a better understanding of typical usage patterns so that we can improve user experience. Neon does not collect user-defined data, such as project IDs or command payloads. To opt out of analytics data collection, specify `--no-analytics` or `--analytics false`. - `-v, --version` Shows the Neon CLI version number. ```bash $ neon --version 1.15.0 ``` - `-h, --help` Shows the `neon` command-line help. You can view help for `neon`, a `neon` command, or a `neon` subcommand, as shown in the following examples: ```bash neon --help neon branches --help neon branches create --help ``` ## Options | Option | Description | Type | Default | | :------------------------------ | :-------------------------------- | :----- | :------------------- | | [--context-file](https://neon.com/docs/reference/neon-cli#context-file) | The context file for CLI sessions | string | current-context-file | - `--context-file` Sets a background context for your CLI sessions, letting you perform organization, project, or branch-specific actions without having to specify the relevant ID in every command. For example, this command lists all branches using the `branches list` command. No need to specify the project since the context file provides it. ```bash neon branches list --context-file path/to/context_file_name ``` To define a context file, see [Neon CLI commands — set-context](https://neon.com/docs/reference/cli-set-context). ## GitHub repository The GitHub repository for the Neon CLI is found [here](https://github.com/neondatabase/neonctl).
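To see how authentication, context files, and the commands above fit together, here is a minimal session sketch (the project ID and context file name are illustrative placeholders, not values from this reference):

```bash
# Authenticate once via the browser; credentials are saved to the config directory
neon auth

# List your projects as JSON for use in scripts
neon projects list --output json

# Save a project context so later commands don't need the project specified each time
neon set-context --project-id <project_id> --context-file .neon

# Commands now resolve the project from the context file
neon branches list --context-file .neon
neon connection-string --context-file .neon
```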
--- # Source: https://neon.com/llms/reference-neon-instagres.txt # Instagres > Instagres documentation outlines the setup and configuration process for deploying and managing Neon databases, detailing steps for creating, connecting, and scaling databases within the Neon platform. > formerly known as Neon Launchpad ## Source - [Instagres HTML](https://neon.com/docs/reference/instagres): The original HTML version of this documentation Instagres enables instant provisioning of a Postgres database without configuration or account creation. Built on Neon's serverless Postgres platform, it provides immediate database access for development and testing. Access it now at [neon.new](https://neon.new/). ## Core features The service provides the following capabilities: - Instant database provisioning with immediate connection string availability - Resource limits matching Neon's Free plan specifications - 72-hour database lifespan if not claimed - Option to claim databases with a unique claim ID and Neon account - Automatic database seeding with SQL scripts for schema and data initialization (via CLI or Vite plugin) ## Access methods ### Browser access 1. Navigate to [https://neon.new](https://neon.new/) 2. Select `Try in your browser`, which redirects to [https://neon.new/db](https://neon.new/db) 3. Receive an automatically generated connection string 4. Save the provided `Claim URL` to add this database to a Neon account later, or claim now ### Command-line interface Execute with your preferred package manager: Tab: npx ```bash npx get-db ``` Tab: yarn ```bash yarn dlx get-db ``` Tab: pnpm ```bash pnpm dlx get-db ``` Tab: bunx ```bash bunx get-db ``` Tab: deno ```bash deno run -A get-db ``` **CLI options:** | Option | Alias | Description | Default | | ------------------- | ----- | ------------------------------------- | -------------- | | `--yes` | `-y` | Skip prompts and use defaults | | | `--env <path>` | `-e` | Path to the .env file | `./.env` | | `--key <key>` | `-k` | Env var for connection string | `DATABASE_URL` | | `--prefix <prefix>` | `-p` | Prefix for generated public vars | `PUBLIC_` | | `--seed <path>` | `-s` | Path to SQL file to seed the database | not set | | `--help` | `-h` | Show help message | | **Examples:** ```bash # Basic usage: creates a new Neon database and writes credentials to .env npx get-db # Seed the database with a SQL file after creation npx get-db --seed ./init.sql # Use a custom .env file and environment variable key npx get-db --env ./my.env --key MY_DB_URL # Skip prompts and use defaults npx get-db --yes # Detects PUBLIC_INSTAGRES_CLAIM_URL (default) from your environment, # and opens the defined claim URL in your browser npx get-db claim ``` The CLI writes the connection string, claim URL, and expiration to the specified `.env` file and outputs them in the terminal. For example: ```txt # Claimable DB expires at: Sun, 05 Oct 2025 23:11:33 GMT # Claim it now to your account: https://neon.new/database/aefc1112-0419-323a-97d4-05254da94551 DATABASE_URL=postgresql://neondb_owner:npg_4zqVsO2sJeUS@ep-tiny-scene-bgmszqe1.c-2.eu-central-1.aws.neon.tech/neondb?channel_binding=require&sslmode=require DATABASE_URL_POOLER=postgresql://neondb_owner:npg_4zqVsO2sJeUS@ep-tiny-scene-bgmszqe1-pooler.c-2.eu-central-1.aws.neon.tech/neondb?channel_binding=require&sslmode=require PUBLIC_INSTAGRES_CLAIM_URL=https://neon.new/database/aefc1112-0419-323a-97d4-05254da94551 ``` For advanced SDK/API usage, see the [Neondb CLI package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/neondb).
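As a quick sanity check (not part of the reference above; assumes a POSIX shell with `psql` installed), you can pull the connection string out of the generated `.env` file and connect. Note that a plain `source .env` can misbehave here because the unquoted connection strings contain `&` characters:

```bash
# Extract the DATABASE_URL value (everything after the first '=')
DATABASE_URL=$(grep '^DATABASE_URL=' .env | cut -d= -f2-)

# Connect and run a trivial query to confirm the database is live
psql "$DATABASE_URL" -c 'SELECT version();'
```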
### Integration with development tools Add Postgres support to Vite projects using the [@neondatabase/vite-plugin-postgres](https://www.npmjs.com/package/@neondatabase/vite-plugin-postgres) plugin. The plugin provisions a database and injects credentials into your environment file if needed. > The example below includes React, but you can use the Neon plugin with any Vite-compatible framework. **Configuration options:** | Option | Type | Description | Default | | ----------- | ------ | -------------------------------- | -------------- | | `env` | string | Path to the .env file | `.env` | | `envKey` | string | Name of the environment variable | `DATABASE_URL` | | `envPrefix` | string | Prefix for public env vars | `PUBLIC_` | | `seed` | object | Seeding config (optional) | not set | **`seed` object:** | Property | Type | Description | | -------- | ------ | --------------------------- | | `type` | string | Only `sql-script` supported | | `path` | string | Path to SQL file to execute | **Example config:** ```js import { postgres } from 'vite-plugin-db'; import react from '@vitejs/plugin-react'; import { defineConfig } from 'vite'; export default defineConfig({ plugins: [ postgres({ env: '.env.local', // Custom .env file (default: '.env') envKey: 'DATABASE_URL', // Env variable for connection string (default: 'DATABASE_URL') envPrefix: 'PUBLIC_', // Prefix for public environment variables seed: { type: 'sql-script', path: './schema.sql', // SQL file to run after DB creation }, }), react(), ], }); ``` > **Note**: The plugin exports a named export (postgres) instead of relying on the default export to improve auto-completion. **How the plugin works:** 1. When running `vite dev`, the plugin checks if the `envKey` (default: `DATABASE_URL`) exists in your environment (default: `.env`) file 2. If the environment variable exists, the plugin takes no action 3. If the environment variable is missing, the plugin: - Automatically creates a new Neon claimable database - Adds two connection strings to your environment file: - `DATABASE_URL` - Standard connection string - `DATABASE_URL_POOLER` - Connection pooler string - Includes the claimable URL as a comment and public variable in the environment file The plugin is inactive during production builds (`vite build`) to prevent changes to environment files and database provisioning in production environments. If `seed` is configured, the specified SQL script is executed after database creation. If an error occurs (such as a missing or invalid SQL file), an error message will be displayed. For more details, see the [Vite Plugin package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/vite-plugin-postgres). ## Claiming a database To persist a database beyond the 72-hour expiration period: 1. Access the claim URL provided during database creation 2. Sign in to an existing Neon account or create a new one 3. Follow the on-screen instructions to complete the claim process The claim URL is available: - On the Instagres interface where the connection string is displayed - As a comment and public claim variable in environment files (e.g., `.env`) when using the CLI - The public claim variable is used when executing `npx get-db claim` to claim the database, which launches the browser window ### Claim process details When claiming a project, you'll be asked to choose an organization to claim it into. Note that projects cannot be claimed into Vercel organizations. 
## Use cases Instagres is designed for scenarios requiring rapid database provisioning: - Development and testing environments - Evaluation of Neon's capabilities before committing to an account - AI agent integration without authentication overhead - Quick prototyping sessions Note that provisioned databases expire after 72 hours unless claimed as described in the previous section. ## Default configuration The service uses the following default settings: | Parameter | Value | | ---------------- | ------------ | | Provider | AWS | | Region | eu-central-1 | | Postgres version | 17 | ## Technical implementation The Instagres service is built on Neon's [claimable database integration](https://neon.com/docs/workflows/claimable-database-integration), which provides APIs for creating projects and generating transfer requests. This allows the service to provision databases immediately while deferring account creation until users choose to claim their database. You can build similar experiences in your own application using the [claimable database APIs](https://neon.com/docs/workflows/claimable-database-integration). ## Resources - [Neondb CLI package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/neondb) - [Vite Plugin package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/vite-plugin-postgres) - Blog post: [Instagres: A Tool For Instant Postgres, No Login Needed](https://neon.com/blog/neon-launchpad) --- # Source: https://neon.com/llms/reference-neon-launchpad.txt # Neon Launchpad > The Neon Launchpad documentation outlines the setup and configuration process for deploying and managing Neon databases, detailing steps for creating, connecting, and scaling databases within the Neon platform. ## Source - [Neon Launchpad HTML](https://neon.com/docs/reference/neon-launchpad): The original HTML version of this documentation Neon Launchpad enables instant provisioning of a Postgres database without configuration or account creation. Built on Neon's serverless Postgres platform, it provides immediate database access for development and testing. Access it now at [neon.new](https://neon.new/). ## Core features The service provides the following capabilities: - Instant database provisioning with immediate connection string availability - Resource limits matching Neon's Free plan specifications - 72-hour database lifespan if not claimed - Option to claim databases with a unique claim ID and Neon account - Automatic database seeding with SQL scripts for schema and data initialization (via CLI or Vite plugin) ## Access methods ### Browser access 1. Navigate to [https://neon.new](https://neon.new/) 2. Select `Try in your browser`, which redirects to [https://neon.new/db](https://neon.new/db) 3. Receive an automatically generated connection string 4. 
Save the provided `Claim URL` to add this database to a Neon account later, or claim now ### Command-line interface Execute with your preferred package manager: Tab: npx ```bash npx get-db ``` Tab: yarn ```bash yarn dlx get-db ``` Tab: pnpm ```bash pnpm dlx get-db ``` Tab: bunx ```bash bunx get-db ``` Tab: deno ```bash deno run -A get-db ``` **CLI options:** | Option | Alias | Description | Default | | ------------------- | ----- | ------------------------------------- | -------------- | | `--yes` | `-y` | Skip prompts and use defaults | | | `--env <path>` | `-e` | Path to the .env file | `./.env` | | `--key <key>` | `-k` | Env var for connection string | `DATABASE_URL` | | `--prefix <prefix>` | `-p` | Prefix for generated public vars | `PUBLIC_` | | `--seed <path>` | `-s` | Path to SQL file to seed the database | not set | | `--help` | `-h` | Show help message | | **Examples:** ```bash # Basic usage: creates a new Neon database and writes credentials to .env npx get-db # Seed the database with a SQL file after creation npx get-db --seed ./init.sql # Use a custom .env file and environment variable key npx get-db --env ./my.env --key MY_DB_URL # Skip prompts and use defaults npx get-db --yes # Detects PUBLIC_NEON_LAUNCHPAD_CLAIM_URL (default) from your environment, # and opens the defined claim URL in your browser npx get-db claim ``` The CLI writes the connection string, claim URL, and expiration to the specified `.env` file and outputs them in the terminal. For example: ```txt # Claimable DB expires at: Sun, 05 Oct 2025 23:11:33 GMT # Claim it now to your account: https://neon.new/database/aefc1112-0419-323a-97d4-05254da94551 DATABASE_URL=postgresql://neondb_owner:npg_4zqVsO2sJeUS@ep-tiny-scene-bgmszqe1.c-2.eu-central-1.aws.neon.tech/neondb?channel_binding=require&sslmode=require DATABASE_URL_POOLER=postgresql://neondb_owner:npg_4zqVsO2sJeUS@ep-tiny-scene-bgmszqe1-pooler.c-2.eu-central-1.aws.neon.tech/neondb?channel_binding=require&sslmode=require PUBLIC_NEON_LAUNCHPAD_CLAIM_URL=https://neon.new/database/aefc1112-0419-323a-97d4-05254da94551 ``` For advanced SDK/API usage, see the [get-db CLI package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/get-db). ### Integration with development tools Add Postgres support to Vite projects using the [vite-plugin-db](https://www.npmjs.com/package/vite-plugin-db) plugin. The plugin provisions a database and injects credentials into your environment file if needed. > The example below includes React, but you can use the Neon plugin with any Vite-compatible framework.
**Configuration options:** | Option | Type | Description | Default | | ----------- | ------ | -------------------------------- | -------------- | | `env` | string | Path to the .env file | `.env` | | `envKey` | string | Name of the environment variable | `DATABASE_URL` | | `envPrefix` | string | Prefix for public env vars | `VITE_` | | `seed` | object | Seeding config (optional) | not set | **`seed` object:** | Property | Type | Description | | -------- | ------ | --------------------------- | | `type` | string | Only `sql-script` supported | | `path` | string | Path to SQL file to execute | **Example config:** ```js import { postgres } from 'vite-plugin-db'; import react from '@vitejs/plugin-react'; import { defineConfig } from 'vite'; export default defineConfig({ plugins: [ postgres({ env: '.env.local', // Custom .env file (default: '.env') envKey: 'DATABASE_URL', // Env variable for connection string (default: 'DATABASE_URL') envPrefix: 'VITE_', // Prefix for public environment variables seed: { type: 'sql-script', path: './schema.sql', // SQL file to run after DB creation }, }), react(), ], }); ``` > **Note**: The plugin exports a named export (postgres) instead of relying on the default export to improve auto-completion. **How the plugin works:** 1. When running `vite dev`, the plugin checks if the `envKey` (default: `DATABASE_URL`) exists in your environment (default: `.env`) file 2. If the environment variable exists, the plugin takes no action 3. If the environment variable is missing, the plugin: - Automatically creates a new Neon claimable database - Adds two connection strings to your environment file: - `DATABASE_URL` - Standard connection string - `DATABASE_URL_POOLER` - Connection pooler string - Includes the claimable URL as a comment and public variable in the environment file The plugin is inactive during production builds (`vite build`) to prevent changes to environment files and database provisioning in production environments. If `seed` is configured, the specified SQL script is executed after database creation. If an error occurs (such as a missing or invalid SQL file), an error message will be displayed. For more details, see the [Vite Plugin package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/vite-plugin-db). ## Claiming a database To persist a database beyond the 72-hour expiration period: 1. Access the claim URL provided during database creation 2. Sign in to an existing Neon account or create a new one 3. Follow the on-screen instructions to complete the claim process The claim URL is available: - On the Neon Launchpad interface where the connection string is displayed - As a comment and public claim variable in environment files (e.g., `.env`) when using the CLI - The public claim variable is used when executing `npx get-db claim` to claim the database, which launches the browser window ### Claim process details When claiming a project, you'll be asked to choose an organization to claim it into. Note that projects cannot be claimed into Vercel organizations. ## Use cases Neon Launchpad is designed for scenarios requiring rapid database provisioning: - Development and testing environments - Evaluation of Neon's capabilities before committing to an account - AI agent integration without authentication overhead - Quick prototyping sessions Note that provisioned databases expire after 72 hours unless claimed as described in the previous section. 
## Default configuration The service uses the following default settings: | Parameter | Value | | ---------------- | ------------ | | Provider | AWS | | Region | eu-central-1 | | Postgres version | 17 | ## Technical implementation The Neon Launchpad service is built on Neon's [claimable database integration](https://neon.com/docs/workflows/claimable-database-integration), which provides APIs for creating projects and generating transfer requests. This allows the service to provision databases immediately while deferring account creation until users choose to claim their database. You can build similar experiences in your own application using the [claimable database APIs](https://neon.com/docs/workflows/claimable-database-integration). ## Resources - [get-db CLI package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/get-db) - [Vite Plugin package on GitHub](https://github.com/neondatabase/neondb-cli/tree/main/packages/vite-plugin-db) - Blog post: [Neon Launchpad: A Tool For Instant Postgres, No Login Needed](https://neon.com/blog/neon-launchpad) --- # Source: https://neon.com/llms/reference-neondatabase-toolkit.txt # The @neondatabase/toolkit > The @neondatabase/toolkit documentation outlines the tools and commands available for managing and interacting with Neon databases, facilitating efficient database operations and maintenance tasks. ## Source - [The @neondatabase/toolkit HTML](https://neon.com/docs/reference/neondatabase-toolkit): The original HTML version of this documentation What you will learn: - What is the @neondatabase/toolkit - How to get started Related resources: - [TypeScript SDK for the Neon API](https://neon.com/docs/reference/typescript-sdk) - [Neon API Reference](https://neon.com/docs/reference/api-reference) - [Neon Serverless Driver](https://neon.com/docs/serverless/serverless-driver) - [Why we built @neondatabase/toolkit](https://neon.com/blog/why-neondatabase-toolkit) Source code: - [@neondatabase/toolkit](https://github.com/neondatabase/toolkit) - [@neon/toolkit (JSR)](https://jsr.io/@neon/toolkit) ## About the toolkit The [@neondatabase/toolkit](https://github.com/neondatabase/toolkit) ([@neon/toolkit](https://jsr.io/@neon/toolkit) on JSR) is a terse client that lets you spin up a Postgres database in seconds and run SQL queries. It includes both the [Neon TypeScript SDK](https://neon.com/docs/reference/typescript-sdk) and the [Neon Serverless Driver](https://github.com/neondatabase/serverless), making it an excellent choice for AI agents that need to quickly set up an SQL database or test environments where manually deploying a new database isn't practical. The primary goal of the toolkit is to abstract away the multi-step process of creating a project, retrieving its connection string, and then establishing a database connection. This makes it an excellent choice for: - **AI Agents:** An agent can spin up a dedicated database instance in seconds to perform a task, run SQL queries, and then tear it down, all within a single, streamlined workflow. - **Testing Environments:** Ideal for integration or end-to-end tests where a fresh, isolated database is required for each test run, ensuring no state is carried over. - **Demos and Prototyping:** Quickly create a live Postgres database to demonstrate a feature or prototype an idea without any manual setup in the Neon Console. **Note**: This is an experimental feature and is subject to change. **Tip** AI Rules available: Working with AI coding assistants? 
Check out our [AI rules for the @neondatabase/toolkit](https://neon.com/docs/ai/ai-rules-neon-toolkit) to help your AI assistant create, query, and destroy ephemeral Neon Postgres databases. ## Getting started ### Installation Install the `@neondatabase/toolkit` package into your project using your preferred package manager: Tab: npm ```bash npm install @neondatabase/toolkit ``` Tab: yarn ```bash yarn add @neondatabase/toolkit ``` Tab: pnpm ```bash pnpm add @neondatabase/toolkit ``` Tab: jsr ```bash deno add jsr:@neon/toolkit ``` ### Authentication The toolkit requires a Neon API key to interact with your account. 1. Log in to the [Neon Console](https://console.neon.tech/). 2. Navigate to [Account settings > API keys](https://console.neon.tech/app/settings/api-keys). 3. Click **Generate new API key**, give it a name, and copy the key. For security, it's best to use this key as an environment variable. ```bash export NEON_API_KEY="YOUR_API_KEY_FROM_NEON_CONSOLE" ``` ## Raw Methods > Feel free to skip to the [Complete Example](https://neon.com/docs/reference/neondatabase-toolkit#complete-example) section if you want to see everything in action. The toolkit offers a compact, useful API for handling the lifecycle of an ephemeral database. ### `new NeonToolkit(apiKey)` Initializes a new toolkit instance. - **`apiKey`** `(string)`: Your Neon API key. ```javascript import { NeonToolkit } from "@neondatabase/toolkit"; const toolkit = new NeonToolkit(process.env.NEON_API_KEY!); ``` ### `toolkit.createProject(projectOptions?)` Creates a new Neon project and returns a `ToolkitProject` object containing all the associated resources. - **`projectOptions`** `(object)` (optional): An object specifying the project's configuration, such as `name`, `region_id`, or `pg_version`. For further customization options, refer to the [API Reference](https://api-docs.neon.tech/reference/createproject). ```javascript // Create a project with default settings const project = await toolkit.createProject(); // Create a project with a specific name and Postgres version const customizedProject = await toolkit.createProject({ name: 'my-ai-agent-db', pg_version: 16, }); console.log('Project created with ID:', project.project.id); console.log('Connection string:', project.connectionURIs[0].connection_uri); ``` ### `toolkit.sql(project, query)` Executes an SQL query against the project's default database. This method uses the Neon Serverless Driver under the hood. - **`project`** `(ToolkitProject)`: The `ToolkitProject` object returned by `createProject`. The method automatically uses the connection string from this object. - **`query`** `(string)`: The SQL query string to execute. ```javascript const project = await toolkit.createProject(); // Create a table await toolkit.sql(project, `CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT);`); // Insert data and get the result const result = await toolkit.sql(project, `INSERT INTO users (name) VALUES ('Alex') RETURNING id;`); console.log(result); // [ { id: 1 } ] ``` ### `toolkit.deleteProject(project)` Deletes the Neon project. This is a destructive operation and will remove the project and all its branches, databases, and data. - **`project`** `(ToolkitProject)`: The `ToolkitProject` object returned by `createProject`. ```javascript const project = await toolkit.createProject(); // ... 
use the database await toolkit.deleteProject(project); console.log('Project deleted.'); ``` ## Complete Example This example demonstrates the full lifecycle: creating a project, running schema and data queries, and tearing down the project. ```javascript import { NeonToolkit } from '@neondatabase/toolkit'; async function main() { if (!process.env.NEON_API_KEY) { throw new Error('NEON_API_KEY environment variable is not set.'); } const toolkit = new NeonToolkit(process.env.NEON_API_KEY); console.log('Creating a new Neon project...'); const project = await toolkit.createProject({ name: 'toolkit-demo' }); console.log(`Project "${project.project.name}" created successfully.`); console.log("Creating 'users' table..."); await toolkit.sql( project, ` CREATE TABLE IF NOT EXISTS users ( id UUID PRIMARY KEY, name VARCHAR(255) NOT NULL, createdAt TIMESTAMP WITH TIME ZONE DEFAULT NOW() ); ` ); console.log('Inserting a new user...'); await toolkit.sql( project, `INSERT INTO users (id, name) VALUES (gen_random_uuid(), 'Sam Smith')` ); console.log('Querying users...'); const users = await toolkit.sql(project, `SELECT name, createdAt FROM users`); console.log('Found users:', users); console.log('Deleting the project...'); await toolkit.deleteProject(project); console.log('Project deleted. Demo complete.'); } main().catch(console.error); ``` To run this example, save it as `index.js` and execute: ```bash NEON_API_KEY=<neon_api_key> node index.js ``` **Expected output:** ```text Creating a new Neon project... Project "toolkit-demo" created successfully. Creating 'users' table... Inserting a new user... Querying users... Found users: [ { name: "Sam Smith", createdat: 2025-09-18T12:15:35.276Z, } ] Deleting the project... Project deleted. Demo complete. ``` As you can see, the toolkit makes it incredibly easy to manage the lifecycle of a Neon database with just a few lines of code. The whole process from project creation to deletion is streamlined, allowing you to focus on your application's logic rather than the intricacies of database management. ## Accessing the underlying API client The toolkit is a convenience wrapper. For more advanced operations not covered by the toolkit's methods (like creating a branch, managing roles, or listing all your projects), you can access the full Neon TypeScript SDK instance via the `apiClient` property. ```javascript import { NeonToolkit } from "@neondatabase/toolkit"; const toolkit = new NeonToolkit(process.env.NEON_API_KEY!); const apiClient = toolkit.apiClient; // Now you have the full Neon API client // For example, listing all projects in your account: const { data } = await apiClient.listProjects({}); console.log("All projects in your account:", data.projects); ``` With the `apiClient`, you can perform any operation supported by the Neon API. For a complete guide on its capabilities, see the [TypeScript SDK for the Neon API](https://neon.com/docs/reference/typescript-sdk) documentation. As with all of our experimental features, changes are ongoing. If you have any feedback, we'd love to hear it. Let us know via the [Feedback](https://console.neon.tech/app/projects?modal=feedback) form in the Neon Console or our [feedback channel](https://discord.com/channels/1176467419317940276/1176788564890112042) on Discord.
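To make the testing use case mentioned above concrete, here is a minimal sketch of an ephemeral database per test suite. It uses only the toolkit methods documented on this page; the test runner (Vitest), table, and names are illustrative assumptions:

```javascript
import { NeonToolkit } from '@neondatabase/toolkit';
import { beforeAll, afterAll, test, expect } from 'vitest';

// Assumes NEON_API_KEY is set in the test environment
const toolkit = new NeonToolkit(process.env.NEON_API_KEY);
let project;

beforeAll(async () => {
  // Fresh, isolated database for this suite
  project = await toolkit.createProject({ name: 'test-run' });
  await toolkit.sql(project, `CREATE TABLE items (id SERIAL PRIMARY KEY, label TEXT);`);
});

afterAll(async () => {
  // Tear the project down so no state carries over between runs
  await toolkit.deleteProject(project);
});

test('inserts and reads back a row', async () => {
  await toolkit.sql(project, `INSERT INTO items (label) VALUES ('widget');`);
  const rows = await toolkit.sql(project, `SELECT label FROM items;`);
  expect(rows[0].label).toBe('widget');
});
```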
--- # Source: https://neon.com/llms/reference-python-sdk.txt # Python SDK for the Neon API > The document details the Python SDK for the Neon API, enabling users to interact programmatically with Neon's database services through Python, including installation instructions, usage examples, and API reference. ## Source - [Python SDK for the Neon API HTML](https://neon.com/docs/reference/python-sdk): The original HTML version of this documentation What you will learn: - What is the Neon Python SDK - Basic usage - Where to find the docs - Supported methods Related resources: - [Neon API Reference](https://neon.com/docs/reference/api-reference) Source code: - [Python wrapper for the Neon API (GitHub)](https://github.com/neondatabase/neon-api-python) - [Python wrapper for the Neon API (Python Package Index)](https://pypi.org/project/neon-api/) ## About the SDK Neon supports the [neon-api - Python client for the Neon API](https://pypi.org/project/neon-api/), a wrapper for the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). This SDK simplifies integration of Python applications with the Neon platform, providing methods to programmatically manage API keys, Neon projects, branches, databases, endpoints, roles, and operations. **Tip** AI Rules available: Working with AI coding assistants? Check out our [AI rules for the Neon Python SDK](https://neon.com/docs/ai/ai-rules-neon-python-sdk) to help your AI assistant generate better code when managing Neon resources with Python. ## Installation Installation of `neon_api` is easy, with `pip`: ```shell $ pip install neon-api ``` ## Usage ```python from neon_api import NeonAPI # Initialize the client. neon = NeonAPI(api_key='your_api_key') ``` ## Documentation Documentation for the `neon-api - Python SDK`, including a [Quickstart](https://neon-api-python.readthedocs.io/en/latest/#quickstart), can be found on **Read the Docs**. See [neon-api — Python client for the Neon API](https://neon-api-python.readthedocs.io/en/latest/#neon-api-python-client-for-the-neon-api). ## Methods of the `NeonAPI` Class - `me()`: Returns the current user. ### Manage API Keys - `api_keys()`: Returns a list of API keys. - `api_key_create(**json)`: Creates an API key. - `api_key_delete(key_id)`: Deletes a given API key. ### Manage Projects - `projects()`: Returns a list of projects. - `project(project_id)`: Returns a specific project. - `project_create(project_id, **json)`: Creates a new project. - `project_update(project_id, **json)`: Updates a given project. - `project_delete(project_id)`: Deletes a given project. - `project_permissions(project_id)`: Returns a list of permissions for a given project. - `project_permissions_grant(project_id, **json)`: Grants permissions to a given project. - `project_permissions_revoke(project_id, **json)`: Revokes permissions from a given project. - `connection_uri(project_id, database_name, role_name)`: Returns the connection string for a given project. ### Manage Branches - `branches(project_id)`: Returns a list of branches for a given project. - `branch(project_id, branch_id)`: Returns a specific branch. - `branch_create(project_id, **json)`: Creates a new branch. - `branch_update(project_id, branch_id, **json)`: Updates a given branch. - `branch_delete(project_id, branch_id)`: Deletes a given branch. - `branch_set_as_primary(project_id, branch_id)`: Sets a given branch as primary. ### Manage Databases - `databases(project_id, branch_id)`: Returns a list of databases for a given project and branch. 
- `database(project_id, branch_id, database_id)`: Returns a specific database. - `database_create(project_id, branch_id, **json)`: Creates a new database. - `database_update(project_id, branch_id, **json)`: Updates a given database. - `database_delete(project_id, branch_id, database_id)`: Deletes a given database. ### Manage Endpoints - `endpoints(project_id, branch_id)`: Returns a list of endpoints for a given project and branch. - `endpoint_create(project_id, branch_id, **json)`: Creates a new endpoint. - `endpoint_update(project_id, branch_id, endpoint_id, **json)`: Updates a given endpoint. - `endpoint_delete(project_id, branch_id, endpoint_id)`: Deletes a given endpoint. - `endpoint_start(project_id, branch_id, endpoint_id)`: Starts a given endpoint. - `endpoint_suspend(project_id, branch_id, endpoint_id)`: Suspends a given endpoint. ### Manage Roles - `roles(project_id, branch_id)`: Returns a list of roles for a given project and branch. - `role(project_id, branch_id, role_name)`: Returns a specific role. - `role_create(project_id, branch_id, role_name)`: Creates a new role. - `role_delete(project_id, branch_id, role_name)`: Deletes a given role. - `role_password_reveal(project_id, branch_id, role_name)`: Reveals the password for a given role. - `role_password_reset(project_id, branch_id, role_name)`: Resets the password for a given role. ### Manage Operations - `operations(project_id)`: Returns a list of operations for a given project. - `operation(project_id, operation_id)`: Returns a specific operation. ### Experimental - `consumption()`: Returns a list of project consumption metrics. _View the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) documentation for more information on the available endpoints and their parameters._ --- # Source: https://neon.com/llms/reference-sdk.txt # Neon SDKs > The Neon SDKs documentation outlines the available software development kits for integrating with Neon, detailing installation, configuration, and usage instructions to facilitate seamless interaction with Neon's database services. ## Source - [Neon SDKs HTML](https://neon.com/docs/reference/sdk): The original HTML version of this documentation There are several SDKs available for use with Neon. All are wrappers around the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api), providing methods to programmatically manage API keys, Neon projects, branches, databases, endpoints, roles, and operations. In addition to wrapping the Neon API, the `@neondatabase/toolkit` also packages the low-latency Neon Serverless Driver, which supports SQL queries over WebSockets and HTTP. ## Neon SDKs - [TypeScript SDK for the Neon API](https://neon.com/docs/reference/typescript-sdk): A Neon-supported TypeScript SDK for the Neon API - [Python SDK for the Neon API](https://neon.com/docs/reference/python-sdk): A Neon-supported Python SDK for the Neon API - [@neondatabase/toolkit](https://neon.com/docs/reference/neondatabase-toolkit): An SDK for AI Agents (and humans) that includes both the Neon TypeScript SDK and the Neon Serverless Driver ## Community SDKs **Note**: Community SDKs are not maintained or officially supported by Neon. Some features may be out of date, so use these SDKs at your own discretion. If you have questions about these SDKs, please contact the project maintainers. 
- [Go SDK for the Neon API](https://github.com/kislerdm/neon-sdk-go): A Go SDK for the Neon API
- [Node.js and Deno SDK for the Neon API](https://github.com/paambaati/neon-js-sdk): A Node.js and Deno SDK for the Neon API

---
# Source: https://neon.com/llms/reference-terraform.txt

# Manage Neon with Terraform

> The document "Manage Neon with Terraform" outlines how to use Terraform to automate and manage Neon database infrastructure, detailing configuration, deployment, and resource management processes specific to Neon's environment.

## Source

- [Manage Neon with Terraform HTML](https://neon.com/docs/reference/terraform): The original HTML version of this documentation

Terraform is an open-source infrastructure as code (IaC) tool that allows you to define and provision cloud resources in a declarative configuration language. By codifying infrastructure, Terraform enables consistent, repeatable, and automated deployments, significantly reducing manual errors. This guide will show you how to use **Terraform to manage your Neon projects**, including your branches, databases, and compute endpoints. Using Terraform with Neon gives you tighter control over your infrastructure, a reviewable history of changes, and an automated database setup.

Neon sponsors the following community-developed Terraform provider for managing Neon Postgres platform resources:

**Terraform Provider Neon - Maintainer: Dmitry Kisler**

- [GitHub repository](https://github.com/kislerdm/terraform-provider-neon)
- [Terraform Registry](https://registry.terraform.io/providers/kislerdm/neon/0.6.1)
- [Terraform Registry Documentation](https://registry.terraform.io/providers/kislerdm/neon/latest/docs)

**Note**: This provider is not maintained or officially supported by Neon. Use at your own discretion. If you have questions about the provider, please contact the project maintainer.

## Provider usage notes

- **Provider upgrades**: When using `terraform init -upgrade` to update a custom Terraform provider, be aware that changes in the provider's schema or defaults can lead to unintended resource replacements. This may occur when certain attributes are altered or reset. For example, fields previously set to specific values might be reset to `null`, forcing the replacement of the entire resource. To avoid unintended resource replacements, which can result in data loss:
  - Review the provider's changelog for any breaking changes that might affect your resources before upgrading to a new version.
  - For CI pipelines and auto-approved pull requests, only use `terraform init`. Running `terraform init -upgrade` should be done manually, followed by plan reviews.
  - Run `terraform plan` before applying any changes to detect potential differences and review the behavior of resource updates.
  - Use [lifecycle protections](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#prevent_destroy) on critical resources to ensure they're not recreated unintentionally (see the sketch after these notes).
  - Explicitly define all critical resource parameters in your Terraform configurations, even if they had defaults previously.
  - On Neon paid plans, you can enable branch protection to prevent unintended deletion of branches and projects. To learn more, see [Protected branches](https://neon.com/docs/guides/protected-branches).
- **Provider maintenance**: As Neon enhances existing features and introduces new ones, the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api) will continue to evolve. These changes may not immediately appear in community-maintained Terraform providers. If you notice that a provider requires an update, please reach out to the maintainer by opening an issue or contributing to the provider's GitHub repository.
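As an example of the lifecycle protection mentioned above, here is a minimal sketch that guards a hypothetical project resource against accidental recreation. `prevent_destroy` is standard Terraform; the resource shown is illustrative:

```terraform
resource "neon_project" "production" {
  name = "production"

  # Fail any plan that would destroy (and therefore recreate) this
  # project, e.g. after a provider upgrade changes a default.
  lifecycle {
    prevent_destroy = true
  }
}
```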
## Prerequisites

Before you begin, ensure you have the following:

1. **Terraform CLI installed:** If you don't have Terraform installed, download and install it from the [official Terraform website](https://developer.hashicorp.com/terraform/install). The Neon provider requires Terraform version `1.14.x` or later.
2. **Neon account:** You'll need a Neon account. If you don't have one, sign up at [neon.tech](https://console.neon.tech/signup).
3. **Neon API key:** Generate an API key from the Neon Console. Navigate to your Account Settings > API Keys. This key is required for the provider to authenticate with the Neon API. Learn more about creating API keys in [Manage API keys](https://neon.com/docs/manage/api-keys).

## Set up the Terraform Neon provider

1. **Create a project directory:** Create a new directory for your Terraform project and navigate into it.

   ```shell
   mkdir neon-terraform-project
   cd neon-terraform-project
   ```

2. **Create a `main.tf` file:** This file will contain your Terraform configuration. Start by declaring the required Neon provider.

   ```terraform
   terraform {
     required_providers {
       neon = {
         source = "kislerdm/neon"
       }
     }
   }

   provider "neon" {}
   ```

3. **Initialize Terraform:** Run the `terraform init` command in your project directory. This command downloads and installs the Neon provider.

   ```shell
   terraform init
   ```

## Configure authentication

The Neon provider needs your Neon API key to manage resources. You can configure it in two ways:

1. **Directly in the provider block (less secure):** For quick testing, you can **hardcode your API key** directly within the `provider "neon"` block. However, this method isn't recommended for production environments or shared configurations. A more secure alternative is to retrieve the API key from a secrets management service like [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) or [HashiCorp Vault](https://developer.hashicorp.com/vault), and then update your provider block to reflect this.

   ```terraform
   provider "neon" {
     api_key = "<your_neon_api_key>"
   }
   ```

2. **Using environment variables:** The provider will automatically use the `NEON_API_KEY` environment variable if set.

   ```shell
   export NEON_API_KEY="<your_neon_api_key>"
   ```

   If the environment variable is set, you can leave the `provider "neon"` block empty:

   ```terraform
   provider "neon" {}
   ```

**Note**: The following sections primarily detail the creation of Neon resources. To manage existing resources, use the `terraform import` command. More information can be found in the [Importing Existing Resources](https://neon.com/docs/reference/terraform#import-existing-neon-resources) section.

## Manage Neon resources

Now you can start defining Neon resources in your `main.tf` file.

### Managing projects

**Warning**: Always set the `org_id` attribute when creating a `neon_project`. You can find your Organization ID in the Neon Console under Account Settings > Organization settings. Omitting `org_id` can cause resources to be created in the wrong organization or produce duplicate projects, and subsequent `terraform plan` / `terraform apply` runs may attempt destructive changes (including deletions). To avoid this, explicitly provide `org_id` when defining your project as shown in the example below.

A Neon project is the top-level container for your Postgres databases, branches, and endpoints.
```terraform resource "neon_project" "my_app_project" { name = "my-application-project" pg_version = 16 region_id = "aws-us-east-1" org_id = "your-neon-organization-id" # Replace with your actual Org ID # free accounts have maximum retention window of 6 hours (21600 seconds) history_retention_seconds = 21600 # Configure default branch settings (optional) branch { name = "production" database_name = "app_db" role_name = "app_admin" } # Configure default endpoint settings (optional) default_endpoint_settings { autoscaling_limit_min_cu = 0.25 autoscaling_limit_max_cu = 1.0 # suspend_timeout_seconds = 300 } } ``` This configuration creates a new Neon project. **Key `neon_project` attributes:** - `name`: (Optional) Name of the project. - `pg_version`: (Optional) The major supported PostgreSQL version, such as 17. - `region_id`: (Optional) The region where the project will be created (e.g., `aws-us-east-1`). > For up-to-date information on available regions, see [Neon Regions](https://neon.com/docs/introduction/regions). - `org_id`: The Organization ID under which to create the project. - `history_retention_seconds`: (Optional) Duration in seconds to retain historical data for point-in-time recovery. Free plans have a maximum of 21600 seconds (6 hours). Default is 86400 seconds (24 hours) for paid plans. - `branch {}`: (Optional) Block to configure the default primary branch. **Output project details:** You can output computed values like the project ID or connection URI: ```terraform output "project_id" { value = neon_project.my_app_project.id } output "project_connection_uri" { description = "Default connection URI for the primary branch (contains credentials)." value = neon_project.my_app_project.connection_uri sensitive = true } output "project_default_branch_id" { value = neon_project.my_app_project.default_branch_id } output "project_database_user" { value = neon_project.my_app_project.database_user } ``` For more attributes and options on managing projects, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/project.md). ### Managing branches You can create branches from the primary branch or any other existing branch. ```terraform resource "neon_branch" "dev_branch" { project_id = neon_project.my_app_project.id name = "feature-x-development" parent_id = neon_project.my_app_project.default_branch_id # Branch from the project's primary branch # Optional: Create a protected branch # protected = "yes" # Optional: Create from a specific LSN or timestamp of the parent # parent_lsn = "..." # parent_timestamp = 1678886400 # Unix epoch } ``` **Key `neon_branch` attributes:** - `project_id`: (Required) ID of the parent project. - `name`: (Optional) Name for the new branch. - `parent_id`: (Optional) ID of the parent branch. If not specified, defaults to the project's primary branch. - `protected`: (Optional, String: "yes" or "no") Set to protect the branch. - `parent_lsn`: (Optional) LSN of the parent branch to create from. - `parent_timestamp`: (Optional) Timestamp of the parent branch to create from. > `protected` attribute is only available for paid plans. It allows you to protect branches from deletion or modification. For more attributes and options on managing branches, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/branch.md). ### Managing endpoints Endpoints provide connection strings to access your branches. 
Each branch can have multiple read-only endpoints but only one read-write endpoint. Before creating an endpoint, you must first create a **branch** for it to connect to. Here's how to create a read-write endpoint for your `dev_branch`:

```terraform
resource "neon_endpoint" "dev_endpoint" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  type       = "read_write" # "read_write" or "read_only"

  autoscaling_limit_min_cu = 0.25
  autoscaling_limit_max_cu = 0.5
  # suspend_timeout_seconds = 600

  # Optional: Enable connection pooling
  # pooler_enabled = true
}

output "dev_endpoint_host" {
  value = neon_endpoint.dev_endpoint.host
}
```

**Key `neon_endpoint` attributes:**

- `project_id`: (Required) ID of the parent project.
- `branch_id`: (Required) ID of the branch this endpoint connects to.
- `type`: (Optional) `read_write` (default) or `read_only`. A branch can only have one `read_write` endpoint.
- `autoscaling_limit_min_cu`/`autoscaling_limit_max_cu`: (Optional) Compute units for autoscaling.
- `suspend_timeout_seconds`: (Optional) Inactivity period before suspension. Only available for paid plans.
- `pooler_enabled`: (Optional) Enable connection pooling.

**Note**: The endpoint `type` is immutable: neither the Neon API nor the provider's current implementation supports changing it after creation, so to switch between `read_write` and `read_only` you must destroy the existing endpoint and create a new one with the desired type.

For more attributes and options on managing endpoints, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/endpoint.md).

### Managing roles

Roles (users) are managed per branch. Before creating a role, ensure you have a branch created. Follow the [Managing Branches](https://neon.com/docs/reference/terraform#managing-branches) section for details.

```terraform
resource "neon_role" "app_user" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  name       = "application_user"
}

output "app_user_password" {
  value     = neon_role.app_user.password
  sensitive = true
}
```

**Key `neon_role` attributes:**

- `project_id`: (Required) ID of the parent project.
- `branch_id`: (Required) ID of the branch for this role.
- `name`: (Required) Name of the role.
- `password`: (Computed, Sensitive) The generated password for the role.

For more attributes and options on managing roles, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/role.md).

### Managing databases

Databases are also managed per branch. Follow the [Managing Branches](https://neon.com/docs/reference/terraform#managing-branches) section for details on creating a branch.

```terraform
resource "neon_database" "service_db" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  name       = "service_specific_database"
  owner_name = neon_role.app_user.name
}
```

**Key `neon_database` attributes:**

- `project_id`: (Required) ID of the parent project.
- `branch_id`: (Required) ID of the branch for this database.
- `name`: (Required) Name of the database.
- `owner_name`: (Required) Name of the role that will own this database.
For more attributes and options on managing databases, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/database.md).

### Managing API keys

You can manage Neon API keys themselves using Terraform.

```terraform
resource "neon_api_key" "ci_cd_key" {
  name = "automation-key-for-ci"
}

output "ci_cd_api_key_value" {
  description = "The actual API key token."
  value       = neon_api_key.ci_cd_key.key
  sensitive   = true
}
```

**Key `neon_api_key` attributes:**

- `name`: (Required) A descriptive name for the API key.
- `key`: (Computed, Sensitive) The generated API key token.

### Advanced: Project permissions

Share project access with other users.

```terraform
resource "neon_project_permission" "share_with_colleague" {
  project_id = neon_project.my_app_project.id
  grantee    = "colleague@example.com"
}
```

### Advanced: VPC endpoint management (for Neon private networking)

These resources are used for organizations requiring private networking.

#### Assign VPC endpoint to organization

```terraform
resource "neon_vpc_endpoint_assignment" "org_vpc_endpoint" {
  org_id          = "your-neon-organization-id" # Replace with your actual Org ID
  region_id       = "aws-us-east-1"             # Neon region ID
  vpc_endpoint_id = "vpce-xxxxxxxxxxxxxxxxx"    # Your AWS VPC Endpoint ID
  label           = "main-aws-vpc-endpoint"
}
```

For more attributes and options on managing VPC endpoints, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/vpc_endpoint_assignment.md).

#### Restrict project to VPC endpoint

```terraform
resource "neon_vpc_endpoint_restriction" "project_to_vpc" {
  project_id      = neon_project.my_app_project.id
  vpc_endpoint_id = neon_vpc_endpoint_assignment.org_vpc_endpoint.vpc_endpoint_id
  label           = "restrict-my-app-project-to-vpc"
}
```

For more attributes and options on managing VPC endpoint restrictions, refer to the [Provider's documentation](https://github.com/kislerdm/terraform-provider-neon/blob/master/docs/resources/vpc_endpoint_restriction.md).

## Apply the configuration

Once you have defined your resources:

1. **Format and validate:**

   ```shell
   terraform fmt
   terraform validate
   ```

2. **Plan:** Run `terraform plan` to see what actions Terraform will take. This command shows you the resources that will be created, modified, or destroyed without making any changes. Review the output carefully to ensure it matches your expectations.

   ```shell
   terraform plan -out=tfplan
   ```

3. **Apply:** Run `terraform apply` to create the resources in Neon.

   ```shell
   terraform apply tfplan
   ```

   Because this command applies the saved plan file, Terraform proceeds without prompting for confirmation. If you run `terraform apply` without a saved plan, Terraform shows the plan and asks you to type `yes` before making any changes.

You have now successfully created and managed Neon resources using Terraform! You can continue to modify your `main.tf` file to add, change, or remove resources as needed. After making changes, always run `terraform plan` to review the changes before applying them.

## Import existing Neon resources

If you have existing Neon resources that were created outside of Terraform (e.g., via the Neon Console or API directly), you can bring them under Terraform's management. This allows you to manage their lifecycle with code moving forward. Terraform offers two primary ways to do this: using the `terraform import` CLI command or, for Terraform `1.5.0` and later, using declarative `import` blocks directly in your configuration. Both methods involve telling Terraform about an existing resource and associating it with a resource block in your configuration.
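At a glance, the two approaches look like this; the resource address and ID here are placeholders, and both methods are walked through in detail below:

```terraform
# Method 1: imperative CLI import, run once from your shell:
#
#   terraform import neon_project.my_app_project <project_id>

# Method 2: declarative import block (Terraform 1.5.0+), reviewed and
# applied as part of a normal plan/apply cycle:
import {
  to = neon_project.my_app_project
  id = "<project_id>" # the resource's ID in Neon
}
```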
### Set up your Terraform configuration

Before you can import any resources, ensure your Terraform environment is configured for the Neon provider:

1. **Define the provider:** Make sure you have the `neon` provider declared in your `main.tf` or a dedicated `providers.tf` file.

   ```terraform
   terraform {
     required_providers {
       neon = {
         source = "kislerdm/neon"
       }
     }
   }

   provider "neon" {}
   ```

2. **Initialize Terraform:** If you haven't already, or if you've just added the provider configuration, run:

   ```shell
   terraform init
   ```

   This downloads the Neon provider plugin.

   **Warning** Provider upgrades: Avoid using `terraform init -upgrade` in CI pipelines and auto-approved pull requests, as this can lead to unintended resource replacements and data loss if there are breaking changes or major version jumps. Instead, use `terraform init` in your automated workflows. Running `terraform init -upgrade` should always be done manually, followed by plan reviews. For additional guidance, see [Important usage notes](https://neon.com/docs/reference/terraform#provider-usage-notes).

3. **Configure authentication:** Follow the authentication steps mentioned in [Configure Authentication](https://neon.com/docs/reference/terraform#configure-authentication) to ensure Terraform can communicate with your Neon account.

### Neon resource IDs for import

When importing Neon resources, you need to know the specific ID format for each resource type. Always refer to the "Import" section of the specific resource's documentation page on the [Provider's GitHub: `kislerdm/terraform-provider-neon`](https://github.com/kislerdm/terraform-provider-neon/tree/master/docs/resources) for the exact ID format. Here are some common formats for different Neon resources:

- **`neon_project`:** Uses the Project ID (e.g., `damp-recipe-88779456`).
- **`neon_branch`:** Uses the Branch ID (e.g., `br-orange-bonus-a4v00wjl`).
- **`neon_endpoint`:** Uses the Endpoint ID (e.g., `ep-blue-cell-a4xzunwf`).
- **`neon_role`:** Uses a composite ID: `<project_id>/<branch_id>/<role_name>` (e.g., `damp-recipe-88779456/br-orange-bonus-a4v00wjl/application_user`).
- **`neon_database`:** Uses a composite ID: `<project_id>/<branch_id>/<database_name>` (e.g., `damp-recipe-88779456/br-orange-bonus-a4v00wjl/service_specific_database`).
- **`neon_api_key` and `neon_jwks_url`:** These resources do not support import. You'll need to recreate them using Terraform if you want to manage them via IaC.

### Order of import for dependent resources

When importing resources that depend on each other, it's best practice to import them in the order of their dependencies. This helps ensure that Terraform can correctly understand relationships and that your HCL resource blocks can reference already-imported parent resources. A common order for importing Neon resources is:

```plaintext
Project -> Branch -> Endpoint -> Role -> Database
```

Depending on your preference and the version of Terraform you are using, you can choose between two methods to import existing Neon resources into Terraform. Follow [Method 1](https://neon.com/docs/reference/terraform#method-1-using-the-terraform-import-cli-command) if you prefer the traditional CLI import command, or [Method 2](https://neon.com/docs/reference/terraform#method-2-using-import-blocks-terraform-150) if you want to use the newer declarative `import` blocks introduced in Terraform `1.5.0`.

### Method 1: Using the `terraform import` CLI command

For each Neon resource you want to import, you'll generally follow these two steps:
1. **Write a resource block:** Add a corresponding `resource` block to your Terraform configuration files (e.g., `main.tf`). This block tells Terraform how you _want_ the resource to be configured. You might not know all the attributes perfectly upfront; Terraform will populate many of them from the actual state of the resource during the import.

2. **Run `terraform import`:** Execute the import command, which takes the Terraform resource address and the Neon-specific ID of the existing resource.

   ```shell
   terraform import <resource_address> <resource_id>
   ```

#### Example: Importing the previously defined resources

In this example, we'll import the resources we defined earlier in the [Manage Neon Resources](https://neon.com/docs/reference/terraform#manage-neon-resources) section. This assumes a project, a branch, an endpoint, a role, and a database already exist in your Neon account; these resources will be imported into a new Terraform configuration.

##### Define the HCL resource blocks

In your `main.tf` file, define the resource blocks for the existing resources. You can start with minimal definitions; you primarily need to declare the resource type and a name for Terraform to use. Terraform populates the actual attribute values from the live resource into its state file during the import. You'll then use `terraform plan` to see these values and update your HCL to match, or to define your desired state. For required attributes (like `project_id` for a branch), you'll either need to hardcode the known ID or reference a resource that will also be imported.

```terraform
terraform {
  required_providers {
    neon = {
      source = "kislerdm/neon"
    }
  }
}

provider "neon" {}

# --- Project ---
resource "neon_project" "my_app_project" {}

# --- Development Branch ---
# Requires project_id. We'll reference the project we're about to import.
# The actual value of neon_project.my_app_project.id will be known after its import.
resource "neon_branch" "dev_branch" {
  project_id = neon_project.my_app_project.id
  name       = "feature-x-development"
}

# --- Development Branch Endpoint ---
# Requires project_id and branch_id.
resource "neon_endpoint" "dev_endpoint" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
}

# --- Application User Role on Development Branch ---
# Requires project_id, branch_id, and name.
resource "neon_role" "app_user" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  name       = "application_user"
}

# --- Service Database on Development Branch ---
# Requires project_id, branch_id, name, and owner_name.
resource "neon_database" "service_db" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  name       = "service_specific_database"
  owner_name = neon_role.app_user.name
}
```

Here's a breakdown of the minimal HCL and why certain attributes are included:

- **`neon_project.my_app_project`**:
  - This block defines the Terraform resource for your main Neon project.
  - No attributes are strictly required _in the HCL_ for the import command itself, as the project is imported using its unique Neon Project ID. Adding a `name` attribute matching the existing project can aid readability but isn't essential for the import operation.
- **`neon_branch.dev_branch`**:
  - This defines the Terraform resource for your development branch.
  - It requires `project_id` in the HCL to link it to the (to-be-imported) project resource within Terraform.
  - The `name` attribute should also be specified in the HCL, matching the existing branch's name, as it's a key identifier.
  - The branch is imported using its unique Neon Branch ID.
- **`neon_endpoint.dev_endpoint`**:
  - This block defines the Terraform resource for the endpoint on your development branch.
  - It requires both `project_id` and `branch_id` in the HCL to correctly associate it with the imported project and development branch resources within Terraform.
  - Other attributes like `type` (which defaults if unspecified) or autoscaling limits will be read from the live resource during import.
  - The endpoint is imported using its unique Neon Endpoint ID.
- **`neon_role.app_user`**:
  - This defines the Terraform resource for an application user role.
  - The HCL requires `project_id` and `branch_id` to link to the respective imported Terraform resources.
  - The `name` attribute must be specified in the HCL and match the existing role's name.
- **`neon_database.service_db`**:
  - This defines the Terraform resource for a service-specific database.
  - The HCL requires `project_id` and `branch_id` to link to the imported Terraform resources.
  - The `name` attribute must be specified in the HCL and match the existing database's name.
  - The `owner_name` should also be included, linking to the Terraform role resource (e.g., `neon_role.app_user.name`) that owns this database.

All other configurable attributes will be populated into Terraform's state file from the live Neon resource during the `terraform import` process. You will then refine your HCL by reviewing the `terraform plan` output.

#### Run the import commands in order

1. **Import the project:**

   ```shell
   terraform import neon_project.my_app_project "actual_project_id_from_neon"
   ```

   You can retrieve the project ID via Neon Console/CLI/API. Learn more: [Manage projects](https://neon.com/docs/manage/projects#project-settings)

   Example output:

   ```shell
   terraform import neon_project.my_app_project damp-recipe-88779456
   ```

   ```text
   neon_project.my_app_project: Importing from ID "damp-recipe-88779456"...
   neon_project.my_app_project: Import prepared!
     Prepared neon_project for import
   neon_project.my_app_project: Refreshing state... [id=damp-recipe-88779456]

   Import successful!

   The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.
   ```

2. **Import the development branch:**

   ```shell
   terraform import neon_branch.dev_branch "actual_dev_branch_id_from_neon"
   ```

   You can retrieve the branch ID via Neon Console/CLI/API. Learn more: [Manage branches](https://neon.com/docs/manage/branches)

   Example output:

   ```shell
   terraform import neon_branch.dev_branch br-orange-bonus-a4v00wjl
   ```

   ```text
   neon_branch.dev_branch: Importing from ID "br-orange-bonus-a4v00wjl"...
   neon_branch.dev_branch: Import prepared!
     Prepared neon_branch for import
   neon_branch.dev_branch: Refreshing state... [id=br-orange-bonus-a4v00wjl]

   Import successful!

   The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.
   ```

3. **Import the development compute endpoint:**

   ```shell
   terraform import neon_endpoint.dev_endpoint "actual_dev_endpoint_id_from_neon"
   ```

   You can retrieve the endpoint ID via Neon Console/CLI/API. Learn more: [Manage computes](https://neon.com/docs/manage/computes).
   Example output:

   ```shell
   terraform import neon_endpoint.dev_endpoint ep-blue-cell-a4xzunwf
   ```

   ```text
   neon_endpoint.dev_endpoint: Importing from ID "ep-blue-cell-a4xzunwf"...
   neon_endpoint.dev_endpoint: Import prepared!
     Prepared neon_endpoint for import
   neon_endpoint.dev_endpoint: Refreshing state... [id=ep-blue-cell-a4xzunwf]

   Import successful!

   The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.
   ```

4. **Import the application user role:**

   ```shell
   terraform import neon_role.app_user "actual_project_id_from_neon/actual_dev_branch_id_from_neon/application_user"
   ```

   > Replace `application_user` with the actual name of the role you want to import.

   Example output:

   ```shell
   terraform import neon_role.app_user "damp-recipe-88779456/br-orange-bonus-a4v00wjl/application_user"
   ```

   ```text
   neon_role.app_user: Importing from ID "damp-recipe-88779456/br-orange-bonus-a4v00wjl/application_user"...
   neon_role.app_user: Import prepared!
     Prepared neon_role for import
   neon_role.app_user: Refreshing state... [id=damp-recipe-88779456/br-orange-bonus-a4v00wjl/application_user]

   Import successful!

   The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.
   ```

5. **Import the service database:**

   ```shell
   terraform import neon_database.service_db "actual_project_id_from_neon/actual_dev_branch_id_from_neon/service_specific_database"
   ```

   > Replace `service_specific_database` with the actual name of the database you want to import.

   Example output:

   ```shell
   terraform import neon_database.service_db "damp-recipe-88779456/br-orange-bonus-a4v00wjl/service_specific_database"
   ```

   ```text
   neon_database.service_db: Importing from ID "damp-recipe-88779456/br-orange-bonus-a4v00wjl/service_specific_database"...
   neon_database.service_db: Import prepared!
     Prepared neon_database for import
   neon_database.service_db: Refreshing state... [id=damp-recipe-88779456/br-orange-bonus-a4v00wjl/service_specific_database]

   Import successful!

   The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.
   ```

After importing all resources, your Terraform state file (`terraform.tfstate`) will now contain the imported resources, and you can manage them using Terraform. Follow the [Reconcile your HCL with the imported state](https://neon.com/docs/reference/terraform#reconcile-your-hcl-with-the-imported-state) section to update your HCL files with the attributes that were populated during the import.

### Method 2: Using `import` Blocks (Terraform 1.5.0+)

Terraform version 1.5.0 and later introduced a more declarative way to import existing infrastructure using `import` blocks directly within your configuration files. This method keeps the import definition alongside your resource configuration and makes the import process part of your standard `plan` and `apply` workflow.

**The process with `import` Blocks:**

For each existing Neon resource you want to bring under Terraform management, you'll define two blocks in your `.tf` file:

- A standard `resource "resource_type" "resource_name" {}` block. For the initial import, this block can be minimal. It primarily tells Terraform the type and name of the resource in your configuration.
- An `import {}` block:
  - `to = resource_type.resource_name`: This refers to the Terraform address of the `resource` block you defined above.
  - `id = "neon_specific_id"`: This is the actual ID of the resource as it exists in Neon (e.g., project ID, branch ID, or composite ID for roles/databases).

**Example using `import` blocks:**

In this example, we'll import the resources we defined earlier in the [Manage Neon Resources](https://neon.com/docs/reference/terraform#manage-neon-resources) section. This needs a project, a branch, an endpoint, a role, and a database already created in your Neon account. These resources will now be imported into a new Terraform configuration.

Let's say we have the following existing Neon resources and their IDs:

- Project `my_app_project` ID: `damp-recipe-88779456`
- Branch `dev_branch` ID: `br-orange-bonus-a4v00wjl`
- Endpoint `dev_endpoint` ID: `ep-blue-cell-a4xzunwf`
- Role `application_user`
- Database `service_specific_database`

You would add the following to your `main.tf`:

```terraform
terraform {
  required_providers {
    neon = {
      source = "kislerdm/neon"
    }
  }
}

provider "neon" {
  # API key configured via environment variable or directly
}

# --- Project Import ---
import {
  to = neon_project.my_app_project
  id = "damp-recipe-88779456" # Replace with your actual Project ID
}

resource "neon_project" "my_app_project" {
  # Minimal definition for import.
  # After import and plan, you'll populate this with actual/desired attributes.
}

# --- Development Branch Import ---
import {
  to = neon_branch.dev_branch
  id = "br-orange-bonus-a4v00wjl" # Replace with your actual Branch ID
}

resource "neon_branch" "dev_branch" {
  project_id = neon_project.my_app_project.id # Links to the TF resource
  name       = "feature-x-development"        # Should match existing branch name
}

# --- Development Branch Endpoint Import ---
import {
  to = neon_endpoint.dev_endpoint
  id = "ep-blue-cell-a4xzunwf" # Replace with your actual Endpoint ID
}

resource "neon_endpoint" "dev_endpoint" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id # Links to the TF resource
}

# --- Application User Role on Development Branch Import ---
import {
  to = neon_role.app_user
  # ID format: project_id/branch_id/role_name
  id = "damp-recipe-88779456/br-orange-bonus-a4v00wjl/application_user"
}

resource "neon_role" "app_user" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  name       = "application_user" # Must match existing role name
}

# --- Service Database on Development Branch Import ---
import {
  to = neon_database.service_db
  # ID format: project_id/branch_id/name
  id = "damp-recipe-88779456/br-orange-bonus-a4v00wjl/service_specific_database"
}

resource "neon_database" "service_db" {
  project_id = neon_project.my_app_project.id
  branch_id  = neon_branch.dev_branch.id
  name       = "service_specific_database" # Must match existing database name
  owner_name = neon_role.app_user.name     # Links to the TF role resource
}
```

**Important**: You need to replace the IDs in the `import` blocks with the actual IDs of your existing Neon resources. The `to` field in each `import` block refers to the corresponding `resource` block defined in your configuration. The above configuration is a minimal example to get you started with the import process.

### Reconcile your HCL with the imported state

After importing your resources using either method, you need to ensure that your HCL configuration accurately reflects the current state of the imported resources.
This is an iterative process where you will:

1. **Run `terraform plan`:**

   ```shell
   terraform plan
   ```

2. **Understand the plan output:** The plan might show:
   - **Attributes to be added to your HCL:** Terraform will identify attributes present in the imported state (e.g., `pg_version`, `region_id`, `default_endpoint_settings` for a project) that are not yet explicitly in your HCL `resource` blocks.
   - **"Update in-place" actions:** You might see actions like `~ update in-place` for some resources, even if no actual value in Neon is changing. For example, for `neon_endpoint`, you might see `+ branch_id = "your-branch-id"`. This is often because Terraform is now resolving a reference (like `neon_branch.dev_branch.id`) to its concrete value and wants to explicitly set this in its managed configuration. It's a reconciliation step and usually safe to apply.

3. **Update your HCL (`main.tf`):** Carefully review the output of `terraform plan`. Your primary goal is to update your HCL `resource` blocks to accurately match the actual, imported state of your resources, or to define your desired state if you intend to make changes. Copy the relevant attributes and their values from the plan output into your HCL.

4. **Repeat `terraform plan`:** After updating your HCL, run `terraform plan` again. Continue this iterative process, reviewing the plan and updating your HCL, until `terraform plan` shows "No changes. Your infrastructure matches the configuration." or only shows changes you intentionally want to make.

This iterative approach ensures your Terraform configuration accurately reflects either the current state or your intended desired state for the imported resources.

### Verify and reconcile

Once all attributes are set correctly, run `terraform plan` to confirm that no changes are needed. You should see output similar to:

```text
No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
```

The resources are now managed by Terraform, and you can proceed to make changes to your infrastructure through your configuration.

## Destroying resources

To remove the resources managed by Terraform:

```shell
terraform destroy
```

Terraform will ask for confirmation before deleting the resources.

## Example application

The following example application demonstrates how to set up Terraform, connect to a Neon Postgres database, and perform a Terraform run that inserts data. It covers how to:

- Use Go's `os/exec` package to run Terraform commands
- Write a Go test function to validate Terraform execution
- Execute Terraform commands such as `init`, `plan`, and `apply`

- [Neon Postgres with Terraform and Go](https://github.com/mattmajestic/go-terraform): Run Terraform commands and test Terraform configurations with Go

View the **YouTube tutorial**: [Neon Postgres for Terraform with Go](https://www.youtube.com/watch?v=Pw38lgfbX0s).

---
# Source: https://neon.com/llms/reference-typescript-sdk.txt

# TypeScript SDK for the Neon API

> The TypeScript SDK for the Neon API documentation details how to integrate and interact with Neon's API using TypeScript, offering guidance on installation, configuration, and usage of the SDK for efficient database management.
## Source

- [TypeScript SDK for the Neon API HTML](https://neon.com/docs/reference/typescript-sdk): The original HTML version of this documentation

What you will learn:

- What is the Neon TypeScript SDK
- How to get started

Related resources:

- [Neon API Reference](https://neon.com/docs/reference/api-reference)

Source code:

- [@neondatabase/api-client on npm](https://www.npmjs.com/package/@neondatabase/api-client)

## About the SDK

Neon supports the [@neondatabase/api-client](https://www.npmjs.com/package/@neondatabase/api-client) library, which is a wrapper for the [Neon API](https://api-docs.neon.tech/reference/getting-started-with-neon-api). The SDK provides a convenient way to interact with the Neon API using TypeScript. You can use the Neon TypeScript SDK to manage your Neon projects, branches, databases, compute endpoints, roles, and more programmatically. The SDK abstracts the underlying API requests, authentication, and error handling, allowing you to focus on building applications that interact with Neon resources.

The Neon TypeScript SDK allows you to manage:

- [**API Keys:**](https://neon.com/docs/manage/api-keys) Create, list, and revoke API keys for secure access to the Neon API.
- [**Projects:**](https://neon.com/docs/manage/projects) Create, list, update, and delete Neon projects.
- [**Branches:**](https://neon.com/docs/manage/branches) Manage branches, including creation, deletion, restoration, and schema management.
- [**Databases:**](https://neon.com/docs/manage/databases) Create, list, update, and delete databases within your branches.
- [**Compute Endpoints:**](https://neon.com/docs/manage/computes) Manage compute endpoints, including creation, scaling, suspension, and restart.
- [**Roles:**](https://neon.com/docs/manage/roles) Create, list, update, and delete Postgres roles within your branches.
- [**Operations:**](https://neon.com/docs/manage/operations) Monitor and track the status of asynchronous operations performed on your Neon resources.
- [**Organizations:**](https://neon.com/docs/manage/orgs-api) Manage organization settings, API keys, and members (for Neon organizational accounts).
- [**Consumption Metrics:**](https://neon.com/docs/guides/consumption-metrics) Retrieve usage metrics for your account and projects to monitor resource consumption.

**Tip** AI Rules available: Working with AI coding assistants? Check out our [AI rules for the Neon TypeScript SDK](https://neon.com/docs/ai/ai-rules-neon-typescript-sdk) to help your AI assistant generate better code when managing Neon resources programmatically.

## Quick Start

This guide walks you through installing the SDK, setting up authentication, and executing your first API call to retrieve a list of your Neon projects.

### Installation

Install the `@neondatabase/api-client` package into your project using your preferred package manager:

Tab: npm

```bash
npm install @neondatabase/api-client
```

Tab: yarn

```bash
yarn add @neondatabase/api-client
```

Tab: pnpm

```bash
pnpm add @neondatabase/api-client
```

### Authentication Setup

Authentication with the Neon API is handled through API keys. Follow these steps to obtain and configure your API key:

- Log in to the [Neon Console](https://console.neon.tech/)
- Navigate to [Account settings > API keys](https://console.neon.tech/app/settings/api-keys).
- Click Generate new API key.
- Enter a descriptive Name (e.g., "neon-typescript-sdk-demo") for your key and click Create.
For this quick start, we'll set the API key as an environment variable:

```bash
export NEON_API_KEY="YOUR_API_KEY_FROM_NEON_CONSOLE"
```

Replace `YOUR_API_KEY_FROM_NEON_CONSOLE` with the API key you copied from the Neon Console.

## Examples

Let's create a simple TypeScript file to list your Neon projects using the SDK.

### List Projects

Create a new file named `list-projects.ts` in your project directory and add the following code:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function listNeonProjects() {
  try {
    const response = await apiClient.listProjects({});
    console.log(response.data.projects);
  } catch (error) {
    console.error('Error listing projects:', error);
  }
}

listNeonProjects();
```

Execute the TypeScript file using [`tsx`](https://tsx.is) (or compile to JavaScript and run with `node`):

```bash
tsx list-projects.ts
```

If your API key is correctly configured, you should see a list of your Neon projects printed to your console, similar to this:

```json
[
  {
    "id": "wandering-heart-70814840",
    "platform_id": "aws",
    "region_id": "aws-sa-east-1",
    "name": "test-project",
    "provisioner": "k8s-neonvm",
    "default_endpoint_settings": {
      "autoscaling_limit_min_cu": 0.25,
      "autoscaling_limit_max_cu": 0.25,
      "suspend_timeout_seconds": 0
    },
    "settings": {
      "allowed_ips": [Object],
      "enable_logical_replication": false,
      "maintenance_window": [Object],
      "block_public_connections": false,
      "block_vpc_connections": false
    },
    "pg_version": 16,
    "proxy_host": "sa-east-1.aws.neon.tech",
    "branch_logical_size_limit": 512,
    "branch_logical_size_limit_bytes": 536870912,
    "store_passwords": true,
    "active_time": 304,
    "cpu_used_sec": 78,
    "creation_source": "console",
    "created_at": "2025-02-28T07:14:35Z",
    "updated_at": "2025-02-28T07:54:53Z",
    "synthetic_storage_size": 34149464,
    "quota_reset_at": "2025-03-01T00:00:00Z",
    "owner_id": "91cbdacd-06c2-49f5-bacf-78b9463c81ca",
    "compute_last_active_at": "2025-02-28T07:54:49Z"
  },
  ..
]
```

### Create a Project

You can use the SDK to create a new Neon project. Here's an example of how to create a project and retrieve the connection string:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function createNeonProject(projectName: string) {
  try {
    const response = await apiClient.createProject({
      project: {
        name: projectName,
        region_id: 'aws-us-east-1',
        pg_version: 17,
      },
    });
    console.log('Project created:', response.data.project);
    console.log('Project ID:', response.data.project.id);
    console.log('Database connection string:', response.data.connection_uris[0].connection_uri);
  } catch (error) {
    console.error('Error creating project:', error);
    throw error;
  }
}

// Example usage: Create a project named "test-project"
createNeonProject('test-project').catch((error) => {
  console.error('Error creating project:', error.message);
});
```

#### Key points:

- The `region_id` parameter specifies the cloud region where the project will be hosted. You can find the list of supported regions at [Neon Regions](https://neon.com/docs/introduction/regions).
- The `pg_version` parameter specifies the major supported version of Postgres to use in the project.

### Create a Branch

You can use the SDK to create a new branch within a Neon project.
Here's an example of how to create a branch:

```typescript
import { createApiClient, EndpointType } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function createNeonBranch(projectId: string, branchName: string, parentBranchId?: string) {
  try {
    const response = await apiClient.createProjectBranch(projectId, {
      branch: {
        name: branchName,
        parent_id: parentBranchId, // Optional: Specify a source branch. If omitted, the default branch will be used
      },
      endpoints: [
        {
          type: EndpointType.ReadWrite,
          // If you need read-only access, use EndpointType.ReadOnly
          // Optional: Specify the number of compute units (CU) for the endpoint.
          // If omitted, the default value is 0.25 for both min and max.
          // autoscaling_limit_min_cu: 0.25,
          // autoscaling_limit_max_cu: 1,
        },
      ],
    });
    console.log('Branch created:', response.data.branch);
  } catch (error) {
    console.error('Error creating branch:', error);
    throw error;
  }
}

// Example usage: Create a branch named "dev-1" in the project with ID "your-project-id"
createNeonBranch('your-project-id', 'dev-1').catch((error) => {
  console.error('Error creating branch:', error.message);
});
```

#### Key points:

- `parent_id` (optional): Specifies the branch to branch from. If omitted, the project's default branch is used.
- `EndpointType`: Enum to define endpoint type (`ReadWrite` or `ReadOnly`).
- Compute Unit (CU) customization (optional): Control compute size using `autoscaling_limit_min_cu` and `autoscaling_limit_max_cu`. Refer to [Compute size and autoscaling configuration](https://neon.com/docs/manage/computes#compute-size-and-autoscaling-configuration) for available options.

### List Branches

You can use the SDK to list branches within a Neon project. Here's an example of how to list branches:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function listNeonBranches(projectId: string) {
  try {
    const response = await apiClient.listProjectBranches({ projectId });
    console.log('Branches:', response.data.branches);
  } catch (error) {
    console.error('Error listing branches:', error);
    throw error;
  }
}

// Example usage: List branches in the project with ID "your-project-id"
listNeonBranches('your-project-id').catch((error) => {
  console.error('Error listing branches:', error.message);
});
```

#### Key points:

- The `projectId` parameter specifies the ID of the project for which you want to list branches.
- The `listProjectBranches` method returns a list of branches within the specified project. Each branch object contains details like `id`, `name`, `created_at`, and more.

### Create a Database

You can use the SDK to create a new database within a Neon branch.
Here's an example of how to create a database:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function createNeonDatabase(
  projectId: string,
  branchId: string,
  databaseName: string,
  databaseOwner: string
) {
  try {
    const response = await apiClient.createProjectBranchDatabase(projectId, branchId, {
      database: {
        name: databaseName,
        owner_name: databaseOwner,
      },
    });
    console.log('Database created:', response.data.database);
  } catch (error) {
    console.error('Error creating database:', error);
    throw error;
  }
}

// Example usage: In the project with ID "your-project-id", create a database named "mydatabase"
// in the branch with ID "your-branch-id" and owner "neondb_owner"
createNeonDatabase('your-project-id', 'your-branch-id', 'mydatabase', 'neondb_owner').catch(
  (error) => {
    console.error('Error creating database:', error.message);
  }
);
```

#### Key points:

- The `owner_name` parameter specifies the owner of the database. Ensure this role exists in the branch beforehand.
- Branch & Project IDs: You can obtain these IDs from the [Neon Console](https://neon.com/docs/manage/branches#view-branches) or using SDK methods (e.g., [listProjectBranches](https://neon.com/docs/reference/typescript-sdk#list-branches), [listProjects](https://neon.com/docs/reference/typescript-sdk#list-projects)).

### Create a Role

You can use the SDK to create a new Postgres role within a Neon branch. Here's an example of how to create a role:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function createNeonRole(projectId: string, branchId: string, roleName: string) {
  try {
    const response = await apiClient.createProjectBranchRole(projectId, branchId, {
      role: { name: roleName },
    });
    console.log('Role created:', response.data.role);
  } catch (error) {
    console.error('Error creating role:', error);
    throw error;
  }
}

// Example usage: In the project with ID "your-project-id", create a role named "new_user_role"
// in the branch with ID "your-branch-id"
createNeonRole('your-project-id', 'your-branch-id', 'new_user_role').catch((error) => {
  console.error('Error creating role:', error.message);
});
```

#### Key points:

- `role.name`: Specifies the name of the Postgres role to be created.
- Branch & Project IDs: You can obtain these IDs from the [Neon Console](https://neon.com/docs/manage/branches#view-branches) or using SDK methods (e.g., [listProjectBranches](https://neon.com/docs/reference/typescript-sdk#list-branches), [listProjects](https://neon.com/docs/reference/typescript-sdk#list-projects))

## TypeScript Types

The Neon TypeScript SDK provides comprehensive type definitions for all request and response objects, enums, and interfaces. Leveraging these types enhances your development experience by enabling:

- **Type Safety**: TypeScript types ensure that you are using the SDK methods and data structures correctly, catching type-related errors during development rather than at runtime.
- **Improved Code Completion**: Modern IDEs and code editors utilize TypeScript types to provide intelligent code completion and suggestions, making it easier to discover and use SDK features.

### Utilizing SDK Types

The `@neondatabase/api-client` package exports all the TypeScript types you need to interact with the Neon API in a type-safe manner. You can import these types directly into your TypeScript files.
For example, when listing projects, you can use the `ProjectsResponse` type to explicitly define the structure of the API response:

```typescript
import { createApiClient, ProjectsResponse } from '@neondatabase/api-client';
import { AxiosResponse } from 'axios';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function listNeonProjects(): Promise<void> {
  try {
    const response: AxiosResponse<ProjectsResponse> = await apiClient.listProjects({});
    const projects = response.data.projects;
    console.log('Projects:', projects);
  } catch (error) {
    console.error('Error listing projects:', error);
  }
}

listNeonProjects();
```

In this example:

- We import the `ProjectsResponse` type from `@neondatabase/api-client`.
- We explicitly type the `response` variable as `AxiosResponse<ProjectsResponse>`. This tells TypeScript that we expect the `apiClient.listProjects()` method to return a response from Axios, where the `data` property conforms to the structure defined by `ProjectsResponse`.

Similarly, when creating a project, you can use types like `ProjectCreateRequest` for the request body and `ProjectResponse` for the expected response.

By using TypeScript types, you ensure that your code interacts with the Neon API in a predictable and type-safe manner, reducing potential errors and improving code quality. You can explore all available types in the `@neondatabase/api-client` package to fully leverage the benefits of TypeScript in your Neon SDK integrations.

## Key SDK Method Signatures

To give you a better overview of the SDK, here are some of the key methods available, categorized by their resource. For complete details and parameters for each method, please refer to the full [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api).

### Manage API keys

- `listApiKeys()`: Retrieves a list of API keys for your account.
- `createApiKey(data: ApiKeyCreateRequest)`: Creates a new API key.
- `revokeApiKey(keyId: number)`: Revokes an existing API key.

### Manage projects

- `listProjects(query?: ListProjectsParams)`: Retrieves a list of projects in your Neon account.
- `listSharedProjects(query?: ListSharedProjectsParams)`: Retrieves a list of projects shared with your account.
- `createProject(data: ProjectCreateRequest)`: Creates a new Neon project.
- `getProject(projectId: string)`: Retrieves details for a specific project.
- `updateProject(projectId: string, data: ProjectUpdateRequest)`: Updates settings for a specific project.
- `deleteProject(projectId: string)`: Deletes a Neon project.
- `listProjectOperations(projectId: string, query?: ListProjectOperationsParams)`: Retrieves operations for a project.
- `getProjectOperation(projectId: string, operationId: string)`: Retrieves details for a specific operation.
- `getConnectionUri(projectId: string, query: GetConnectionUriParams)`: Retrieves a connection URI for a project.
- `listProjectPermissions(projectId: string)`: Retrieves project access permissions.
- `grantPermissionToProject(projectId: string, data: GrantPermissionToProjectRequest)`: Grants project access to a user.
- `revokePermissionFromProject(projectId: string, permissionId: string)`: Revokes project access from a user.
- `getProjectJwks(projectId: string)`: Retrieves JWKS URLs for a project.
- `addProjectJwks(projectId: string, data: AddProjectJWKSRequest)`: Adds a JWKS URL to a project.
- `deleteProjectJwks(projectId: string, jwksId: string)`: Deletes a JWKS URL from a project.
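As a quick illustration of the project methods above, here is a minimal sketch that fetches a connection URI with `getConnectionUri`. The query field names (`database_name`, `role_name`) and the `uri` response property follow the Neon API's connection URI endpoint; verify them against `GetConnectionUriParams` and the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) before relying on them:

```typescript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({
  apiKey: process.env.NEON_API_KEY!,
});

async function printConnectionUri(projectId: string) {
  try {
    // Query field names mirror the Neon API's connection URI endpoint;
    // check GetConnectionUriParams in your editor for the exact shape.
    const response = await apiClient.getConnectionUri(projectId, {
      database_name: 'neondb',
      role_name: 'neondb_owner',
    });
    console.log('Connection URI:', response.data.uri);
  } catch (error) {
    console.error('Error fetching connection URI:', error);
  }
}

printConnectionUri('your-project-id');
```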
### Manage branches

- `listProjectBranches(projectId: string, query?: ListProjectBranchesParams)`: Retrieves a list of branches within a project.
- `countProjectBranches(projectId: string, query?: CountProjectBranchesParams)`: Retrieves the number of branches in a project.
- `createProjectBranch(projectId: string, data?: BranchCreateRequest)`: Creates a new branch within a project.
- `getProjectBranch(projectId: string, branchId: string)`: Retrieves details for a specific branch.
- `updateProjectBranch(projectId: string, branchId: string, data: BranchUpdateRequest)`: Updates settings for a specific branch.
- `deleteProjectBranch(projectId: string, branchId: string)`: Deletes a branch from a project.
- `restoreProjectBranch(projectId: string, branchId: string, data: BranchRestoreRequest)`: Restores a branch to a point in time.
- `setDefaultProjectBranch(projectId: string, branchId: string)`: Sets a branch as the default for the project.
- `getProjectBranchSchema(projectId: string, branchId: string, query?: GetProjectBranchSchemaParams)`: Retrieves the schema for a branch database.
- `getProjectBranchSchemaComparison(projectId: string, branchId: string, query?: GetProjectBranchSchemaComparisonParams)`: Compares branch schemas.
- `listProjectBranchEndpoints(projectId: string, branchId: string)`: Retrieves endpoints for a branch.
- `listProjectBranchDatabases(projectId: string, branchId: string)`: Retrieves databases for a branch.
- `createProjectBranchDatabase(projectId: string, branchId: string, data: DatabaseCreateRequest)`: Creates a database in a branch.
- `getProjectBranchDatabase(projectId: string, branchId: string, databaseName: string)`: Retrieves details for a branch database.
- `updateProjectBranchDatabase(projectId: string, branchId: string, databaseName: string, data: DatabaseUpdateRequest)`: Updates a branch database.
- `deleteProjectBranchDatabase(projectId: string, branchId: string, databaseName: string)`: Deletes a database from a branch.
- `listProjectBranchRoles(projectId: string, branchId: string)`: Retrieves roles for a branch.
- `createProjectBranchRole(projectId: string, branchId: string, data: RoleCreateRequest)`: Creates a role in a branch.
- `getProjectBranchRole(projectId: string, branchId: string, roleName: string)`: Retrieves details for a branch role.
- `deleteProjectBranchRole(projectId: string, branchId: string, roleName: string)`: Deletes a role from a branch.
- `resetProjectBranchRolePassword(projectId: string, branchId: string, roleName: string)`: Resets a branch role password.

### Manage Compute Endpoints

- `listProjectEndpoints(projectId: string)`: Retrieves a list of endpoints within a project.
- `createProjectEndpoint(projectId: string, data: EndpointCreateRequest)`: Creates a new endpoint within a project.
- `getProjectEndpoint(projectId: string, endpointId: string)`: Retrieves details for a specific endpoint.
- `updateProjectEndpoint(projectId: string, endpointId: string, data: EndpointUpdateRequest)`: Updates settings for a specific endpoint.
- `deleteProjectEndpoint(projectId: string, endpointId: string)`: Deletes an endpoint from a project.
- `startProjectEndpoint(projectId: string, endpointId: string)`: Starts an endpoint.
- `suspendProjectEndpoint(projectId: string, endpointId: string)`: Suspends an endpoint.
- `restartProjectEndpoint(projectId: string, endpointId: string)`: Restarts an endpoint.

### Retrieve Consumption Metrics

- `getConsumptionHistoryPerAccount(query: GetConsumptionHistoryPerAccountParams)`: Retrieves account consumption metrics.
- `getConsumptionHistoryPerProject(query: GetConsumptionHistoryPerProjectParams)`: Retrieves project consumption metrics. ### Manage Organizations - `getOrganization(orgId: string)`: Retrieves organization details. - `getOrganizationMembers(orgId: string)`: Retrieves members of an organization. - `getOrganizationMember(orgId: string, memberId: string)`: Retrieves details for a specific organization member. - `getOrganizationInvitations(orgId: string)`: Retrieves invitations for an organization. - `listOrgApiKeys(orgId: string)`: Lists API keys for an organization. - `createOrgApiKey(orgId: string, data: OrgApiKeyCreateRequest)`: Creates an API key for an organization. - `revokeOrgApiKey(orgId: string, keyId: number)`: Revokes an organization API key. - `createOrganizationInvitations(orgId: string, data: OrganizationInvitesCreateRequest)`: Creates organization invitations. - `updateOrganizationMember(orgId: string, memberId: string, data: OrganizationMemberUpdateRequest)`: Updates an organization member's role. - `removeOrganizationMember(orgId: string, memberId: string)`: Removes a member from an organization. - `transferProjectsFromOrgToOrg(sourceOrgId: string, data: TransferProjectsToOrganizationRequest)`: Transfers projects between organizations. - `listOrganizationVpcEndpoints(orgId: string, regionId: string)`: Lists VPC endpoints for an organization. - `getOrganizationVpcEndpointDetails(orgId: string, regionId: string, vpcEndpointId: string)`: Retrieves VPC endpoint details for an organization. - `assignOrganizationVpcEndpoint(orgId: string, regionId: string, vpcEndpointId: string, data: VPCEndpointAssignment)`: Assigns/updates a VPC endpoint for an organization. - `deleteOrganizationVpcEndpoint(orgId: string, regionId: string, vpcEndpointId: string)`: Deletes a VPC endpoint from an organization. ### Manage Users - `getCurrentUserInfo()`: Retrieves details for the current user. - `getCurrentUserOrganizations()`: Retrieves organizations for the current user. - `transferProjectsFromUserToOrg(data: TransferProjectsToOrganizationRequest)`: Transfers projects from a user to an organization. ### Regions - `getActiveRegions()`: Retrieves a list of active Neon regions. ### Manage Auth Integrations - `createProjectIdentityIntegration(data: IdentityCreateIntegrationRequest)`: Creates Neon Auth integration. - `createProjectIdentityAuthProviderSdkKeys(data: IdentityCreateAuthProviderSDKKeysRequest)`: Creates Auth Provider SDK keys. - `transferProjectIdentityAuthProviderProject(data: IdentityTransferAuthProviderProjectRequest)`: Transfers Neon-managed Auth project ownership. - `listProjectIdentityIntegrations(projectId: string)`: Lists Auth Provider integrations for a project. - `deleteProjectIdentityIntegration(projectId: string, authProvider: IdentitySupportedAuthProvider)`: Deletes an Auth Provider integration. ### General - `getProjectOperation(projectId: string, operationId: string)`: Retrieves details for a specific operation. ## Error Handling When working with APIs, handling errors gracefully is crucial for building robust applications. The Neon TypeScript SDK provides mechanisms to capture and inspect errors that may occur during API requests. ### General Error Structure When an error occurs during an API request, the SDK throws an `AxiosError` object, which extends the standard JavaScript `Error` object. 
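For example, here is a minimal sketch of catching and inspecting such an error. It assumes the SDK's `createApiClient` entry point from the `@neondatabase/api-client` package, a valid `NEON_API_KEY` environment variable, and a placeholder project ID:

```javascript
import { createApiClient } from '@neondatabase/api-client';

const apiClient = createApiClient({ apiKey: process.env.NEON_API_KEY });

try {
  // Any SDK call can throw, e.g. listing the branches in a project
  const { data } = await apiClient.listProjectBranches('my-project-id');
  console.log(data.branches);
} catch (error) {
  if (error.response) {
    // The API responded with an error status code (400, 401, 404, ...)
    console.error('HTTP status:', error.response.status);
    console.error('Error body:', error.response.data);
  } else {
    // The request never received a response (network failure, timeout, ...)
    console.error('Request failed:', error.message);
  }
}
```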
The `AxiosError` object contains additional properties that provide details about the error, including: **`error.response`**: This property (if present) is an Axios response object containing details from the API error response. - **`error.response.status`**: The HTTP status code of the error response (e.g., 400, 401, 404, 500). - **`error.response.data`**: The response body, which, for Neon API errors, often follows a consistent structure, including an `error` object with `code` and `message` properties. ### Common Error Scenarios and Debugging - **Invalid API Key (401 Unauthorized):** Ensure your `NEON_API_KEY` environment variable is correctly set with a valid API key from the Neon Console. - **Project or Branch Not Found (404 Not Found):** Verify that the `projectId` and `branchId` values you are using are correct and that the resources exist in your Neon account. Double-check IDs in the Neon Console. - **Rate Limiting (429 Too Many Requests):** If you are making requests too frequently, the API might rate-limit you. Implement retry mechanisms with exponential backoff or reduce the frequency of your API calls. - **Request Body Validation Errors (400 Bad Request):** If you receive 400 errors, carefully review the request body you are sending, ensuring it conforms to the expected schema for the API endpoint. Refer to the [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api) for request body structures. ## References - [Neon API Reference](https://api-docs.neon.tech/reference/getting-started-with-neon-api): Comprehensive documentation for the Neon API, including detailed descriptions of resources, endpoints, request/response structures, and error codes. --- # Source: https://neon.com/llms/security-acceptable-use-policy.txt # Acceptable Use Policy > The Acceptable Use Policy document outlines the rules and guidelines for using Neon's services, detailing prohibited activities and ensuring compliance with legal and security standards. ## Source - [Acceptable Use Policy HTML](https://neon.com/docs/security/acceptable-use-policy): The original HTML version of this documentation **Last Updated:** 23 January 2024 ## Overview Neon ("Neon," "we," "us," or "our") is committed to providing a secure and productive computing environment. This Acceptable Use Policy ("AUP") outlines the acceptable use of our Platform and Services. By accessing and using Neon's Platform and Services, you agree to comply with this policy. Unless otherwise provided herein, capitalized terms will have the meaning specified in the applicable Terms of Service, Master Service Agreement, or any other agreed terms ("Agreement"). ## Acceptable Use ### General Guidelines - **Lawful Use:** Customers and Authorized Users, hereafter "Users," must use Neon's resources in compliance with all applicable laws and regulations. - **Ethical Use:** Users are expected to act ethically and responsibly, respecting the rights of others and the integrity of Neon's resources. - **Security:** Users must take all reasonable steps to ensure the security of Neon's resources, including but not limited to using strong passwords and promptly reporting any security incidents. ### Prohibited Activities The following activities are strictly prohibited: - **Unauthorized Access:** Users are prohibited from attempting to gain unauthorized access to Neon's serverless Postgres instances, data, or any other resources. 
- **Malicious Activities:** Any activities that could be deemed malicious, including but not limited to hacking, phishing, or deploying malware, are strictly prohibited.
- **Abuse of Resources:** Users should not engage in activities that lead to excessive consumption of Neon's resources, disrupting the service for other users. This includes intentional or unintentional denial-of-service attacks.
- **Data Breach Prevention:** Users are responsible for implementing adequate security measures to prevent data breaches. Any actions compromising the security of data stored in Neon are strictly prohibited. Unauthorized sharing of credentials, including but not limited to usernames and passwords, is strictly forbidden.
- **Unauthorized Modifications:** Unauthorized modifications to Neon's infrastructure, configurations, or any other settings are prohibited. This includes attempts to alter serverless configurations or storage settings without proper authorization.
- **Illegal Content:** Users must not store or transmit any illegal content through Neon. This includes but is not limited to copyrighted material without proper authorization, child pornography, or any content that violates applicable laws.
- **Bulk Email and Spam:** Users are prohibited from using Neon's services for the purpose of sending bulk emails or engaging in spam activities. This includes the use of Neon's resources for email campaigns without proper authorization.
- **Violations of Privacy:** Users must respect the privacy of others and should not engage in activities that violate the privacy of Neon's users or any third parties.
- **Network Interference:** Users are not allowed to interfere with the normal operation of Neon's network infrastructure, including attempting to bypass security measures or manipulating network protocols.
- **Insecure Development Practices:** Users are expected to follow secure development practices when utilizing Neon's services, and any insecure coding practices that could compromise the integrity of the service are prohibited.
- **Creating Multiple Accounts:** Avoid creating multiple accounts, as this can result in an account block due to misuse of free-plan resources.

## Enforcement

Violations of this AUP may result in consequences including, but not limited to, account suspension or termination in accordance with the applicable Agreement, as well as reporting to law enforcement authorities. Neon reserves the right to modify this AUP at any time without notice.

## Reporting Violations

Users who become aware of any violations of this AUP are encouraged to report them to [security@neon.tech](mailto:security@neon.tech).

## Conclusion

You agree to abide by this Acceptable Use Policy by using Neon's resources. Your compliance helps us maintain a secure and productive environment for everyone. Thank you for your cooperation.

---

# Source: https://neon.com/llms/security-ai-use-in-neon.txt

# AI use in Neon

> The document outlines the integration and application of AI technologies within Neon's infrastructure, detailing security measures and protocols to ensure safe and efficient AI utilization.

## Source

- [AI use in Neon HTML](https://neon.com/docs/security/ai-use-in-neon): The original HTML version of this documentation

Neon integrates AI to enhance user experience across different parts of the platform. Below is an overview of where and how AI is used in Neon.

## AI in the Neon SQL Editor

The Neon SQL Editor includes AI-powered features to assist with writing, optimizing, and generating names for SQL queries.
To enable these capabilities, we share your database schema with the AI agent, but **no actual data is shared**. Neon currently uses [Amazon Bedrock](https://aws.amazon.com/bedrock/) as the LLM provider for the Neon SQL Editor. All requests are processed within AWS's secure infrastructure, where other Neon resources are also managed. For more details, see [AI features in the Neon SQL Editor](https://neon.com/docs/get-started/query-with-neon-sql-editor#ai-features). ## AI chat assistance Neon provides AI-powered chat assistance across multiple platforms to help users with documentation, troubleshooting, and best practices. These AI chat assistants are developed by third-party companies under contract with Neon. Neon AI chat assistance is built on publicly available sources, including Neon documentation, public 3rd party vendor documentation, Neon GitHub repositories, the Neon public OpenAPI specification, and other publicly available content. It does not process or incorporate personally identifiable information (PII) or private user data. For details on where to access Neon AI chat assistants, see [Neon AI chat assistance](https://neon.com/docs/introduction/support#neon-ai-chat-assistance). ## Questions about AI use in Neon? If you have questions about Neon's AI integrations, please reach out to [Neon Support](https://console.neon.tech/app/projects?modal=support). --- # Source: https://neon.com/llms/security-compliance.txt # Compliance > The "Compliance" document outlines Neon's adherence to industry standards and regulations, detailing its security certifications and compliance measures to ensure data protection and privacy for its users. ## Source - [Compliance HTML](https://neon.com/docs/security/compliance): The original HTML version of this documentation At Neon, we prioritize data security and privacy, and we have achieved several key compliances that validate our efforts. We have completed audits for SOC 2 Type 1 and Type 2, SOC 3, ISO 27001, and ISO 27701, and we adhere to GDPR and CCPA regulations. ## SOC 2 We have successfully attained SOC 2 Type 1 and Type 2 compliance. These compliances, validated by independent auditors, confirm that our systems adhere to the American Institute of Certified Public Accountants (AICPA) trust service criteria for security, availability, processing integrity, confidentiality, and privacy. ## SOC 3 The SOC 3 report is a public-facing version of the SOC 2 report, providing assurance to external parties about our system's ability to meet the trust service criteria without disclosing sensitive details. If available on your plan, you can request the report through our [Trust Center](https://trust.neon.com/). ## ISO 27001 ISO 27001 is an internationally recognized standard for information security management systems (ISMS). Our compliance with this standard demonstrates that we follow a systematic and risk-based approach to managing sensitive information, ensuring its security. ## ISO 27701 ISO 27701 extends ISO 27001 to include data privacy requirements, helping organizations establish, implement, and maintain a privacy information management system (PIMS) in accordance with GDPR and other privacy laws. ## GDPR The General Data Protection Regulation (GDPR) is the European Union's regulation designed to protect individuals' privacy and personal data. Neon adheres to GDPR requirements, ensuring the rights and data privacy of our users across the EU. 
## CCPA

The California Consumer Privacy Act (CCPA) grants California residents new rights regarding their personal data. Neon is committed to complying with CCPA, ensuring transparency and control for users over their personal information.

## HIPAA

Neon offers HIPAA compliance as part of our Scale plan, enabling applications that handle Protected Health Information (PHI) to meet compliance requirements. For more information and to get started, refer to our [Neon HIPAA Compliance Guide](https://neon.com/docs/security/hipaa). A copy of Neon's HIPAA compliance report can be requested through our [Trust Center](https://trust.neon.com/).

## Questions?

To learn more about how we protect your data and uphold the highest standards of security and privacy, please visit our [Trust Center](https://trust.neon.com/), where you can also request and download audit reports.

- For security inquiries, contact us at [security@neon.tech](mailto:security@neon.tech).
- For privacy-related questions, reach out to [privacy@neon.tech](mailto:privacy@neon.tech).
- For sales information, please [contact our sales team](https://neon.com/contact-sales).

---

# Source: https://neon.com/llms/security-hipaa.txt

# HIPAA Compliance

> The document outlines Neon's adherence to HIPAA regulations, detailing security measures and protocols to ensure the protection of health information within its database services.

## Source

- [HIPAA Compliance HTML](https://neon.com/docs/security/hipaa): The original HTML version of this documentation

Neon offers HIPAA compliance as a self-serve feature available to customers on the [Scale](https://neon.com/docs/introduction/plans) plan.

**Note**: HIPAA support is currently available at no additional cost. Once billing is finalized, HIPAA support will add a 15% surcharge to your monthly invoice. We'll notify you in advance before this change takes effect.

We take the security and privacy of health information seriously. This guide explains how Neon supports HIPAA compliance and what it means for you as a customer. HIPAA features are available to customers who have accepted our Business Associate Agreement (BAA) through the self-serve enablement process. The BAA outlines our responsibilities for protecting Protected Health Information (PHI) and ensuring HIPAA compliance.

## What is HIPAA?

HIPAA is a federal law that sets national standards for the protection of health information. It requires businesses handling PHI to implement safeguards to ensure privacy and security.

## Enable HIPAA

HIPAA compliance is available as a self-serve feature on supported plans. To enable HIPAA support, follow these steps:

1. **Enable HIPAA for your Organization**: First, you must enable HIPAA compliance at the organization level and accept the Business Associate Agreement (BAA).
2. **Enable HIPAA for your projects**: After HIPAA is enabled for your organization, you can create HIPAA-compliant projects or enable HIPAA for existing projects.

### Step 1: Enable HIPAA for your Organization

To enable HIPAA compliance for your organization:

1. In the Neon Console, navigate to your **Organization settings**.
2. Locate the **HIPAA support** section.
3. Enable HIPAA compliance for your organization.
4. Read and accept the Business Associate Agreement (BAA).

Once HIPAA is enabled for your organization, you can proceed to enable HIPAA compliance for your projects.

### Step 2: Enable HIPAA for your projects

**Important**: Once HIPAA compliance is enabled on a project, it cannot be disabled.
Enabling HIPAA will also restart all computes, temporarily interrupting database connections.

**Note**: HIPAA is not yet supported for Postgres 18. You cannot create a Postgres 18 project in a HIPAA-enabled Neon organization.

Tab: New project

For Neon project creation steps, see [Create a project](https://neon.com/docs/manage/projects#create-a-project). When you create a project, select the **Enable HIPAA compliance for this project** checkbox on the **Create Project** form. This option is available after HIPAA has been enabled for your organization.

Tab: Existing project

To enable HIPAA compliance for an existing Neon project:

1. In the Neon Console, navigate to your project's **Settings** page.
2. Locate the **HIPAA support** section.
3. Click **Enable**.

Tab: API

To create a new HIPAA-compliant Neon project via the Neon API, set `hipaa` to `true` in the project `settings` object, as shown below.

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" \
     --header 'content-type: application/json' \
     --data '
{
  "project": {
    "settings": {
      "hipaa": true
    },
    "pg_version": 17
  }
}
'
```

To enable HIPAA for an existing project, set `hipaa` to `true` in the project `settings` object using the [Update project API](https://api-docs.neon.tech/reference/updateproject):

```bash
curl --request PATCH \
     --url https://console.neon.tech/api/v2/projects/YOUR_PROJECT_ID \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" \
     --header 'content-type: application/json' \
     --data '
{
  "project": {
    "settings": {
      "hipaa": true
    }
  }
}
'
```

**Important**: Enabling HIPAA on an existing project will force a restart of all computes to apply the new setting. This will temporarily interrupt database connections.

Tab: CLI

To create a new HIPAA-compliant Neon project via the [Neon CLI](https://neon.com/docs/reference/neon-cli), use the `--hipaa` option with the `neon projects create` command, as shown below.

```bash
neon projects create --hipaa
```

To enable HIPAA for an existing project, use the `--hipaa` option with the `neon projects update` command, as shown below:

```bash
neon projects update my-project --hipaa
```

**Important**: Enabling HIPAA on an existing project will force a restart of all computes to apply the new setting. This will temporarily interrupt database connections. If you have trouble enabling HIPAA, contact `hipaa@neon.tech`.

**Note**: For information about disabling HIPAA compliance, see [Disabling HIPAA](https://neon.com/docs/security/hipaa#disabling-hipaa).

## Key HIPAA terms

- Protected Health Information (PHI): Any identifiable health-related data.
- Covered Entity: Healthcare providers, plans, or clearinghouses that handle PHI.
- Business Associate: A service provider (like Neon) that handles PHI on behalf of a Covered Entity.
- Breach: Unauthorized access, use, or disclosure of PHI.
- Security Rule: Safeguards to protect electronic PHI.
- Privacy Rule: Rules governing how PHI is used and disclosed.

## How Neon protects your data

1. Use and disclosure of PHI
   - We only use PHI to provide our agreed-upon services and to meet legal obligations.
   - PHI is disclosed only as required by law or with proper authorization.
2. Safeguards
   - Administrative: Policies and training to ensure compliance.
   - Physical: Secure access controls to data storage areas.
   - Technical: Encryption and access controls for electronic PHI.
3. Incident reporting
   - We promptly report any unauthorized use or disclosure of PHI.
   - Breach notifications are provided within 30 days as per HIPAA requirements.
4. Subcontractors and agents
   - Any third parties we work with are required to adhere to the same data protection standards.
   - We provide transparency by listing our subcontractors at [https://neon.com/hipaa-contractors](https://neon.com/hipaa-contractors) and notifying customers of any changes if you sign up to notifications [here](https://share-eu1.hsforms.com/1XjUD9QeKQw-RSAgQ...).
5. Customer responsibilities
   - Customers must ensure that PHI is only stored in data rows as intended for sensitive data and should never be included in metadata, column names, table names, schema descriptions, or system-generated logs such as audit trails, query logs, or error logs.
   - Customers have the responsibility to configure a session timeout.
   - Customers need to avoid including PHI in support tickets or metadata fields.
6. PHI access and amendments
   - Customers can request access to audit logs by contacting `hipaa@neon.tech`.
   - Any updates or corrections to PHI need to be carried out by the customer.

## Your rights and what to expect

- Transparency: You can request details about how your PHI is being used.
- Security: Our technical safeguards are designed to prevent unauthorized access.
- Data Control: You retain ownership of your data; we are custodians ensuring its protection.

## Availability of audit events

Audit events may not be logged if database endpoints experience exceptionally heavy load, as we prioritize database availability over capturing log events.

## Logged events

Neon maintains a comprehensive audit trail to support HIPAA compliance. This includes the following categories of logged events:

1. [Neon Console and API audit logs](https://neon.com/docs/security/hipaa#neon-console-and-api-audit-logs): Captures user actions in the Neon Console and via the Neon API.
2. [Postgres audit logs](https://neon.com/docs/security/hipaa#postgres-audit-logs-pgaudit): Logged using the [pgAudit](https://www.pgaudit.org/) extension (`pgaudit`) for Postgres.

> Self-serve access to HIPAA audit logs is currently not supported. Access to audit logs can be requested by contacting `hipaa@neon.tech`.

### Neon Console and API audit logs

Neon logs operations performed via the Neon Console interface and the Neon API. Examples of logged operations may include these, among other operations:

- **Project management**: creating, deleting, listing projects
- **Branch management**: creating, deleting, listing branches
- **Compute management**: starting and stopping of compute instances
- **Database and role management**: creating or deleting databases and roles

To protect sensitive information, Neon filters data in audit logs using the following approach:

- Sensitive fields (such as `connection_uri` and `password`) are excluded from logs wherever possible.
- `GET` requests: Only query parameters are logged; response payloads are not recorded.
- Mutation requests (`PATCH`, `PUT`, `POST`, `DELETE`): Request and response bodies are logged with sensitive fields redacted.

#### Neon Console and API audit log example

The following example shows how a `List project branches` operation is captured in Neon's audit logs. The table provides a description of the log record's parts.
**Audit log record:**

```ini
fb7c2e2f-cb09-4405-b543-dbe1b88614b6 2025-05-25 10:18:45.340 +0000 `{ "changes": [], "sync_id": 57949 }` e640c32c-0387-4fc2-8ca5-f823f7ebc4b6 GET `{}` /projects/misty-breeze-49601234/branches a92b3088-7f92-4871-bf91-0aac64edc4b6 b8c58a4b-0a33-4d54-987e-4155e95a64b6 2025-05-24 15:42:39.088 +0000 misty-breeze-49601234 keycloak 200 `{}` ListProjectBranches 0
```

**Field descriptions:**

| **Field position** | **Example value** | **Description** |
| ------------------ | ----------------- | --------------- |
| 1 | fb7c2e2f-cb09-4405-b543-dbe1b88614b6 | Unique ID for the raw log event |
| 2 | 2025-05-25 10:18:45.340 +0000 | Timestamp when Airbyte extracted the record |
| 3 | `{ "changes": [], "sync_id": 57949 }` | Metadata from the ingestion tool |
| 4 | e640c32c-0387-4fc2-8ca5-f823f7ebc4b6 | Unique identifier for the API event |
| 5 | GET | HTTP method used in the request |
| 6 | `{}` | Request body payload (if present) |
| 7 | | Reserved for future metadata fields (empty in this case) |
| 8 | /projects/misty-breeze-49601234/branches | URL path of the API call |
| 9 | a92b3088-7f92-4871-bf91-0aac64edc4b6 | Internal ID for the response object |
| 10 | b8c58a4b-0a33-4d54-987e-4155e95a64b6 | Internal ID representing the auth/session context |
| 11 | 2025-05-24 15:42:39.088 +0000 | Actual time when the API call was made |
| 12 | misty-breeze-49601234 | Project identifier targeted by the API call |
| 13 | keycloak | Authentication mechanism used |
| 14 | 200 | HTTP status code of the response |
| 15 | `{}` | Resource identifiers returned (if any) |
| 16 | ListProjectBranches | Operation name associated with the endpoint |
| 17 | 0 | Internal sync batch identifier |

### Postgres audit logs (pgAudit)

When HIPAA audit logging is enabled for a Neon project, Neon configures pgAudit with the following settings by default:

| Setting | Value | Description |
| ------- | ----- | ----------- |
| `pgaudit.log` | `all, -misc` | Logs all classes of SQL statements except low-risk miscellaneous commands. |
| `pgaudit.log_parameter` | `off` | Parameters passed to SQL statements are not logged to avoid capturing sensitive values. |
| `pgaudit.log_catalog` | `off` | Queries on system catalog tables (e.g., `pg_catalog`) are excluded from logs to reduce noise. |
| `pgaudit.log_statement` | `on` | The full SQL statement text is included in the log. |
| `pgaudit.log_relation` | `off` | Only a single log entry is generated per statement, not per table or view. |
| `pgaudit.log_statement_once` | `off` | SQL statements are logged with every entry, not just once per session. |

#### What does `pgaudit.log = 'all, -misc'` include?

This configuration enables logging for all major classes of SQL activity while excluding less relevant statements in the `misc` category. Specifically, it includes:

- **READ**: `SELECT` statements and `COPY` commands that read from tables or views.
- **WRITE**: `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, and `COPY` commands that write to tables.
- **FUNCTION**: Function calls and `DO` blocks.
- **ROLE**: Role and permission changes, including `GRANT`, `REVOKE`, `CREATE ROLE`, `ALTER ROLE`, and `DROP ROLE`.
- **DDL**: Schema and object changes like `CREATE TABLE`, `ALTER INDEX`, `DROP VIEW` — all DDL operations not included in the `ROLE` class.
- **MISC_SET**: Miscellaneous `SET` commands, e.g. `SET ROLE`.
Excluded:

- **MISC**: Low-impact commands such as `DISCARD`, `FETCH`, `CHECKPOINT`, `VACUUM`, and `SET`.

**Note**: In some cases, audit logs may include SQL statements that contain plain-text passwords—for example, in a `CREATE ROLE ... LOGIN PASSWORD` command. This is due to limitations in the Postgres `pgaudit` extension, which may log full statements without redacting sensitive values. This behavior is a known issue. We recommend avoiding the inclusion of raw credentials in SQL statements where possible.

For more details, see the [pgAudit documentation](https://github.com/pgaudit/pgaudit).

#### Audit log storage and forwarding

- Logs are written using the standard [PostgreSQL logging facility](https://www.postgresql.org/docs/current/runtime-config-logging.html).
- Logs are sent to a dedicated Neon audit collector endpoint and securely stored.
- Each log entry includes metadata such as the timestamp of the activity, the Neon compute ID (`endpoint_id`), the Neon project ID (`project_id`), the Postgres role, the database accessed, and the method of access (e.g., `neon-internal-sql-editor`). See the following log record example and field descriptions:

#### Postgres audit log example

The following example shows how a simple SQL command—`CREATE SCHEMA IF NOT EXISTS healthcare`—is captured in Neon's audit logs. The table provides a description of the log record's parts.

**Query:**

`CREATE SCHEMA IF NOT EXISTS healthcare;`

**Audit log record:**

```ini
2025-05-05 20:23:01.277 <134>May 6 00:23:01 vm-compute-shy-waterfall-w2cn1o3t-b6vmn young-recipe-29421150/ep-calm-da 2025-05-06 00:23:01.277 GMT,neondb_owner,neondb,1405,10.6.42.155:13702,68195665.57d,1,CREATE SCHEMA, 2025-05-06 00:23:01 GMT,16/2,767,00000,SESSION,1,1,DDL,CREATE SCHEMA,,,CREATE SCHEMA IF NOT EXISTS healthcare,,,,,,,,,,neon-internal-sql-editor
```

**Field descriptions:**

| **Field position** | **Example value** | **Description** |
| ------------------ | ----------------- | --------------- |
| 1 | 2025-05-05 20:23:01.277 | Timestamp when the log was received by the logging system. |
| 2 | `<134>` | Syslog priority code (facility + severity). |
| 3 | May 6 00:23:01 | Syslog timestamp (when the message was generated on the source host). |
| 4 | vm-compute-shy-waterfall-w2cn1o3t-b6vmn | Hostname or compute instance where the event occurred. |
| 5 | young-recipe-29421150/ep-calm-da | Project and endpoint name in the format `project/endpoint`. |
| 6 | 2025-05-06 00:23:01.277 GMT | Timestamp of the database event in UTC. |
| 7 | neondb_owner | Database role (user) that executed the statement. |
| 8 | neondb | Database name. |
| 9 | 1405 | Process ID (PID) of the PostgreSQL backend. |
| 10 | 10.6.42.155:13702 | Client IP address and port that issued the query. |
| 11 | 68195665.57d | PostgreSQL virtual transaction ID. |
| 12 | 1 | Backend process number. |
| 13 | CREATE SCHEMA | Command tag. |
| 14 | 2025-05-06 00:23:01 GMT | Statement start timestamp. |
| 15 | 16/2 | Log sequence number (LSN). |
| 16 | 767 | Statement duration in milliseconds. |
| 17 | 00000 | SQLSTATE error code (00000 = success). |
| 18 | SESSION | Log message level. |
| 19 | 1 | Session ID. |
| 20 | 1 | Subsession or transaction ID. |
| 21 | DDL | Statement type: Data Definition Language. |
| 22 | CREATE SCHEMA | Statement tag/type. |
| 23–26 | _(empty)_ | Reserved/unused fields. |
| 27 | CREATE SCHEMA IF NOT EXISTS healthcare | Full SQL text of the statement. |
| 28 | `` | Parameter values (redacted or disabled by settings like `pgaudit.log_parameter`). |
| 29–35 | _(empty)_ | Reserved/unused fields. |
| 36 | neon-internal-sql-editor | Application name or source of the query (e.g., SQL Editor in the Neon Console). |

#### Extension configuration

The `pgaudit` extension is preloaded on HIPAA-enabled Neon projects. For extension version information, see [Supported Postgres extensions](https://neon.com/docs/extensions/pg-extensions).

## Non-HIPAA-compliant features

The following features are not currently HIPAA-compliant and should not be used in projects containing HIPAA-protected data:

- [Neon Auth](https://neon.com/docs/neon-auth/overview) – Uses an authentication provider that is not covered under Neon's HIPAA compliance.
- [Data API](https://neon.com/docs/data-api/get-started) – Hosted outside Neon's HIPAA-compliant infrastructure.

For updates on HIPAA support for these features, contact [hipaa@neon.tech](mailto:hipaa@neon.tech).

## Security incidents

If a security breach occurs, Neon will:

1. Notify you within five business days of becoming aware of the incident.
2. Provide detailed information about the breach.
3. Take corrective actions to prevent future occurrences.

## Disabling HIPAA

Once HIPAA compliance is enabled for a Neon project, it cannot be disabled. If you want to disable HIPAA for your Neon organization entirely, you need to [submit a support request](https://console.neon.tech/app/projects?modal=support). This can only be done after all HIPAA-enabled projects have been deleted. To delete a HIPAA-compliant project, submit a [support request](https://console.neon.tech/app/projects?modal=support). Before deleting a HIPAA project, make sure to export any audit logs or data you may need. Neon retains audit logs for the duration specified in your Business Associate Agreement (BAA).

## Frequently Asked Questions

**Q: Can I request Neon to delete my PHI?**

A: Yes, upon termination of services, we will securely delete or return your PHI.

**Q: How does Neon ensure compliance with HIPAA?**

A: We conduct regular internal audits and provide training to our employees to ensure adherence to HIPAA requirements.

**Q: What should I do if I suspect a data breach?**

A: Contact our security team immediately at security@neon.tech.

## Contact information

For any questions regarding our HIPAA compliance or to report an issue, please reach out to `hipaa@neon.tech`.

_This guide provides a high-level overview of Neon's HIPAA compliance efforts. For more details, please refer to your Business Associate Agreement (BAA) or contact us directly via our [support channels](https://neon.com/docs/introduction/support)._

---

# Source: https://neon.com/llms/security-security-overview.txt

# Security overview

> The "Security overview" document outlines Neon's security measures, including data encryption, access controls, and compliance standards, ensuring the protection and integrity of user data within the Neon database platform.

## Source

- [Security overview HTML](https://neon.com/docs/security/security-overview): The original HTML version of this documentation

At Neon, security is our highest priority. We are committed to implementing best practices and earning the trust of our users. A key aspect of earning this trust is ensuring that every touchpoint in our system, from connections to data storage to our internal processes, adheres to the highest security standards.
## Secure connections

Neon supports a variety of protections related to database connections:

- **SSL/TLS encryption** — Neon requires that all connections use SSL/TLS encryption to ensure that data sent over the Internet cannot be viewed or manipulated by third parties. Neon supports the `verify-full` SSL mode for client connections, which is the strictest SSL mode provided by PostgreSQL. When set to `verify-full`, a PostgreSQL client verifies that the server's certificate is issued by a trusted certificate authority (CA), and that the server host name matches the name stored in the certificate. This helps prevent man-in-the-middle attacks (see the connection example after this list). For information about configuring `verify-full` SSL mode for your connections, see [Connect securely](https://neon.com/docs/connect/connect-securely).
- **Secure password enforcement** — Neon requires a 60-bit entropy password for all Postgres roles. This degree of entropy ensures that passwords have a high level of randomness. Assuming a perfect distribution of choices for every bit of entropy, a password with 60 bits of entropy has 2^60 (or about 1.15 quintillion) possible combinations, which makes it computationally infeasible for attackers to guess the password through brute-force methods. For Postgres roles created via the Neon Console, API, and CLI, passwords are generated with 60-bit entropy. For Postgres roles created via SQL, user-defined passwords are validated at creation time to ensure 60-bit entropy.
- **The Neon Proxy** — Neon places a proxy in front of your database, which helps safeguard it from unauthorized login attempts. For example, in Postgres, each login attempt spawns a new process, which can pose a security risk. The [Neon Proxy](https://neon.com/docs/reference/glossary#neon-proxy) mitigates this by monitoring connection attempts and preventing misuse. The Neon Proxy also allows us to authenticate connections before they ever reach your Postgres database.
- **IP Allow** — For additional connection security, the Neon Scale plan offers [IP allowlist support](https://neon.com/docs/security/security-overview#ip-allowlist-support), which lets you limit access to trusted IPs.
- **Private Networking** — This feature enables connections to your Neon databases via AWS PrivateLink, bypassing the open internet entirely. See [Private Networking](https://neon.com/docs/guides/neon-private-networking).
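To make the `verify-full` bullet concrete, here is a minimal sketch of a connection string passed to `psql`. The hostname, credentials, and database name are placeholders, and `sslrootcert=system` assumes a client built against libpq 16 or newer; older clients can instead point `sslrootcert` at a downloaded root certificate, as described in [Connect securely](https://neon.com/docs/connect/connect-securely):

```bash
# Verify both the certificate chain and that the host name matches the certificate
psql "postgresql://alex:password@ep-example-123456.us-east-2.aws.neon.tech/neondb?sslmode=verify-full&sslrootcert=system"
```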
## IP allowlist support

Neon's [IP Allow](https://neon.com/docs/introduction/ip-allow) feature ensures that only trusted IP addresses can connect to the project where your database resides, preventing unauthorized access and helping maintain overall data security. You can limit access to individual IP addresses, IP ranges, or IP addresses and ranges defined with [CIDR notation](https://neon.com/docs/reference/glossary#cidr-notation). To learn more, see [Configure IP Allow](https://neon.com/docs/manage/projects#configure-ip-allow).

## Protected branches

You can designate any branch as a "protected branch", which implements a series of protections:

- Protected branches cannot be deleted.
- Protected branches cannot be [reset](https://neon.com/docs/manage/branches#reset-a-branch-from-parent).
- Projects with protected branches cannot be deleted.
- Computes associated with a protected branch cannot be deleted.
- New passwords are automatically generated for Postgres roles on branches created from protected branches.
- With additional configuration steps, you can apply IP Allow restrictions to protected branches only.
- Protected branches are not [archived](https://neon.com/docs/guides/branch-archiving) due to inactivity.

The protected branches feature is available on all Neon paid plans. Typically, protected branch status is given to a branch or branches that hold production data or sensitive data. For information about how to configure a protected branch, refer to our [Protected branches guide](https://neon.com/docs/guides/protected-branches).

## Private Networking

The [Neon Private Networking](https://neon.com/docs/guides/neon-private-networking) feature enables secure connections to your Neon databases via [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html), bypassing the open internet for enhanced security. This feature is available to Neon [Organization](https://neon.com/docs/manage/organizations) accounts. It's not accessible to Personal Neon accounts.

## Data-at-rest encryption

Data-at-rest encryption is a method of storing inactive data that converts plaintext data into a coded form or cipher text, making it unreadable without an encryption key. Neon stores inactive data in [NVMe SSD volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html#nvme-ssd-volumes). The data on NVMe instance storage is encrypted using an `AES-256` block cipher implemented in a hardware module on the instance.

## Secure data centers

Neon's infrastructure is hosted and managed within either Amazon's or Azure's secure data centers, depending on the cloud service provider you select when setting up your project.

Amazon's secure data centers are backed by [AWS Cloud Security](https://aws.amazon.com/security/). Amazon continually manages risk and undergoes recurring assessments to ensure compliance with industry standards. For information about AWS data center compliance programs, refer to [AWS Compliance Programs](https://aws.amazon.com/compliance/programs/).

The Microsoft cloud data centers that power Azure focus on high reliability, operational excellence, cost-effectiveness, and a trustworthy online experience for Microsoft customers and partners worldwide. Microsoft regularly tests data center security through both internal and third-party audits. To learn more, refer to [Microsoft's Datacenter security overview](https://learn.microsoft.com/en-us/compliance/assurance/assurance-datacenter-security).

## Compliance-relevant security measures

At Neon, we implement robust technical controls to secure customer and sensitive data in alignment with SOC 2, ISO 27001, and ISO 27701 standards and GDPR and CCPA regulations. To learn more about these standards and regulations, see [Compliance](https://neon.com/docs/security/compliance). All systems are hosted on AWS and Azure, where we have implemented specific security measures to protect data. Below is a detailed breakdown of these compliance-relevant security measures for access control, encryption, network security, event logging, vulnerability management, backups, and data deletion and retention:

- **Customer and Sensitive Data Encryption (AWS KMS and Azure Key Vault)** All customer and sensitive data is encrypted using AES-256 encryption at rest. For data in transit, encryption is enforced using TLS 1.2/1.3 protocols across various services. Encryption keys are managed using AWS Key Management Service (KMS) and Azure Key Vault with key rotation policies in place.
Only services and users with specific IAM roles can access the decryption keys, and all access is logged via AWS CloudTrail and Azure Monitor for auditing and compliance purposes.
- **Fine-Grained Access Control via IAM** Access to PII and customer or sensitive data is controlled through AWS Identity and Access Management (IAM) policies and Microsoft Entra ID permissions. Broad access is limited to the infrastructure and security teams, while other roles operate under least-privilege principles. When additional access is needed, access requests to production systems are received via Teleport, where all sessions are recorded. Only managers and on-call personnel are permitted to access production or approve production access requests. Additionally, all infrastructure is managed through Terraform, ensuring that any changes to access controls or resources are fully auditable and version-controlled. Regular access reviews and audits are conducted to verify that access rights remain aligned with security best practices.
- **Data Segmentation and Isolation Using VPCs and Security Groups** In our AWS and Azure environments, workloads are segmented using Virtual Private Clouds (VPCs) and Azure Virtual Networks (VNets) to separate sensitive data environments from other systems. We control network access between services by using security groups, Network Access Control Lists (NACLs), and Azure Network Security Groups (NSGs), restricting access to only the necessary traffic. This ensures a clear separation of environments, minimizing the risk of unauthorized access or lateral movement between services.
- **Event-Based Data Anomaly Detection via AWS GuardDuty and Logz.io** Customer data access attempts and other anomalies are continuously monitored via Logz.io integration on both infrastructures. All alerts are ingested into our Logz.io SIEM for centralized logging, analysis, and correlation with other security data. This allows our Security Operations Center (SOC) to quickly detect, investigate, and respond to potential security threats.
- **Data Access Logging and Auditing (AWS CloudTrail & Logz.io)** All data access actions, including those involving sensitive operations, are logged using AWS CloudTrail and Azure Monitor, and forwarded to Logz.io for centralized logging and analysis. This provides full traceability of who accessed which resources, when, and from where. Logs are secured and retained for audit purposes, while any anomalies or suspicious activity trigger real-time alerts through our Security Operations Center (SOC) for immediate investigation and response.
- **PII Backup, Retention, and Deletion Policies with S3 Versioning** Customer data backups are stored in cloud object storage, such as Amazon S3 and Azure Blob Storage, with versioning enabled, allowing recovery from accidental deletions or modifications. Data is encrypted using server-side encryption (SSE) and is retained for 30 days. Data deletion procedures are followed to ensure compliance with SOC 2, ISO, GDPR, and CCPA requirements, including data subject requests.
- **Vulnerability Management with Orca and Oligo** Our vulnerability management program, integrated with Orca and Oligo, continuously scans all AWS and Azure environments for security issues, including misconfigurations, unpatched software, and exposed credentials. We leverage tagging to classify certain data types, enabling focused monitoring and scanning based on the sensitivity of the data.
Automated alerts allow us to address vulnerabilities before they pose a risk to PII or other sensitive information. The vulnerabilities are remediated according to defined SLAs to reduce risk.
- **Annual Audits and Continuous Penetration Testing** We undergo annual audits for SOC 2 and ISO by two independent firms to verify the integrity and security of our systems. In addition, bi-annual penetration tests with HackerOne are performed, with results feeding into our vulnerability management program. The vulnerabilities are remediated according to defined SLAs to reduce risk.

To learn more about how we protect your data and uphold the highest standards of security and privacy, please visit our [Trust Center](https://trust.neon.com/).

## GitHub secret scanning

Neon is a [GitHub Secret Scanning Partner](https://docs.github.com/en/code-security/secret-scanning/secret-scanning-partner-program). If a Neon database credential or API key is detected in a GitHub repository, GitHub alerts Neon through an automated system. This system validates the credential and notifies our security team. By integrating with GitHub Secret Scanning, Neon helps users quickly identify and mitigate exposed credentials, reducing the risk of unauthorized access.

To avoid leaking secrets, follow these security best practices:

- Use environment variables instead of hardcoding credentials.
- Store sensitive information in secret management tools like AWS Secrets Manager or HashiCorp Vault.
- Regularly rotate database credentials and API keys.

If you have questions about this integration or need help securing your credentials, contact us at `security@neon.tech`.

## Security reporting

Neon adheres to the [securitytxt.org](https://securitytxt.org/) standard for transparent and efficient security reporting. For details on how to report potential vulnerabilities, please visit our [Security reporting](https://neon.com/docs/security/security-reporting) page or refer to our [security.txt](https://neon.com/security.txt) file. Neon also has a [private bug bounty program with HackerOne](https://neon.com/docs/security/security-reporting#bug-bounty-program-with-hackerone).

## Questions about our security measures?

If you have any questions about our security protocols or would like a deeper dive into any aspect, our team is here to help. You can reach us at [security@neon.tech](mailto:security@neon.tech).

---

# Source: https://neon.com/llms/security-security-reporting.txt

# Security reporting

> The "Security Reporting" document outlines the procedures for reporting security vulnerabilities in Neon, detailing the contact methods and response expectations for users to ensure prompt and effective handling of security issues.

## Source

- [Security reporting HTML](https://neon.com/docs/security/security-reporting): The original HTML version of this documentation

We have established the following security reporting procedure to address security issues quickly.

**Important**: If you have a security concern or believe you have found a vulnerability in any part of our infrastructure, please contact us at [security@neon.tech](mailto:security@neon.tech). If you need to share sensitive information, we can provide you with a security contact number through [Signal](https://signal.org/).

## Our commitment to solving security issues

- We will respond to your report within three business days with an evaluation and expected resolution date.
- We will handle your report with strict confidentiality and not share any personal details with third parties without your permission. - We will keep you informed of the progress towards resolving the problem. - After the report has been resolved, we will credit the finding to you in our public `security.txt` document, unless you prefer to stay anonymous. - If we need to access proprietary information or personal data stored in Neon to investigate or respond to a security report, we shall act in good faith and in compliance with applicable confidentiality, personal data protection, and other obligations. We strive to resolve all problems quickly and publicize any discoveries after their resolution. ## Bug bounty program with HackerOne Neon offers a public bug bounty program. If you discover a vulnerability, report it through our [bug bounty program](https://hackerone.com/neon_bbp). ## How to disclose vulnerabilities Neon pays close attention to the proper security of its information and communication systems. Despite these efforts, it is not possible to entirely exclude the existence of security vulnerabilities. If you identify a security vulnerability, please proceed as follows under the principle of responsible disclosure: - Report the security vulnerability to Neon by contacting us at [security@neon.tech](mailto:security@neon.tech). Provide as much information about the security vulnerability as possible. - Do not exploit the security vulnerability; for example, by using it to breach data, change the data of third parties, or deliberately disrupt the availability of the service. - All activities relating to the discovery of the security vulnerability should be performed within the framework of the law. - Do not inform any third parties about the security vulnerability. All communication regarding the security vulnerability will be coordinated by Neon and our partners. - If the above conditions are respected, Neon will not take any legal steps against the party that reported the security vulnerability. - In the event of a non-anonymous report, Neon will inform the party that submitted the report of the steps it intends to take and the progress toward closing the security vulnerability. --- # Source: https://neon.com/llms/serverless-serverless-driver.txt # Neon serverless driver > The Neon serverless driver documentation details the installation, configuration, and usage of the serverless driver for connecting applications to Neon databases, enabling efficient and scalable database interactions in a serverless environment. ## Source - [Neon serverless driver HTML](https://neon.com/docs/serverless/serverless-driver): The original HTML version of this documentation The [Neon serverless driver](https://github.com/neondatabase/serverless) is a low-latency Postgres driver for JavaScript and TypeScript that allows you to query data from serverless and edge environments over **HTTP** or **WebSockets** in place of TCP. The driver's low-latency capability is due to [message pipelining and other optimizations](https://neon.com/blog/quicker-serverless-postgres). **Important** The Neon serverless driver is now generally available (GA): The GA version of the Neon serverless driver, v1.0.0 and higher, requires Node.js version 19 or higher. It also includes a **breaking change** but only if you're calling the HTTP query template function as a conventional function. 
For details, please see the [1.0.0 release notes](https://github.com/neondatabase/serverless/pull/149) or read the [blog post](https://neon.com/blog/serverless-driver-ga).

When to query over HTTP vs WebSockets:

- **HTTP**: Querying over an HTTP [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) request is faster for single, non-interactive transactions, also referred to as "one-shot queries". Issuing [multiple queries](https://neon.com/docs/serverless/serverless-driver#issue-multiple-queries-with-the-transaction-function) via a single, non-interactive transaction is also supported. See [Use the driver over HTTP](https://neon.com/docs/serverless/serverless-driver#use-the-driver-over-http).
- **WebSockets**: If you require session or interactive transaction support or compatibility with [node-postgres](https://node-postgres.com/) (the popular **npm** `pg` package), use WebSockets. See [Use the driver over WebSockets](https://neon.com/docs/serverless/serverless-driver#use-the-driver-over-websockets).

**Tip** AI Rules available: Working with AI coding assistants? Check out our [AI rules for the Neon Serverless Driver](https://neon.com/docs/ai/ai-rules-neon-serverless) to help your AI assistant generate better code for serverless database connections.

## Install the Neon serverless driver

You can install the driver with your preferred JavaScript package manager. For example:

```shell
npm install @neondatabase/serverless
```

The driver includes TypeScript types (the equivalent of `@types/pg`). No additional installation is required.

**Note**: The Neon serverless driver is also available as a [JavaScript Registry (JSR)](https://jsr.io/docs/introduction) package: [https://jsr.io/@neon/serverless](https://jsr.io/@neon/serverless). The JavaScript Registry (JSR) is a package registry for JavaScript and TypeScript. JSR works with many runtimes (Node.js, Deno, browsers, and more) and is backward compatible with `npm`.

## Configure your Neon database connection

You can obtain a connection string for your database by clicking the **Connect** button on your **Project Dashboard**. Your Neon connection string will look something like this:

```shell
DATABASE_URL=postgresql://[user]:[password]@[neon_hostname]/[dbname]
```

The examples that follow assume that your database connection string is assigned to a `DATABASE_URL` variable in your application's environment file.

## Use the driver over HTTP

The Neon serverless driver uses the [neon](https://github.com/neondatabase/serverless/blob/main/CONFIG.md#neon-function) function for queries over HTTP. The function returns a query function that can only be used as a template function for improved safety against SQL injection vulnerabilities. For example:

```javascript
import { neon } from '@neondatabase/serverless';
const sql = neon(process.env.DATABASE_URL);

const id = 1;

// Safe and convenient template function usage
const result = await sql`SELECT * FROM table WHERE id = ${id}`;

// For manually parameterized queries, use the query() function
const result = await sql.query('SELECT * FROM table WHERE id = $1', [id]);

// For interpolating trusted strings (like column or table names), use the unsafe() function
const table = condition ? 'table1' : 'table2'; // known-safe string values
const result = await sql`SELECT * FROM ${sql.unsafe(table)} WHERE id = ${id}`;

// Alternatively, use template literals for known-safe values
const table = condition ? sql`table1` : sql`table2`;
const result = await sql`SELECT * FROM ${table} WHERE id = ${id}`;
```
SQL template queries are fully composable, including those with parameters:

```javascript
const name = 'Olivia';
const limit = 1;
const whereClause = sql`WHERE name = ${name}`;
const limitClause = sql`LIMIT ${limit}`;

// Parameters are numbered appropriately at query time
const result = await sql`SELECT * FROM table ${whereClause} ${limitClause}`;
```

You can use raw SQL queries or tools such as [Drizzle-ORM](https://orm.drizzle.team/docs/quick-postgresql/neon), [kysely](https://github.com/kysely-org/kysely), [Zapatos](https://jawj.github.io/zapatos/), and others for type safety.

Tab: Node.js

```javascript
import { neon } from '@neondatabase/serverless';
const sql = neon(process.env.DATABASE_URL);

const posts = await sql`SELECT * FROM posts WHERE id = ${postId}`;
// or using query() for parameterized queries
const posts = await sql.query('SELECT * FROM posts WHERE id = $1', [postId]);

// `posts` is now [{ id: 12, title: 'My post', ... }] (or undefined)
```

Tab: Drizzle-ORM

```typescript
import { drizzle } from 'drizzle-orm/neon-http';
import { eq } from 'drizzle-orm';
import { neon } from '@neondatabase/serverless';
import { posts } from './schema';

export default async () => {
  const postId = 12;
  const sql = neon(process.env.DATABASE_URL!);
  const db = drizzle(sql);
  const [onePost] = await db.select().from(posts).where(eq(posts.id, postId));
  return new Response(JSON.stringify({ post: onePost }));
};
```

Tab: Vercel Edge Function

```javascript
import { neon } from '@neondatabase/serverless';

export default async (req: Request) => {
  const sql = neon(process.env.DATABASE_URL);
  const posts = await sql`SELECT * FROM posts WHERE id = ${postId}`;
  // or using query() for parameterized queries
  const posts = await sql.query('SELECT * FROM posts WHERE id = $1', [postId]);
  return new Response(JSON.stringify(posts));
};

export const config = {
  runtime: 'edge',
};
```

Tab: Vercel Serverless Function

```ts
import { neon } from '@neondatabase/serverless';
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(request: NextApiRequest, res: NextApiResponse) {
  const sql = neon(process.env.DATABASE_URL!);
  const posts = await sql`SELECT * FROM posts WHERE id = ${postId}`;
  // or using query() for parameterized queries
  const posts = await sql.query('SELECT * FROM posts WHERE id = $1', [postId]);
  return res.status(200).json(posts);
}
```

**Note**: The maximum request size and response size for queries over HTTP is 64 MB.

### neon function configuration options

The `neon(...)` function returns a query function that can be used as a template function, with additional properties for special cases:

```javascript
import { neon } from '@neondatabase/serverless';
const sql = neon(process.env.DATABASE_URL);

// Use as a template function (recommended)
const rows = await sql`SELECT * FROM posts WHERE id = ${postId}`;

// Use query() for manually parameterized queries
const rows = await sql.query('SELECT * FROM posts WHERE id = $1', [postId]);

// Use unsafe() for trusted string interpolation
const table = 'posts'; // trusted value
const rows = await sql`SELECT * FROM ${sql.unsafe(table)} WHERE id = ${postId}`;
```

By default, the query function returns only the rows resulting from the provided SQL query, and it returns them as an array of objects where the keys are column names.
You can customize the return format using the configuration options `fullResults` and `arrayMode`. These options are available both on the `neon(...)` function and on the query function it returns.

- `arrayMode: boolean`

  The default is `false`. When it is `true`, rows are returned as an array of arrays instead of an array of objects:

  ```javascript
  const sql = neon(process.env.DATABASE_URL, { arrayMode: true });
  const rows = await sql`SELECT * FROM posts WHERE id = ${postId}`;
  // -> [[12, "My post", ...]]
  ```

  Or, with the same effect when using `query()`:

  ```javascript
  const sql = neon(process.env.DATABASE_URL);
  const rows = await sql.query('SELECT * FROM posts WHERE id = $1', [postId], { arrayMode: true });
  // -> [[12, "My post", ...]]
  ```

- `fullResults: boolean`

  The default is `false`. When it is `true`, additional metadata is returned alongside the result rows, which are then found in the `rows` property of the return value. The metadata matches what would be returned by `node-postgres`:

  ```javascript
  const sql = neon(process.env.DATABASE_URL, { fullResults: true });
  const results = await sql`SELECT * FROM posts WHERE id = ${postId}`;
  /* -> {
    rows: [{ id: 12, title: "My post", ... }],
    fields: [
      { name: "id", dataTypeID: 23, ... },
      { name: "title", dataTypeID: 25, ... },
      ...
    ],
    rowCount: 1,
    rowAsArray: false,
    command: "SELECT"
  } */
  ```

  Or, with the same effect when using `query()`:

  ```javascript
  const sql = neon(process.env.DATABASE_URL);
  const results = await sql.query('SELECT * FROM posts WHERE id = $1', [postId], {
    fullResults: true,
  });
  // -> { ... same as above ... }
  ```

- `fetchOptions: Record<string, any>`

  The `fetchOptions` option can also be passed to either `neon(...)` or the query function. This option takes an object that is merged with the options to the `fetch` call. For example, to increase the priority of every database `fetch` request:

  ```javascript
  import { neon } from '@neondatabase/serverless';

  const sql = neon(process.env.DATABASE_URL, { fetchOptions: { priority: 'high' } });
  const rows = await sql`SELECT * FROM posts WHERE id = ${postId}`;
  ```

  Or to implement a `fetch` timeout:

  ```javascript
  import { neon } from '@neondatabase/serverless';

  const sql = neon(process.env.DATABASE_URL);
  const abortController = new AbortController();
  const timeout = setTimeout(() => abortController.abort('timed out'), 10000);
  const rows = await sql.query('SELECT * FROM posts WHERE id = $1', [postId], {
    fetchOptions: { signal: abortController.signal },
  }); // throws an error if no result is received within 10s
  clearTimeout(timeout);
  ```

For additional details, see [Options and configuration](https://github.com/neondatabase/serverless/blob/main/CONFIG.md#options-and-configuration).

### Issue multiple queries with the transaction() function

The `transaction(queriesOrFn, options)` function is exposed as a property on the query function. It allows multiple queries to be executed within a single, non-interactive transaction.

The first argument to `transaction()`, `queriesOrFn`, is either an array of queries or a non-async function that receives a query function as its argument and returns an array of queries.
The array-of-queries case looks like this:

```javascript
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);
const showLatestN = 10;

const [posts, tags] = await sql.transaction(
  [sql`SELECT * FROM posts ORDER BY posted_at DESC LIMIT ${showLatestN}`, sql`SELECT * FROM tags`],
  {
    isolationLevel: 'RepeatableRead',
    readOnly: true,
  }
);
```

Or as an example of the function case:

```javascript
const [authors, tags] = await neon(process.env.DATABASE_URL).transaction((txn) => [
  txn`SELECT * FROM authors`,
  txn`SELECT * FROM tags`,
]);
```

The optional second argument to `transaction()`, `options`, has the same keys as the options to the ordinary query function (`arrayMode`, `fullResults` and `fetchOptions`), plus three additional keys that concern the transaction configuration: `isolationLevel`, `readOnly` and `deferrable`.

Note that options **cannot** be supplied for individual queries within a transaction. Query and transaction options must instead be passed as the second argument of the `transaction()` function. For example, this `arrayMode` setting is ineffective (and TypeScript won't compile it): `await sql.transaction([sql('SELECT now()', [], { arrayMode: true })])`. Instead, use `await sql.transaction([sql('SELECT now()')], { arrayMode: true })`.

- `isolationLevel`

  This option selects a Postgres [transaction isolation level](https://www.postgresql.org/docs/current/transaction-iso.html). If present, it must be one of `ReadUncommitted`, `ReadCommitted`, `RepeatableRead`, or `Serializable`.

- `readOnly`

  If `true`, this boolean option ensures that a `READ ONLY` transaction is used to execute the queries passed. The default value is `false`.

- `deferrable`

  If `true` (and if `readOnly` is also `true`, and `isolationLevel` is `Serializable`), this boolean option ensures that a `DEFERRABLE` transaction is used to execute the queries passed. The default value is `false`.

For additional details, see [transaction(...) function](https://github.com/neondatabase/serverless/blob/main/CONFIG.md#transaction-function).

### Using transactions with JWT self-verification

When using Row-Level Security (RLS) to secure backend SQL with the Neon serverless driver, you may need to set JWT claims within a transaction context. This is particularly useful for custom JWT verification flows in backend APIs, where you want to ensure user-specific access to rows according to RLS policies.
Here's an example of how to use the `transaction()` function with self-verified JWT claims:

```javascript
import { neon } from '@neondatabase/serverless';

// Example JWT verification function, typically in a separate auth utility file
// (implement according to your auth provider)
async function verifyJWT(jwtToken, jwksURL) {
  // Your JWT verification logic here
  // This should return the decoded payload
  return { payload: { sub: 'user123', email: 'user@example.com' } };
}

const sql = neon(process.env.DATABASE_URL);

// Get the JWT token from request headers or context (`req` is your framework's request object)
const jwtToken = req.headers.authorization?.replace('Bearer ', '');
const jwksURL = process.env.JWKS_URL; // Your JWKS endpoint

// Verify the JWT and extract claims
const { payload } = await verifyJWT(jwtToken, jwksURL);
const claims = JSON.stringify(payload);

// Use a transaction to set JWT claims and query data
const [, my_table] = await sql.transaction([
  sql`SELECT set_config('request.jwt.claims', ${claims}, true)`,
  sql`SELECT * FROM my_table`,
]);
```

**Important**: When using JWT self-verification with RLS, ensure your database connection string uses a role that does **not** have the `BYPASSRLS` attribute. Avoid using the `neondb_owner` role in your connection string, as it bypasses Row-Level Security policies.

This pattern allows you to:

- Verify JWTs using your own authentication logic
- Set the JWT claims in the database session context
- Access JWT claims in your RLS policies
- Execute multiple queries within a single transaction while maintaining the auth context

## Use the driver over WebSockets

The Neon serverless driver supports the [Pool and Client](https://github.com/neondatabase/serverless?tab=readme-ov-file#pool-and-client) constructors for querying over WebSockets. The `Pool` and `Client` constructors provide session and transaction support, as well as `node-postgres` compatibility. You can find the API guide for the `Pool` and `Client` constructors in the [node-postgres](https://node-postgres.com/) documentation.

Consider using the driver with `Pool` or `Client` in the following scenarios:

- You already use `node-postgres` in your code base and would like to migrate to `@neondatabase/serverless`.
- You are writing a new code base and want to use a package that expects a `node-postgres`-compatible driver.
- Your backend service uses sessions or interactive transactions with multiple queries per connection.

You can use the Neon serverless driver in the same way you would use `node-postgres` with `Pool` and `Client`. Where you usually import `pg`, import `@neondatabase/serverless` instead.
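For instance, sessions and interactive transactions (which the HTTP path doesn't support) work just as they do in `node-postgres`. The following is a minimal sketch, assuming Node.js with the `ws` package installed and a hypothetical `bank_accounts` table as example data:

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless';
import ws from 'ws';

neonConfig.webSocketConstructor = ws; // required in Node.js; see the usage notes below

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const client = await pool.connect();
try {
  // An interactive transaction: later statements can depend on earlier results
  await client.query('BEGIN');
  const { rows } = await client.query('SELECT balance FROM bank_accounts WHERE id = $1', [1]);
  if (rows[0].balance >= 100) {
    await client.query('UPDATE bank_accounts SET balance = balance - 100 WHERE id = $1', [1]);
    await client.query('UPDATE bank_accounts SET balance = balance + 100 WHERE id = $1', [2]);
  }
  await client.query('COMMIT');
} catch (err) {
  await client.query('ROLLBACK');
  throw err;
} finally {
  client.release();
  await pool.end();
}
```

The tabs below show framework-specific examples.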
Tab: Node.js

```javascript
import { Pool, neonConfig } from '@neondatabase/serverless';
import ws from 'ws';

neonConfig.webSocketConstructor = ws; // required in Node.js, which lacks built-in WebSocket support

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const postId = 12;
const { rows: posts } = await pool.query('SELECT * FROM posts WHERE id = $1', [postId]);
await pool.end();
```

Tab: Prisma

```typescript
import { Pool, neonConfig } from '@neondatabase/serverless';
import { PrismaNeon } from '@prisma/adapter-neon';
import { PrismaClient } from '@prisma/client';
import dotenv from 'dotenv';
import ws from 'ws';

dotenv.config();
neonConfig.webSocketConstructor = ws;

const connectionString = `${process.env.DATABASE_URL}`;
const pool = new Pool({ connectionString });
const adapter = new PrismaNeon(pool);
const prisma = new PrismaClient({ adapter });

async function main() {
  const posts = await prisma.post.findMany();
}

main();
```

Tab: Drizzle-ORM

```typescript
import { drizzle } from 'drizzle-orm/neon-serverless';
import { eq } from 'drizzle-orm';
import { Pool } from '@neondatabase/serverless';
import { posts } from './schema';

export default async () => {
  const postId = 12;
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const db = drizzle(pool);
  const [onePost] = await db.select().from(posts).where(eq(posts.id, postId));
  await pool.end();
  return new Response(JSON.stringify({ post: onePost }));
};
```

Tab: Vercel Edge Function

```typescript
import { Pool } from '@neondatabase/serverless';

export default async (req: Request, ctx: any) => {
  const postId = Number(new URL(req.url).searchParams.get('id')); // example: take the id from the query string
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const { rows: posts } = await pool.query('SELECT * FROM posts WHERE id = $1', [postId]);
  ctx.waitUntil(pool.end());
  return new Response(JSON.stringify(posts), { headers: { 'content-type': 'application/json' } });
};

export const config = {
  runtime: 'edge',
};
```

Tab: Vercel Serverless Function

```ts
import { Pool } from '@neondatabase/serverless';
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(request: NextApiRequest, res: NextApiResponse) {
  const postId = Number(request.query.id); // example: take the id from the query string
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const { rows: posts } = await pool.query('SELECT * FROM posts WHERE id = $1', [postId]);
  await pool.end();
  return res.status(200).json(posts);
}
```

### Pool and Client usage notes

- In Node.js and some other environments, there's no built-in WebSocket support. In these cases, supply a WebSocket constructor function.

  ```javascript
  import { Pool, neonConfig } from '@neondatabase/serverless';
  import ws from 'ws';

  neonConfig.webSocketConstructor = ws;
  ```

- In serverless environments such as Vercel Edge Functions or Cloudflare Workers, WebSocket connections can't outlive a single request. That means `Pool` or `Client` objects must be connected, used, and closed within a single request handler. Don't create them outside a request handler; don't create them in one handler and try to reuse them in another; and to avoid exhausting available connections, don't forget to close them.

For examples that demonstrate these points, see [Pool and Client](https://github.com/neondatabase/serverless?tab=readme-ov-file#pool-and-client).

### Advanced configuration options

For advanced configuration options, see [neonConfig configuration](https://github.com/neondatabase/serverless/blob/main/CONFIG.md#neonconfig-configuration) in the Neon serverless driver GitHub readme.
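To give a flavor of what `neonConfig` can control, here is a sketch that redirects the driver's HTTP queries at a locally hosted proxy rather than `*.neon.tech`. The `fetchEndpoint` option is documented in the driver's CONFIG.md; the hostname and port below are illustrative and follow the community local-development setup described in the next section:

```typescript
import { neon, neonConfig } from '@neondatabase/serverless';

// Route HTTP queries to a local Neon proxy instead of *.neon.tech
// (illustrative local endpoint; see the local development guide below)
neonConfig.fetchEndpoint = (host) => `http://${host}:4444/sql`;

const sql = neon('postgres://postgres:postgres@db.localtest.me:5432/main');
const [{ now }] = await sql`SELECT now()`;
console.log(now);
```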
## Developing locally with the Neon serverless driver The Neon serverless driver enables you to query data over **HTTP** or **WebSockets** instead of TCP, even though Postgres does not natively support these connection methods. To use the Neon serverless driver locally, you must run a local instance of Neon's proxy and configure it to connect to your local Postgres database. For a step-by-step guide to setting up a local environment, refer to this community guide: [Local Development with Neon](https://neon.com/guides/local-development-with-neon). The guide demonstrates how to use a [community-developed Docker Compose file](https://github.com/TimoWilhelm/local-neon-http-proxy) to configure a local Postgres database and a Neon proxy service. This setup allows connections over both WebSockets and HTTP. ## Example applications Explore the example applications that use the Neon serverless driver. ### UNESCO World Heritage sites app Neon provides an example application to help you get started with the Neon serverless driver. The application generates a `JSON` listing of the 10 nearest UNESCO World Heritage sites using IP geolocation (data copyright © 1992 – 2022 UNESCO/World Heritage Centre). There are different implementations of the application to choose from. - [Raw SQL + Vercel Edge Functions](https://github.com/neondatabase/neon-vercel-rawsql): Demonstrates using raw SQL with Neon's serverless driver on Vercel Edge Functions - [Raw SQL via https + Vercel Edge Functions](https://github.com/neondatabase/neon-vercel-http): Demonstrates Neon's serverless driver over HTTP on Vercel Edge Functions - [Raw SQL + Cloudflare Workers](https://github.com/neondatabase/serverless-cfworker-demo): Demonstrates using the Neon serverless driver on Cloudflare Workers and employs caching for high performance. - [Kysely + Vercel Edge Functions](https://github.com/neondatabase/neon-vercel-kysely): Demonstrates using kysely and kysely-codegen with Neon's serverless driver on Vercel Edge Functions - [Zapatos + Vercel Edge Functions](https://github.com/neondatabase/neon-vercel-zapatos): Demonstrates using Zapatos with Neon's serverless driver on Vercel Edge Functions - [Neon + pgTyped on Vercel Edge Functions](https://github.com/neondatabase/neon-vercel-pgtyped): Demonstrates using pgTyped with Neon's serverless driver on Vercel Edge Functions - [Neon + Knex on Vercel Edge Functions](https://github.com/neondatabase/neon-vercel-knex): Demonstrates using Knex with Neon's serverless driver on Vercel Edge Functions ### Ping Thing The Ping Thing application pings a Neon Serverless Postgres database using a Vercel Edge Function and shows the journey your request makes. You can read more about this application in the accompanying blog post: [How to use Postgres at the Edge](https://neon.com/blog/how-to-use-postgres-at-the-edge) - [Ping Thing](https://github.com/neondatabase/ping-thing): Ping a Neon Serverless Postgres database using a Vercel Edge Function to see the journey your request makes ## Neon serverless driver GitHub repository and changelog The GitHub repository and [changelog](https://github.com/neondatabase/serverless/blob/main/CHANGELOG.md) for the Neon serverless driver are found [here](https://github.com/neondatabase/serverless). 
## References - [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) - [node-postgres](https://node-postgres.com/) - [Drizzle-ORM](https://orm.drizzle.team/docs/quick-postgresql/neon) - [Schema migration with Neon Postgres and Drizzle ORM](https://neon.com/docs/guides/drizzle-migrations) - [kysely](https://github.com/kysely-org/kysely) - [Zapatos](https://jawj.github.io/zapatos/) - [Vercel Edge Functions](https://vercel.com/docs/functions/edge-functions) - [Cloudflare Workers](https://developers.cloudflare.com/workers/) - [Use Neon with Cloudflare Workers](https://neon.com/docs/guides/cloudflare-workers) --- # Source: https://neon.com/llms/use-cases-ai-agents.txt # Neon for AI Agent Platforms > The document "Neon for AI Agent Platforms" outlines how Neon's serverless Postgres architecture supports AI agent platforms by providing instant provisioning, autoscaling, and integrated services like Auth and a PostgREST-compatible Data API, enabling efficient and scalable backend management for developers. ## Source - [Neon for AI Agent Platforms HTML](https://neon.com/use-cases/ai-agents): The original HTML version of this documentation Apply for the Agent Plan: https://neon.com/use-cases/ai-agents **[Learn more about the Neon Agent Plan](https://neon.com/programs/agents#agent-plan-pricing) >** ## The Neon Stack For Agents The Neon architecture aligns with how agents work: **Serverless Postgres at the core.** Neon's backend is powered by a serverless Postgres engine built on separated compute and storage. It provisions instantly, scales automatically, and idles to zero when not in use - perfect for the bursty, on-demand workloads that agents create. **With integrated services for full-stack backends.** Around that core, Neon includes Auth and a PostgREST-compatible Data API, so agents and developers can assemble complete, production-ready backends without stitching multiple services together. **All API-first and programmable.** Every capability - provisioning, quotas, branching, and fleet management - is exposed through the Neon API, giving developers and agents precise control over their environments and usage at scale. **And version-aware by design.** Neon's copy-on-write storage makes time travel effortless. Branching, snapshots, and point-in-time recovery enable undo, checkpoints, and safe experimentation across millions of databases. ## Serverless Postgres, API-first At the core of Neon is a serverless Postgres architecture that [separates compute from storage](https://neon.com/blog/architecture-decisions-in-neon). Each database runs on ephemeral computes while the data itself lives on durable, high-performance storage. **This architecture makes it possible for agents to provision databases instantly on demand, operate them at massive scale, and still keep costs under control.** Tens of thousands of projects can spin up and idle as users create apps, all programmatically, without intervention from you. ## Instant Autoscaling and Scale-to-Zero Traditional database management falls apart when every agent action can trigger new infrastructure. 
Neon's serverless model handles this complexity automatically:

- [Compute scales up and down in real time based on workload](https://neon.com/docs/introduction/autoscaling)
- [Scale-to-zero ensures that idle databases cost you nothing](https://neon.com/docs/introduction/scale-to-zero) while remaining instantly accessible

This combination gives agent builders a sustainable model for large fleets: **you can create thousands of databases without worrying about resource exhaustion or runaway bills.**

## Auth That Speaks Postgres

Every app needs authentication, and agents shouldn't have to reinvent it. **[Neon Auth](https://neon.com/docs/neon-auth/overview) lets you build secure, multi-tenant systems [without extra glue code](https://neon.com/blog/databutton-neon-integration)**. It issues JWTs that your agent or front-end can use directly in database queries or through the [Neon Data API](https://neon.com/docs/data-api/get-started). Each token maps to a Postgres role, enforcing granular access at the data level. And because Neon Auth supports standard JWKS configuration, you can also plug in external providers.

## A PostgREST-Compatible Data API, Built In

Giving your agents direct access to the database is simple with the [Neon Data API](https://neon.com/docs/data-api/get-started). It exposes each database (and every branch) as a REST endpoint you can query over HTTPS, fully PostgREST-compatible. Under the hood, Neon's Data API is a [Rust-based re-implementation of PostgREST that runs natively in our proxy fleet](https://neon.com/blog/a-postgrest-compatible-data-api-now-on-neon). It's lean, multi-tenant, and designed to scale across thousands of databases efficiently. Every Neon branch has its own API endpoint, perfect for preview environments, checkpoints, or dev branches.

## Building Checkpoints with Snapshots and Branching

**Vibe coders experiment constantly, going back and forth between versions - and sometimes breaking things. Neon's [branching](https://neon.com/docs/introduction/branching) and [snapshots API](https://neon.com/docs/ai/ai-database-versioning) turn this into a feature, not a risk.** Branching, built on our copy-on-write storage, enables [instant point-in-time recovery](https://neon.com/docs/introduction/branch-restore) for any database. Developers and agents can migrate schemas or revert mistakes without complex restores. The Snapshots API builds on this foundation to create [agent-friendly, restorable checkpoints](https://neon.com/blog/checkpoints-for-agents-with-neon-snapshots). Agents can capture a moment-in-time version of the database (schema and data) and later roll back or compare states.

## Quotas, Fleet Control, and Dedicated Pricing

**We've been backing agent platforms since the start, and our API has evolved to support the needs of large fleets operated by small engineering teams.** [The Neon API lets you manage not only infrastructure but also quotas, per-project compute and storage usage tracking, billing limits, and much more](https://neon.com/blog/provision-postgres-neon-api). Combined with usage-based pricing and agent-specific plans, it gives platform builders fine-grained control over cost, scale, and growth.
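For example, a platform might cap a tenant project's consumption at creation time. The sketch below is illustrative only: the project name is hypothetical, and while the `settings.quota` fields follow the Neon API reference, verify the exact field names against the current API docs before relying on them:

```typescript
// Sketch: create a tenant project with usage quotas via the Neon API
const res = await fetch('https://console.neon.tech/api/v2/projects', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.NEON_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    project: {
      name: 'tenant-4711', // hypothetical tenant name
      settings: {
        quota: {
          active_time_seconds: 3600, // cap on active compute time
          written_data_bytes: 1_000_000_000, // cap on written data
        },
      },
    },
  }),
});
const { project } = await res.json();
console.log(project.id);
```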
## Documentation & Case Studies to Get Started To get inspired, explore how others are building and scaling their agents on top of Neon: - [Replit](https://neon.com/blog/replit-app-history-powered-by-neon-branches) - [Retool](https://neon.com/blog/retool-becomes-the-platform-for-enterprise-appgen) - [Anything](https://neon.com/blog/from-idea-to-full-stack-app-in-one-conversation-with-create) - [Databutton](https://neon.com/blog/databutton-neon-integration) - [Vapi](https://neon.com/blog/vapi-voice-agents-neon) - [Dyad](https://neon.com/blog/dyad-brings-postgres-to-local-ai-app-building-powered-by-neon) - [xpander.ai](https://neon.com/blog/xpander-ai-agents-slack-neon-backend) For instructions on using the Neon API to provision and manage backends on behalf of your users, see [Neon for Platforms Documentation](https://neon.com/docs/guides/platform-integration-intro). Don't hesitate to [contact us](https://neon.com/contact-sales) as well. To learn more about the Agent Plan, [see the details on this page](https://neon.com/programs/agents#agent-plan-pricing) or [fill out the application form directly, at the top of this page](https://neon.com/use-cases/ai-agents#agent-form). --- # Source: https://neon.com/llms/use-cases-use-cases-overview.txt # Neon use cases > The document outlines various use cases for Neon, detailing how it can be utilized for scalable, serverless PostgreSQL databases in different application scenarios. ## Source - [Neon use cases HTML](https://neon.com/docs/use-cases/use-cases-overview): The original HTML version of this documentation - [SaaS apps](https://neon.com/use-cases/postgres-for-saas): Build faster with Neon using autoscaling, database branching, and serverless operations - [Serverless apps](https://neon.com/use-cases/serverless-apps): Autoscale with traffic using real-time compute scaling and usage-based pricing - [Database-per-tenant](https://neon.com/use-cases/database-per-tenant): Data isolation without overhead using instant provisioning and scale-to-zero - [Dev/Test](https://neon.com/use-cases/dev-test): Production-like environments with database branching and cost efficiency - [AI agents](https://neon.com/use-cases/ai-agents): Deploy Postgres via AI agents with instant provisioning and simple APIs --- # Source: https://neon.com/llms/workflows-claimable-database-integration.txt # Claimable database integration guide > The "Claimable Database Integration Guide" outlines the steps for integrating claimable databases within Neon, detailing the process for setting up, managing, and utilizing these databases effectively in Neon's environment. ## Source - [Claimable database integration guide HTML](https://neon.com/docs/workflows/claimable-database-integration): The original HTML version of this documentation ## Overview The project transfer functionality enables you to provision fully-configured Postgres databases on behalf of your users and seamlessly transition ownership. This capability eliminates the technical overhead of database setup while ensuring your users maintain complete control of their database resources. ## Simplified workflow 1. **Create a Neon project** on behalf of your user in your account or organization - This provides them with a Postgres connection string for their application immediately 2. **Create a transfer request** for the project - This generates a unique, time-limited transfer request ID 3. **Share a claim URL** with your user - This URL contains the project ID and transfer request ID 4. 
**User claims the project** - When they click the URL, Neon prompts them to transfer the project to their account ## Step-by-step guide ## Create a Neon project Use the Neon [create project API](https://api-docs.neon.tech/reference/createproject) to create a new project that you intend to transfer to your user. The minimum request body is `project: {}` as all settings are optional. ### API endpoint ```http POST https://console.neon.tech/api/v2/projects ``` ### Example request ```bash curl -X POST 'https://console.neon.tech/api/v2/projects' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer {your_api_key_here}' \ --header 'Content-Type: application/json' \ --data '{ "project": { "name": "new-project-name", "region_id": "aws-us-east-1", "pg_version": 17, "org_id": "org-cool-breeze-12345678" } }' ``` This creates a new project with: - A default branch named `main` - A default database named `neondb` - A default database role named `neondb_owner` - A project named `new-project-name` (defaults to the project ID if not specified) - The project in the `org-cool-breeze-12345678` organization - PostgreSQL version 17 in the `aws-us-east-1` region (these settings are permanent) ### Example response Below is an abbreviated example of the response. For brevity, this documentation shows only key fields. For the complete response structure and all possible fields, see the [API documentation](https://api-docs.neon.tech/reference/createproject). ```json { "project": { "id": "your-project-id", "name": "new-project-name", "owner_id": "org-the-owner-id", "org_id": "org-the-owner-id" }, "connection_uris": [ { "connection_uri": "postgresql://neondb_owner:{password}@ep-cool-shape-123456.us-east-1.aws.neon.tech/neondb?sslmode=require&channel_binding=require" } ], "branch": {}, "databases": [{}], "endpoints": [{}], "operations": [{}], "roles": [{}] } ``` Your user will need the connection string from the response (`connection_uri`) to [connect to the Neon database](https://neon.com/docs/get-started/connect-neon). The `{password}` placeholder represents the actual password generated for the database. You'll also use the project `id` to create a transfer request. ## Create a transfer request With your project created, use the Neon [project transfer request API](https://api-docs.neon.tech/reference/createprojecttransferrequest) to generate a transfer request. You can create this request immediately or at a later time when you're ready to transfer the project. Each transfer request has a configurable expiration period, specified by the `ttl_seconds` parameter. ### API endpoint ```http POST https://console.neon.tech/api/v2/projects/{project_id}/transfer_requests ``` ### Example request ```bash curl -X POST 'https://console.neon.tech/api/v2/projects/{project_id}/transfer_requests' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer {your_api_key_here}' \ --header 'Content-Type: application/json' \ --data '{ "ttl_seconds": 604800 }' ``` This example sets a one-week expiration (604,800 seconds). The default is 86,400 seconds (24 hours). 
### Example response ```json { "id": "389ad814-9514-1cac-bc04-2f194815db76", "project_id": "your-project-id", "created_at": "2025-05-18T19:35:23Z", "expires_at": "2025-05-25T19:35:23Z" } ``` If transfer requests are not enabled for your account, you'll receive: ```json { "request_id": "cb1e1228-19f9-4904-8bd5-2dbf17d911a2", "code": "", "message": "project transfer requests are not enabled for this account" } ``` ## Share the claim URL Construct a claim URL to share with your user using the following format: ```http https://console.neon.tech/app/claim?p={project_id}&tr={transfer_request_id}&ru={redirect_url} ``` Where: - `p={project_id}` - The project ID being transferred - `tr={transfer_request_id}` - The transfer request `id` from the previous step - `ru={redirect_url}` (optional) - A URL-encoded destination where the user is redirected after successfully claiming the project - Without this parameter, users remain on the Neon project page after claiming - This allows your application to detect successful claims when users return to your site, enabling you to trigger next steps in your onboarding flow ### User communication When sharing the claim URL, inform your user that: - They'll need a Neon account to claim the project (they can create one during the claim process) - The link will expire at the time shown in the `expires_at` field - After claiming, they'll have full ownership of the project - The database connection string remains unchanged (though they should update the password for security) ## User claims the project ### Via browser (recommended) When your user clicks the claim URL: 1. Neon prompts them to log in or create an account 2. After authentication, Neon displays a confirmation screen 3. They select their destination Neon organization 4. Upon confirmation, Neon transfers the project 5. The user is then: - Redirected to your application if `ru` parameter was provided, allowing you to detect the successful claim and continue your onboarding flow - Kept on the Neon project page if no redirect URL was specified ### Via API Alternatively, users can accept the transfer request programmatically using the [accept project transfer request API](https://api-docs.neon.tech/reference/acceptprojecttransferrequest). #### API endpoint ```http PUT https://console.neon.tech/api/v2/projects/{project_id}/transfer_requests/{request_id} ``` #### Example request (transfer to organization) ```bash curl -X PUT 'https://console.neon.tech/api/v2/projects/{project_id}/transfer_requests/{request_id}' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer {users_api_key_here}' \ --header 'Content-Type: application/json' \ --data '{ "org_id": "org-cool-breeze-12345678" }' ``` Without the `org_id` parameter, the project transfers to the user's personal account. With it, the project transfers to the specified organization where the user has membership. ## Important notes ### Transfer request behavior - **Expiration**: Requests expire after the specified `ttl_seconds` (default: 24 hours). 
Once expired, you must create a new transfer request
- **One-time use**: Each transfer request can only be used once
- **Already claimed**: If a project has already been claimed, subsequent attempts will fail with an error
- **Vercel orgs not supported**: Transferring a project into a Vercel-managed Neon [organization](https://neon.com/docs/reference/glossary#organization) via the claim flow is not supported. If you created your Neon account through the [Vercel-managed integration](https://neon.com/docs/guides/vercel-managed-integration), you cannot claim projects into the Neon organization created by that integration.

### Security considerations

- **URL security**: Share claim URLs through secure channels, as anyone with the URL can claim the project
- **Password rotation**: Instruct users to change their database password immediately after claiming
- **Access revocation**: Once transferred, you lose all access to the project unless the new owner grants permissions

### Technical details

- **Connection persistence**: Database connection strings remain valid after transfer
- **Organization transfers**: Users must be members of the target organization
- **Organization ID format**: `org-[descriptive-term]-[numeric-id]` (e.g., `org-cool-breeze-12345678`)
- **Vercel organization limitation**: Projects cannot be claimed into Vercel organizations

## Example use cases

- **SaaS applications** - Provision databases for your SaaS users that they can later claim and manage
- **Development agencies** - Create database projects for clients and transfer ownership upon project completion
- **Educational platforms** - Set up pre-configured database environments for students
- **Demo environments** - Create ready-to-use demo databases that prospects can claim
- **Team environments** - Provision project databases for team members to claim into their organization

For a working implementation of claimable databases, try [Neon Launchpad](https://neon.new/). This service demonstrates the complete flow: users receive a Postgres connection string immediately without creating an account, and databases remain active for 72 hours. To retain the database beyond this period, users claim it by creating a Neon account using the provided transfer URL. See the [Neon Launchpad documentation](https://neon.com/docs/reference/neon-launchpad) for implementation details. This same pattern enables SaaS providers to offer instant database provisioning while allowing users to take ownership when ready.
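Putting the pieces together, a provisioning service might implement the flow like this (a sketch using the endpoints described above; the redirect URL is hypothetical and error handling is omitted):

```typescript
const API = 'https://console.neon.tech/api/v2';
const headers = {
  Authorization: `Bearer ${process.env.NEON_API_KEY}`,
  'Content-Type': 'application/json',
};

// 1. Create a project on behalf of the user
const projectRes = await fetch(`${API}/projects`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ project: {} }), // all settings are optional
});
const { project, connection_uris } = await projectRes.json();
const connectionString = connection_uris[0].connection_uri; // hand this to the user's app

// 2. Create a transfer request (valid for one week)
const transferRes = await fetch(`${API}/projects/${project.id}/transfer_requests`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ ttl_seconds: 604800 }),
});
const transfer = await transferRes.json();

// 3. Build the claim URL to share with the user
const claimUrl =
  `https://console.neon.tech/app/claim?p=${project.id}&tr=${transfer.id}` +
  `&ru=${encodeURIComponent('https://example.com/claimed')}`; // hypothetical redirect
```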
## Troubleshooting

| Issue | Solution |
| --- | --- |
| Claim URL expired | Create a new transfer request and generate a new claim URL |
| User receives error when claiming | Verify the project exists and the transfer request hasn't been used |
| Project doesn't appear after claiming | Refresh the Neon Console or log out and back in |
| "Transfer requests not enabled" error | [Contact our partnership team](https://neon.com/partners#partners-apply) to enable this private preview feature |
| Organization transfer fails | Verify user membership in the target organization and correct `org_id` format |
| Already claimed error | The transfer request has been used; create a new one if needed |

## Further resources

- [Create project API reference](https://api-docs.neon.tech/reference/createproject)
- [Create project transfer request API reference](https://api-docs.neon.tech/reference/createprojecttransferrequest)
- [Accept project transfer request API reference](https://api-docs.neon.tech/reference/acceptprojecttransferrequest)
- [Neon API documentation](https://neon.com/docs/reference/api-reference)
- [Managing projects](https://neon.com/docs/manage/projects)
- [Managing API keys](https://neon.com/docs/manage/api-keys)
- [Managing organizations](https://neon.com/docs/manage/organizations)

---

# Source: https://neon.com/llms/workflows-data-anonymization.txt

# Data anonymization

> The "Data Anonymization" document outlines the process and techniques for anonymizing sensitive data within Neon databases, ensuring compliance with privacy regulations while maintaining data utility for analysis.

## Source

- [Data anonymization HTML](https://neon.com/docs/workflows/data-anonymization): The original HTML version of this documentation

**Note** Beta: This feature is in Beta. Please give us [Feedback](https://console.neon.tech/app/projects?modal=feedback) from the Neon Console or by connecting with us on [Discord](https://discord.gg/92vNTzKDGp).

Need to test against production data without exposing sensitive information? Anonymized branches let you create development copies with masked personally identifiable information (PII) - such as emails, phone numbers, and other sensitive data. Neon uses [PostgreSQL Anonymizer](https://postgresql-anonymizer.readthedocs.io/) for static data masking, and applies masking rules when you create or update the branch. This approach gives you realistic test data while protecting user privacy and supporting compliance requirements like GDPR.

**Key characteristics:**

- **Static masking**: Data is masked once during branch creation or when you rerun anonymization
- **PostgreSQL Anonymizer integration**: Uses the [PostgreSQL Anonymizer extension's](https://neon.com/docs/extensions/postgresql-anonymizer) masking functions
- **Branch-specific rules**: You can define different masking rules for each anonymized Neon branch

**Info** Static versus dynamic masking: This feature uses **static masking**, which permanently transforms data in the branch when anonymization runs. Unlike dynamic masking (which masks data during queries), static masking creates an actual masked copy of the data. To get fresh data from the parent, create a new anonymized branch.

## Create a branch with anonymized data

Tab: Console

Select **Anonymized data** as the data option when creating a new branch.

1. Navigate to your project in the Neon Console
2. Select **Projects** -> **Branches** from the sidebar
3. Click **New Branch**
4. In the **Create new branch** dialog:
   - Select your **Parent branch** (typically `production` or `main`)
   - (Optional) Enter a **Branch name**
   - (Optional) **Automatically delete branch after** is checked by default with 1 day selected. You can change it, uncheck it, or leave it as is to automatically delete the branch after the specified time.
   - Under data options, select **Anonymized data**
5. Click **Create**

After creation, the Console loads the [Data Masking](https://neon.com/docs/workflows/data-anonymization#manage-masking-rules) page where you define and execute anonymization rules for your branch.

Tab: API

Use the [Create anonymized branch](https://api-docs.neon.tech/reference/createprojectbranchanonymized) endpoint, for example:

```bash
curl -X POST \
  'https://console.neon.tech/api/v2/projects/{project_id}/branch_anonymized' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "masking_rules": [
      {
        "database_name": "neondb",
        "schema_name": "public",
        "table_name": "users",
        "column_name": "email",
        "masking_function": "anon.dummy_free_email()"
      }
    ],
    "start_anonymization": true
  }'
```

**Request parameters:**

- `masking_rules` (optional): Array of masking rules to apply to the branch. Each rule specifies:
  - `database_name`: Target database
  - `schema_name`: Target schema (typically `public`)
  - `table_name`: Table containing sensitive data
  - `column_name`: Column to mask
  - `masking_function`: PostgreSQL Anonymizer function to apply
- `start_anonymization` (optional): Set to `true` to automatically start anonymization after branch creation

The API supports all PostgreSQL Anonymizer masking functions, providing more options than the Console UI. You can also export and import masking rules to manage them outside of Neon.

## Manage masking rules

Tab: Console

From the **Data Masking** page:

1. Select the schema, table, and column you want to mask.
2. Choose a masking function from the dropdown list (e.g., "Dummy Free Email" to execute `anon.dummy_free_email()`). The Console provides a curated list of common functions. For the full set of PostgreSQL Anonymizer functions, you must use the API.
3. Repeat for all sensitive columns.
4. When you are ready, click **Apply masking rules** to start the anonymization job. You can monitor its progress on this page or via the [API](https://neon.com/docs/workflows/data-anonymization#get-anonymization-status).

> Important: Rerunning the anonymization process on the anonymized branch applies rules to previously anonymized data, not fresh data from the parent branch. To start from the parent's original data, create a new anonymized branch.

Tab: API

For complete API documentation with request/response examples, see the [API reference](https://neon.com/docs/workflows/data-anonymization#api-reference) section below.

**Get masking rules**

```bash
GET /projects/{project_id}/branches/{branch_id}/masking_rules
```

Retrieves all masking rules defined for the branch.

**Update masking rules**

```bash
PATCH /projects/{project_id}/branches/{branch_id}/masking_rules
```

Updates masking rules for the branch. After updating rules, use the start anonymization endpoint to apply the changes.

**Start anonymization**

```bash
POST /projects/{project_id}/branches/{branch_id}/anonymize
```

Starts or restarts the anonymization process for branches in `initialized`, `error`, or `anonymized` state.

**Get anonymization status**

```bash
GET /projects/{project_id}/branches/{branch_id}/anonymized_status
```

Returns the current state (`created`, `initialized`, `initialization_error`, `anonymizing`, `anonymized`, or `error`) and progress information.
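Taken together, a typical automation loop against these endpoints might look like the following sketch (the environment variable names and polling interval are illustrative):

```typescript
// Sketch: update rules, rerun anonymization, and poll until the branch is ready
const API = 'https://console.neon.tech/api/v2';
const base = `${API}/projects/${process.env.NEON_PROJECT_ID}/branches/${process.env.NEON_BRANCH_ID}`;
const headers = {
  Authorization: `Bearer ${process.env.NEON_API_KEY}`,
  'Content-Type': 'application/json',
};

// 1. Replace the branch's masking rules
await fetch(`${base}/masking_rules`, {
  method: 'PATCH',
  headers,
  body: JSON.stringify({
    masking_rules: [
      {
        database_name: 'neondb',
        schema_name: 'public',
        table_name: 'users',
        column_name: 'email',
        masking_function: 'anon.dummy_free_email()',
      },
    ],
  }),
});

// 2. Apply the rules
await fetch(`${base}/anonymize`, { method: 'POST', headers });

// 3. Poll until the branch leaves the 'anonymizing' state
let state = 'anonymizing';
while (state === 'anonymizing') {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  const res = await fetch(`${base}/anonymized_status`, { headers });
  ({ state } = await res.json());
}
console.log(state); // 'anonymized' on success, 'error' otherwise
```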
## Common workflow

1. Create an anonymized branch from your production branch.
2. Define masking rules for sensitive columns (emails, names, addresses, etc.).
3. Apply the masking rules.
4. [Connect](https://neon.com/docs/connect/connect-from-any-app) your development environment to the anonymized branch.
5. When you need fresh data, create a new anonymized branch.

## How anonymization works

When you create a branch with anonymized data:

1. Neon creates a new branch with the schema and data from the parent branch.
2. You define masking rules for tables and columns containing sensitive data:
   - **Console**: The Data Masking page opens automatically after branch creation.
   - **API**: Include masking rules in the creation request or add them later via the masking rules endpoint.
3. You apply the masking rules (in the Console, click **Apply masking rules**), and the PostgreSQL Anonymizer extension masks the branch data.
4. You can update rules and rerun anonymization on the branch as needed. The parent branch data remains unchanged. Rerunning anonymization applies rules to the branch's current (already masked) data, not fresh data from the parent.

**Note**: The branch is unavailable for connections while anonymization is in progress.

## Limitations

- Anonymized branches currently cannot be reset to the parent or restored, and their read-write endpoints cannot be deleted.
- Rerunning anonymization works on already-masked data. Create a new branch for fresh parent data.
- The branch is unavailable during anonymization.
- Masking does not enforce database constraints (e.g., primary keys can be masked as NULL).
- The Console provides a curated subset of masking functions; use the API for all [PostgreSQL Anonymizer masking functions](https://postgresql-anonymizer.readthedocs.io/en/latest/masking_functions/).

## API reference

The Neon API provides comprehensive control over anonymized branches, including access to all PostgreSQL Anonymizer masking functions and the ability to export/import masking rules for management outside of Neon.

### Create anonymized branch

```
POST /projects/{project_id}/branch_anonymized
```

Creates a new branch with anonymized data using PostgreSQL Anonymizer for static masking.

**Request body parameters:**

- `masking_rules` (optional): Array of masking rules to apply to the branch
- `start_anonymization` (optional): Set to `true` to automatically start anonymization after creation

**Example request:**

```bash
curl -X POST \
  'https://console.neon.tech/api/v2/projects/{project_id}/branch_anonymized' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "masking_rules": [
      {
        "database_name": "neondb",
        "schema_name": "public",
        "table_name": "users",
        "column_name": "email",
        "masking_function": "anon.dummy_free_email()"
      },
      {
        "database_name": "neondb",
        "schema_name": "public",
        "table_name": "users",
        "column_name": "age",
        "masking_function": "anon.random_int_between(25,65)"
      }
    ],
    "start_anonymization": true
  }'
```

Details: Response body

Returns the created branch object with `restricted_actions` indicating operations not allowed on anonymized branches (restore and delete read-write endpoint).
```json { "branch": { "id": "br-divine-feather-a1b2c3d4", "project_id": "purple-moon-12345678", "parent_id": "br-plain-hill-e5f6g7h8", "parent_lsn": "0/1C3C998", "name": "br-divine-feather-a1b2c3d4", "current_state": "init", "pending_state": "ready", "state_changed_at": "2025-10-16T02:58:58Z", "creation_source": "console", "primary": false, "default": false, "protected": false, "cpu_used_sec": 0, "compute_time_seconds": 0, "active_time_seconds": 0, "written_data_bytes": 0, "data_transfer_bytes": 0, "created_at": "2025-10-16T02:58:58Z", "updated_at": "2025-10-16T02:58:58Z", "init_source": "parent-data", "restricted_actions": [ { "name": "restore", "reason": "cannot restore anonymized branches" }, { "name": "delete-rw-endpoint", "reason": "cannot delete read-write endpoints for anonymized branches" } ] }, "endpoints": [ { "host": "ep-fragrant-breeze-a1b2c3d4.us-east-1.aws.neon.tech", "id": "ep-fragrant-breeze-a1b2c3d4", "project_id": "purple-moon-12345678", "branch_id": "br-divine-feather-a1b2c3d4", "autoscaling_limit_min_cu": 1, "autoscaling_limit_max_cu": 4, "region_id": "aws-us-east-1", "type": "read_write", "current_state": "init", "pending_state": "active", "settings": { "preload_libraries": { "use_defaults": false, "enabled_libraries": ["anon"] } }, "pooler_enabled": false, "pooler_mode": "transaction", "disabled": false, "passwordless_access": true, "creation_source": "console", "created_at": "2025-10-16T02:58:58Z", "updated_at": "2025-10-16T02:58:58Z", "proxy_host": "us-east-1.aws.neon.tech", "suspend_timeout_seconds": 0, "provisioner": "k8s-neonvm" } ], "operations": [ { "id": "262dc2ba-4d78-4b7b-bb9a-e29532385f3a", "project_id": "purple-moon-12345678", "branch_id": "br-divine-feather-a1b2c3d4", "action": "create_branch", "status": "running", "failures_count": 0, "created_at": "2025-10-16T02:58:58Z", "updated_at": "2025-10-16T02:58:58Z", "total_duration_ms": 0 }, { "id": "f9f52b52-9828-47e4-9842-c08c2a9c14d3", "project_id": "purple-moon-12345678", "branch_id": "br-divine-feather-a1b2c3d4", "endpoint_id": "ep-fragrant-breeze-a1b2c3d4", "action": "start_compute", "status": "scheduling", "failures_count": 0, "created_at": "2025-10-16T02:58:58Z", "updated_at": "2025-10-16T02:58:58Z", "total_duration_ms": 0 } ], "roles": [ { "branch_id": "br-divine-feather-a1b2c3d4", "name": "neondb_owner", "protected": false, "created_at": "2025-09-12T13:47:59Z", "updated_at": "2025-09-12T13:47:59Z" } ], "databases": [ { "id": 21560101, "branch_id": "br-divine-feather-a1b2c3d4", "name": "neondb", "owner_name": "neondb_owner", "created_at": "2025-09-12T13:47:59Z", "updated_at": "2025-09-12T13:47:59Z" } ], "connection_uris": [ { "connection_uri": "postgresql://neondb_owner:[REDACTED]@ep-fragrant-breeze-a1b2c3d4.us-east-1.aws.neon.tech/neondb?sslmode=require", "connection_parameters": { "database": "neondb", "password": "[REDACTED]", "role": "neondb_owner", "host": "ep-fragrant-breeze-a1b2c3d4.us-east-1.aws.neon.tech", "pooler_host": "ep-fragrant-breeze-a1b2c3d4-pooler.us-east-1.aws.neon.tech" } } ] } ``` ### Get anonymization status ``` GET /projects/{project_id}/branches/{branch_id}/anonymized_status ``` Retrieves the current status of an anonymized branch, including state and progress information. 
**Example request:**

```bash
curl -X GET \
  'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}/anonymized_status' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Accept: application/json'
```

Details: Response body

**State values:** `created`, `initialized`, `initialization_error`, `anonymizing`, `anonymized`, `error`. The response may include a `failed_at` timestamp if the operation failed.

```json
{
  "branch_id": "br-aged-salad-637688",
  "project_id": "simple-truth-637688",
  "state": "anonymizing",
  "status_message": "Anonymizing table mydb.public.users (3/5)",
  "created_at": "2022-11-30T18:25:15Z",
  "updated_at": "2022-11-30T18:30:22Z"
}
```

### Start anonymization

```
POST /projects/{project_id}/branches/{branch_id}/anonymize
```

Starts or restarts the anonymization process for branches in `initialized`, `error`, or `anonymized` state. Applies all defined masking rules.

**Example request:**

```bash
curl -X POST \
  'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}/anonymize' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Accept: application/json'
```

Details: Response body

```json
{
  "branch_id": "br-shiny-butterfly-w4393738",
  "project_id": "wild-sky-00366102",
  "state": "anonymized",
  "status_message": "Anonymization completed successfully (2 tables, 3 masking rules applied)",
  "created_at": "2025-11-01T14:01:39Z",
  "updated_at": "2025-11-01T14:01:41Z"
}
```

### Get masking rules

```
GET /projects/{project_id}/branches/{branch_id}/masking_rules
```

Retrieves all masking rules defined for the specified anonymized branch.

**Example request:**

```bash
curl -X GET \
  'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}/masking_rules' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Accept: application/json'
```

Details: Response body

```json
{
  "masking_rules": [
    {
      "database_name": "neondb",
      "schema_name": "public",
      "table_name": "users",
      "column_name": "age",
      "masking_function": "anon.random_int_between(25,65)"
    },
    {
      "database_name": "neondb",
      "schema_name": "public",
      "table_name": "users",
      "column_name": "email",
      "masking_function": "anon.dummy_free_email()"
    }
  ]
}
```

You can also query masking rules directly from the database:

```sql
SELECT * FROM anon.pg_masking_rules;
```

### Update masking rules

```
PATCH /projects/{project_id}/branches/{branch_id}/masking_rules
```

Updates masking rules for the specified anonymized branch. After updating, use the start anonymization endpoint to apply changes.

**Example request:**

```bash
curl -X PATCH \
  'https://console.neon.tech/api/v2/projects/{project_id}/branches/{branch_id}/masking_rules' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "masking_rules": [
      {
        "database_name": "neondb",
        "schema_name": "public",
        "table_name": "users",
        "column_name": "email",
        "masking_function": "anon.dummy_free_email()"
      }
    ]
  }'
```

Details: Response body

Returns the updated list of masking rules for the branch.
```json
{
  "masking_rules": [
    {
      "database_name": "neondb",
      "schema_name": "public",
      "table_name": "users",
      "column_name": "email",
      "masking_function": "anon.dummy_free_email()"
    }
  ]
}
```

## Related resources

- [PostgreSQL Anonymizer documentation](https://postgresql-anonymizer.readthedocs.io/)
- [Neon branching overview](https://neon.com/docs/introduction/branching)
- [Neon API reference](https://api-docs.neon.tech/reference/)

---

## Automate data anonymization with GitHub Actions

**Important**: The GitHub Actions workflow below uses manual SQL commands with PostgreSQL Anonymizer. Automation based on the Console/API approach documented above will be better supported by upcoming post-beta improvements. As an interim solution, you can automate anonymized branch creation using direct SQL commands as outlined below.

Creating anonymized database copies for development, testing, or preview environments can be automated with GitHub Actions. The following workflow creates anonymized Neon branches automatically whenever a pull request is opened or updated.

**What you'll achieve for each pull request:**

- Automatic creation of a new Neon branch
- Installation and initialization of the PostgreSQL Anonymizer extension
- Application of predefined masking rules to sensitive fields
- A ready-to-use anonymized dataset for use in CI, preview environments, or manual testing

## Requirements

Before setting up the GitHub Action:

- A **Neon project** with a populated parent branch
- The following GitHub repository secrets:
  - `NEON_PROJECT_ID`
  - `NEON_API_KEY`

**Tip**: The Neon GitHub integration can configure these secrets automatically. See [Neon GitHub integration](https://neon.com/docs/guides/neon-github-integration).

## Set up the GitHub Actions workflow

Create a file at `.github/workflows/create-anon-branch.yml` (or similar) with the following content. It implements the same masking rules we used in the manual approach:

**Note**: This simple workflow example covers the basics. For production use, consider enhancing it with error handling, retry logic, and additional security controls.

```yaml
name: PR Open - Create Branch, Run Static Anonymization

on:
  pull_request:
    types: [opened]

jobs:
  on-pr-open:
    runs-on: ubuntu-latest
    steps:
      - name: Create branch
        uses: neondatabase/create-branch-action@v6
        id: create-branch
        with:
          project_id: ${{ secrets.NEON_PROJECT_ID }}
          branch_name: anon-pr-${{ github.event.number }}
          role: neondb_owner
          api_key: ${{ secrets.NEON_API_KEY }}
      - name: Confirm branch created
        run: echo branch_id ${{ steps.create-branch.outputs.branch_id }}
      - name: Confirm connection possible
        run: |
          echo "Checking connection to the database..."
          psql "${{ steps.create-branch.outputs.db_url }}" -c "SELECT NOW();"
      - name: Enable anon extension
        run: |
          echo "Initializing the extension..."
          psql "${{ steps.create-branch.outputs.db_url }}" <