# Flatfile > ## Documentation Index --- # Source: https://flatfile.com/docs/coding-tutorial/101-your-first-listener/101.01-first-listener.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # 01: Creating Your First Listener > Learn to set up a basic Listener with Space configuration to define your data structure and workspace layout. If you aren't interested in a code-forward approach, we recommend starting with [AutoBuild](/getting-started/quickstart/autobuild.mdx), which uses AI to analyze your template or documentation and then automatically creates and deploys a [Blueprint](/core-concepts/blueprints) (for schema definition) and a [Listener](/core-concepts/listeners) (for validations and transformations) to your [Flatfile App](/core-concepts/apps). Once you've started with AutoBuild, you can always download your Listener code and continue building with code from there! ## What We're Building In this tutorial, we'll build a foundational Listener that handles Space configuration - the essential first step for any Flatfile implementation. Our Listener will: * **Respond to Space creation**: When a user creates a new [Space](/core-concepts/spaces), our Listener will automatically configure it * **Define the Blueprint**: Set up a single [Workbook](/core-concepts/workbooks) with a single [Sheet](/core-concepts/sheets) with [Field](/core-concepts/fields) definitions for names and emails that establishes the data schema for the Space * **Handle the complete Job lifecycle**: Acknowledge, update progress, and complete the configuration [Job](/core-concepts/jobs) with proper error handling * **Provide user feedback**: Give real-time updates during the configuration process This forms the foundation that you'll build upon in the next parts of this series, where we'll add user Actions and data validation. By the end of this tutorial, you'll have a working Listener that creates a fully configured workspace ready for data import. ## Prerequisites Before we start coding, you'll need a Flatfile account and a fresh project directory: 1. **Create a new project directory**: Start in a fresh directory for this tutorial (e.g., `mkdir my-flatfile-listener && cd my-flatfile-listener`) 2. **Sign up for Flatfile**: Visit [platform.flatfile.com](https://platform.flatfile.com) and create your free account 3. **Get your credentials**: You'll need your Secret Key and Environment ID from the [Keys & Secrets](https://platform.flatfile.com/dashboard/keys-and-secrets) section later in this tutorial **New to Flatfile?** If you'd like to understand the broader data structure and concepts before diving into code, we recommend reading through the [Core Concepts](/core-concepts/overview) section first. This covers the foundational elements like [Environments](/core-concepts/environments), [Apps](/core-concepts/apps), and [Spaces](/core-concepts/spaces), as well as our data structure like [Workbooks](/core-concepts/workbooks) and [Sheets](/core-concepts/sheets), and how they all work together. Each Listener is deployed to a specific Environment, allowing you to set up separate Environments for development, staging, and production to safely test code changes before deploying to production. 
## Install Dependencies

Choose your preferred language and follow the setup steps:

```bash JavaScript theme={null}
# Initialize project (skip if you already have package.json)
npm init -y

# Install required Flatfile packages
npm install @flatfile/listener @flatfile/api

# Note: Feel free to use your preferred JavaScript project setup method instead
```

```bash TypeScript theme={null}
# Initialize project (skip if you already have package.json)
npm init -y

# Install required Flatfile packages
npm install @flatfile/listener @flatfile/api

# Install TypeScript dev dependency
npm install --save-dev typescript

# Initialize TypeScript config (skip if you already have tsconfig.json)
npx tsc --init

# Note: Feel free to use your preferred TypeScript project setup method instead
```

### Authentication Setup

For this step, you'll need to get your Secret Key and Environment ID from your [Flatfile Dashboard](https://platform.flatfile.com/dashboard/keys-and-secrets). Then create a new file called `.env` and add the following (populated with your own values):

```bash theme={null}
# .env
FLATFILE_API_KEY="your_secret_key"
FLATFILE_ENVIRONMENT_ID="us_env_your_environment_id"
```

## Create Your Listener File

Create a new file called `index.js` for JavaScript or `index.ts` for TypeScript:

```javascript JavaScript theme={null}
import api from "@flatfile/api";

export default function (listener) {
  // Configure the Space when it's created
  listener.on("job:ready", { job: "space:configure" }, async (event) => {
    const { jobId, spaceId } = event.context;

    try {
      // Acknowledge the job
      await api.jobs.ack(jobId, {
        info: "Setting up your workspace...",
        progress: 10,
      });

      // Create the Workbook with Sheets, creating the Blueprint for the space
      await api.workbooks.create({
        spaceId,
        name: "My Workbook",
        sheets: [
          {
            name: "contacts",
            slug: "contacts",
            fields: [
              { key: "name", type: "string", label: "Full Name" },
              { key: "email", type: "string", label: "Email" },
            ],
          },
        ],
      });

      // Update progress
      await api.jobs.update(jobId, {
        info: "Workbook created successfully",
        progress: 75,
      });

      // Complete the job
      await api.jobs.complete(jobId, {
        outcome: {
          message: "Workspace configured successfully!",
          acknowledge: true,
        },
      });
    } catch (error) {
      console.error("Error configuring Space:", error);

      // Fail the job if something goes wrong
      await api.jobs.fail(jobId, {
        outcome: {
          message: `Failed to configure workspace: ${error.message}`,
          acknowledge: true,
        },
      });
    }
  });
}
```

```typescript TypeScript theme={null}
import type { FlatfileListener } from "@flatfile/listener";
import api from "@flatfile/api";

export default function (listener: FlatfileListener) {
  // Configure the Space when it's created
  listener.on("job:ready", { job: "space:configure" }, async (event) => {
    const { jobId, spaceId } = event.context;

    try {
      // Acknowledge the job
      await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10 });

      // Create the Workbook with Sheets, creating the Blueprint for the space
      await api.workbooks.create({
        spaceId,
        name: "My Workbook",
        sheets: [
          {
            name: "contacts",
            slug: "contacts",
            fields: [
              { key: "name", type: "string", label: "Full Name" },
              { key: "email", type: "string", label: "Email" },
            ],
          },
        ],
      });

      // Update progress
      await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75 });

      // Complete the job
      await api.jobs.complete(jobId, {
        outcome: { message: "Workspace configured successfully!", acknowledge: true },
      });
    } catch (error) {
      console.error("Error configuring Space:", error);

      // Fail the job if something goes wrong
      await api.jobs.fail(jobId, {
        outcome: {
          message: `Failed to configure workspace: ${error instanceof Error ? error.message : 'Unknown error'}`,
          acknowledge: true,
        },
      });
    }
  });
}
```

**Complete Example**: The full working code for this tutorial step is available in our Getting Started repository: [JavaScript](https://github.com/FlatFilers/getting-started/tree/main/101.01-first-listener/javascript) | [TypeScript](https://github.com/FlatFilers/getting-started/tree/main/101.01-first-listener/typescript)

## Project Structure

After creating your Listener file, your project directory should look like this:

```text JavaScript theme={null}
my-flatfile-listener/
├── .env              // Environment variables
├── index.js          // Listener code
│
│   /* Node-specific files below */
│
├── package.json
├── package-lock.json
└── node_modules/
```

```text TypeScript theme={null}
my-flatfile-listener/
├── .env              // Environment variables
├── index.ts          // Listener code
│
│   /* Node and TypeScript-specific files below */
│
├── package.json
├── package-lock.json
├── tsconfig.json
└── node_modules/
```

## Testing Your Listener

### Local Development

To test your Listener locally, you can use the `flatfile develop` command. This will start a local server that runs your custom Listener code, and will also watch for changes to your code and automatically reload the server.

```bash theme={null}
# Run locally with file watching
npx flatfile develop
```

### Step-by-Step Testing

After running your listener locally:

1. Create a new Space in your Flatfile Environment
2. Observe as the new Space is configured with a Workbook and Sheet

## What Just Happened?

Your Listener is now ready to respond to Space configuration Events! Here's how the space configuration works step by step:

### 1. Exporting your Listener function

This is the base structure of your Listener. At its core, it's just a function that takes a `listener` object as an argument, and then uses that listener to respond to Events.

```javascript JavaScript theme={null}
export default function (listener) {
  // . . . code
}
```

```typescript TypeScript theme={null}
export default function (listener: FlatfileListener) {
  // . . . code
}
```

### 2. Listen for Space Configuration

When a new Space is created, Flatfile automatically triggers a `space:configure` job that your Listener can handle. This code listens for that job using the `job:ready` Event, filtered by the job name `space:configure`.

```javascript JavaScript theme={null}
listener.on("job:ready", { job: "space:configure" }, async (event) => {
  // . . . code
});
```

```typescript TypeScript theme={null}
listener.on("job:ready", { job: "space:configure" }, async (event) => {
  // . . . code
});
```

### 3. Acknowledge the Job

The first step is always to acknowledge that you've received the job and provide initial feedback to users. From this point on, we're responsible for the rest of the job lifecycle, and we'll be doing it all in this Listener. For more information on Jobs, see the [Jobs](/core-concepts/jobs) concept.
```javascript JavaScript theme={null} await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10, }); ``` ```typescript TypeScript theme={null} await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10 }); ``` ### 4. Define the Blueprint Next, we create the workbook with sheets and field definitions. This **is** your [Blueprint](/core-concepts/blueprints) definition—establishing the data schema that will govern all data within this Space. ```javascript JavaScript theme={null} await api.workbooks.create({ spaceId, name: "My Workbook", sheets: [ { name: "contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Full Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], }); ``` ```typescript TypeScript theme={null} await api.workbooks.create({ spaceId, name: "My Workbook", sheets: [ { name: "contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Full Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], }); ``` ### 5. Update Progress Keep users informed about what's happening during the configuration process. ```javascript JavaScript theme={null} await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75, }); ``` ```typescript TypeScript theme={null} await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75 }); ``` ### 6. Complete the Job Finally, mark the job as complete with a success message, or fail it if something went wrong. ```javascript JavaScript theme={null} // Success case await api.jobs.complete(jobId, { outcome: { message: "Workspace configured successfully!", acknowledge: true, }, }); // Failure case await api.jobs.fail(jobId, { outcome: { message: `Failed to configure workspace: ${error.message}`, acknowledge: true, }, }); ``` ```typescript TypeScript theme={null} // Success case await api.jobs.complete(jobId, { outcome: { message: "Workspace configured successfully!", acknowledge: true } }); // Failure case await api.jobs.fail(jobId, { outcome: { message: `Failed to configure workspace: ${error instanceof Error ? error.message : 'Unknown error'}`, acknowledge: true } }); ``` This follows the standard Job pattern: **acknowledge → update progress → complete** (or fail on error). This provides users with real-time feedback and ensures robust error handling throughout the configuration process. ## Next Steps Ready to enhance data quality? Continue to [Adding Validation](/coding-tutorial/101-your-first-listener/101.02-adding-validation) to learn how to validate Fields and provide real-time feedback to users. For more detailed information: * Understand Job lifecycle patterns in [Jobs](/core-concepts/jobs) and [Spaces](/core-concepts/spaces) * Learn more about [Events](/reference/events) * Organize your Listeners with [Namespaces](/guides/namespaces-and-filters) --- # Source: https://flatfile.com/docs/coding-tutorial/101-your-first-listener/101.02-adding-validation.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # 02: Adding Validation to Your Listener > Enhance your listener with data validation capabilities to ensure data quality and provide real-time feedback to users. In the [previous guide](/coding-tutorial/101-your-first-listener/101.01-first-listener), we created a Listener that configures Spaces and sets up the data structure. 
Now we'll add data validation to ensure data quality and provide helpful feedback to users as they work with their data. **Following along?** Download the starting code from our [Getting Started repository](https://github.com/FlatFilers/getting-started/tree/main/101.01-first-listener) and refactor it as we go, or jump directly to the [final version with validation](https://github.com/FlatFilers/getting-started/tree/main/101.02-adding-validation). ## What Is Data Validation? Data validation in Flatfile allows you to: * Check data formats and business rules * Provide warnings and errors to guide users * Ensure data quality before processing * Give real-time feedback during data entry Validation can happen at different levels: * **Field-level**: Validate individual [Field](/core-concepts/fields) values (email format, date ranges, etc.) * **Record-level**: Validate relationships between [Fields](/core-concepts/fields) in a single [Record](/core-concepts/records) * **Sheet-level**: Validate across all [Records](/core-concepts/records) (duplicates, unique constraints, etc.) ## Email Validation Example This example shows how to perform email format validation directly when Records are committed. When users commit their changes, we validate that email addresses have a proper format and provide helpful feedback for any invalid emails. This approach validates Records as they're committed, providing immediate feedback to users. For more complex validations or when you need an object-oriented approach, we recommend using the [Record Hook](/plugins/record-hook) plugin. If you use both [Record Hooks](/plugins/record-hook) and regular listener validators (like this one) on the same sheet, you may encounter race conditions. Record Hooks will clear all existing messages before applying new ones, which can interfere with any messages set elsewhere. We have ways to work around this, but it's a good idea to avoid using both at the same time. ## What Changes We're Making To add validation to our basic Listener, we'll add a listener that triggers when users commit their changes and performs validation directly: ```javascript theme={null} listener.on("commit:created", async (event) => { const { sheetId } = event.context; // Get committed records and validate email format const response = await api.records.get(sheetId); const records = response.data.records; // Email validation logic here... 
}); ``` ## Complete Example with Validation Here's how to add email validation to your existing Listener: ```javascript JavaScript theme={null} import api from "@flatfile/api"; export default function (listener) { // Configure the space when it's created listener.on("job:ready", { job: "space:configure" }, async (event) => { const { jobId, spaceId } = event.context; try { // Acknowledge the job await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10, }); // Create the workbook with sheets await api.workbooks.create({ spaceId, name: "My Workbook", sheets: [ { name: "contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Full Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], }); // Update progress await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75, }); // Complete the job await api.jobs.complete(jobId, { outcome: { message: "Workspace configured successfully!", acknowledge: true, }, }); } catch (error) { console.error("Error configuring space:", error); // Fail the job if something goes wrong await api.jobs.fail(jobId, { outcome: { message: `Failed to configure workspace: ${error.message}`, acknowledge: true, }, }); } }); // Listen for commits and validate email format listener.on("commit:created", async (event) => { const { sheetId } = event.context; try { // Get records from the sheet const response = await api.records.get(sheetId); const records = response.data.records; // Simple email validation regex const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // Prepare updates for records with invalid emails const updates = []; for (const record of records) { const emailValue = record.values.email?.value; if (emailValue) { const email = emailValue.toLowerCase(); if (!emailRegex.test(email)) { updates.push({ id: record.id, values: { email: { value: email, messages: [ { type: "error", message: "Please enter a valid email address (e.g., user@example.com)", }, ], }, }, }); } } } // Update records with validation messages if (updates.length > 0) { await api.records.update(sheetId, updates); } } catch (error) { console.error("Error during validation:", error); } }); } ``` ```typescript TypeScript theme={null} import type { FlatfileListener } from "@flatfile/listener"; import api, { Flatfile } from "@flatfile/api"; export default function (listener: FlatfileListener) { // Configure the space when it's created listener.on("job:ready", { job: "space:configure" }, async (event) => { const { jobId, spaceId } = event.context; try { // Acknowledge the job await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10 }); // Create the workbook with sheets await api.workbooks.create({ spaceId, name: "My Workbook", sheets: [ { name: "contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Full Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], }); // Update progress await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75 }); // Complete the job await api.jobs.complete(jobId, { outcome: { message: "Workspace configured successfully!", acknowledge: true } }); } catch (error) { console.error("Error configuring space:", error); // Fail the job if something goes wrong await api.jobs.fail(jobId, { outcome: { message: `Failed to configure workspace: ${error instanceof Error ? 
error.message : 'Unknown error'}`, acknowledge: true } }); } }); // Listen for commits and validate email format listener.on("commit:created", async (event) => { const { sheetId } = event.context; try { // Get records from the sheet const response = await api.records.get(sheetId); const records = response.data.records; // Simple email validation regex const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // Prepare updates for records with invalid emails const updates: Flatfile.RecordWithLinks[] = []; for (const record of records) { const emailValue = record.values.email?.value as string; if (emailValue) { const email = emailValue.toLowerCase(); if (!emailRegex.test(email)) { updates.push({ id: record.id, values: { email: { value: email, messages: [{ type: "error", message: "Please enter a valid email address (e.g., user@example.com)", }], }, }, }); } } } // Update records with validation messages if (updates.length > 0) { await api.records.update(sheetId, updates); } } catch (error) { console.error("Error during validation:", error); } }); } ``` **Complete Example**: The full working code for this tutorial step is available in our Getting Started repository: [JavaScript](https://github.com/FlatFilers/getting-started/tree/main/101.02-adding-validation/javascript) | [TypeScript](https://github.com/FlatFilers/getting-started/tree/main/101.02-adding-validation/typescript) ## Testing Your Validation ### Local Development To test your Listener locally, you can use the `flatfile develop` command. This will start a local server that will listen for Events and respond to them, and will also watch for changes to your Listener code and automatically reload the server. ```bash theme={null} # Run locally with file watching npx flatfile develop ``` ### Step-by-Step Testing After running your listener locally: 1. Create a new space in your Flatfile environment 2. Enter an invalid email address in the Email Field 3. See error messages appear on invalid email Fields 4. Fix the emails and see the error messages disappear ## What Just Happened? Your Listener now handles two key Events: 1. **`space:configure`** - Sets up the data structure 2. **`commit:created`** - Validates email format when users commit changes Here's how the email validation works step by step: ### 1. Listen for Commits This listener triggers whenever users save their changes to any sheet in the workbook. ```javascript JavaScript theme={null} listener.on("commit:created", async (event) => { const { sheetId } = event.context; ``` ```typescript TypeScript theme={null} listener.on("commit:created", async (event) => { const { sheetId } = event.context; ``` ### 2. Get the Records We retrieve all records from the sheet to validate them. ```javascript JavaScript theme={null} const response = await api.records.get(sheetId); const records = response.data.records; ``` ```typescript TypeScript theme={null} const response = await api.records.get(sheetId); const records = response.data.records; ``` ### 3. Validate Email Format We use a simple regex pattern to check if each email follows the basic `user@domain.com` format. 
```javascript JavaScript theme={null} const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; for (const record of records) { const emailValue = record.values.email?.value; if (emailValue && !emailRegex.test(emailValue.toLowerCase())) { // Add validation error } } ``` ```typescript TypeScript theme={null} const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; for (const record of records) { const emailValue = record.values.email?.value as string; if (emailValue && !emailRegex.test(emailValue.toLowerCase())) { // Add validation error } } ``` ### 4. Add Error Messages For invalid emails, we create an update that adds an error message to that specific field. ```javascript JavaScript theme={null} updates.push({ id: record.id, values: { email: { value: email, messages: [{ type: "error", message: "Please enter a valid email address (e.g., user@example.com)", }], }, }, }); ``` ```typescript TypeScript theme={null} updates.push({ id: record.id, values: { email: { value: email, messages: [{ type: "error", message: "Please enter a valid email address (e.g., user@example.com)", }], }, }, }); ``` You can apply different types of validation messages: * **`info`**: Informational messages (mouseover tooltip) * **`warn`**: Warnings that don't block processing (yellow) * **`error`**: Errors that should be fixed, blocks [Actions](/core-concepts/actions) with the `hasAllValid` constraint (red) ### 5. Update Records Finally, we send all validation messages back to the sheet so users can see the errors. ```javascript JavaScript theme={null} if (updates.length > 0) { await api.records.update(sheetId, updates); } ``` ```typescript TypeScript theme={null} if (updates.length > 0) { await api.records.update(sheetId, updates); } ``` ## Next Steps Ready to make your Listener interactive? Continue to [Adding Actions](/coding-tutorial/101-your-first-listener/101.03-adding-actions) to learn how to handle user submissions and create custom workflows. For more detailed information: * Understand Job lifecycle patterns in [Jobs](/core-concepts/jobs) and [Spaces](/core-concepts/spaces) * Learn more about [Events](/reference/events) * Organize your Listeners with [Namespaces](/guides/namespaces-and-filters) * Explore [plugins](/core-concepts/plugins): [Job Handler](/plugins/job-handler) and [Space Configure](/plugins/space-configure) * Check out [Record Hook](/plugins/record-hook) for simpler Field-level validations --- # Source: https://flatfile.com/docs/coding-tutorial/101-your-first-listener/101.03-adding-actions.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # 03: Adding Actions to Your Listener > Build on your basic Listener by adding user Actions to create interactive data processing workflows. In the [previous guides](/coding-tutorial/101-your-first-listener/101.02-adding-validation), we created a Listener with Space configuration and data validation. Now we'll extend that Listener to handle user Actions, allowing users to submit and process their data. **Following along?** Download the starting code from our [Getting Started repository](https://github.com/FlatFilers/getting-started/tree/main/101.02-adding-validation) and refactor it as we go, or jump directly to the [final version with actions](https://github.com/FlatFilers/getting-started/tree/main/101.03-adding-actions). ## What Are Actions? 
[Actions](/core-concepts/actions) are interactive buttons that appear in the Flatfile interface, allowing users to trigger custom operations on their data. Common Actions include: * **Submit**: Process your data and POST it to your system via API * **Validate**: Run custom validation rules * **Transform**: Apply data transformations * **Export**: Generate reports or exports For more detail on using Actions, see our [Actions](/guides/using-actions) guide. ## What Changes We're Making To add Actions to our Listener with validation, we need to make two specific changes: ### 1. Add Actions Array to Blueprint Definition In the `space:configure` Listener, we'll add an `actions` array to our Workbook creation. This enhances our [Blueprint](/core-concepts/blueprints) to include interactive elements: ```javascript theme={null} actions: [ { label: "Submit", description: "Send data to destination system", operation: "submitActionForeground", mode: "foreground", }, ] ``` ### 2. Add Action Handler Listener We'll add a new Listener to handle when users click the Submit button: ```javascript theme={null} listener.on( "job:ready", { job: "workbook:submitActionForeground" }, async (event) => { // Handle the action... } ); ``` ## Complete Example with Actions This example builds on the Listener we created in the [previous tutorials](/coding-tutorial/101-your-first-listener/101.02-adding-validation). It includes the complete functionality: Space configuration, email validation, and Actions. ```javascript JavaScript theme={null} import api from "@flatfile/api"; export default function (listener) { // Configure the space when it's created listener.on("job:ready", { job: "space:configure" }, async (event) => { const { jobId, spaceId } = event.context; try { // Acknowledge the job await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10, }); // Create the Workbook with Sheets and Actions await api.workbooks.create({ spaceId, name: "My Workbook", sheets: [ { name: "contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Full Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], actions: [ { label: "Submit", description: "Send data to destination system", operation: "submitActionForeground", mode: "foreground", primary: true, }, ], }); // Update progress await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75, }); // Complete the job await api.jobs.complete(jobId, { outcome: { message: "Workspace configured successfully!", acknowledge: true, }, }); } catch (error) { console.error("Error configuring space:", error); // Fail the job if something goes wrong await api.jobs.fail(jobId, { outcome: { message: `Failed to configure workspace: ${error.message}`, acknowledge: true, }, }); } }); // Handle when someone clicks Submit listener.on( "job:ready", { job: "workbook:submitActionForeground" }, async (event) => { const { jobId, workbookId } = event.context; try { // Acknowledge the job await api.jobs.ack(jobId, { info: "Starting data processing...", progress: 10, }); // Get the data const job = await api.jobs.get(jobId); // Update progress await api.jobs.update(jobId, { info: "Retrieving records...", progress: 30, }); // Get the sheets const { data: sheets } = await api.sheets.list({ workbookId }); // Get and count the records const records = {}; let recordsCount = 0; for (const sheet of sheets) { const { data: { records: sheetRecords }, } = await api.records.get(sheet.id); records[sheet.name] = sheetRecords; recordsCount += 
sheetRecords.length; } // Update progress await api.jobs.update(jobId, { info: `Processing ${sheets.length} sheets with ${recordsCount} records...`, progress: 60, }); // Process the data (log to console for now) console.log("Processing records:", JSON.stringify(records, null, 2)); // Complete the job await api.jobs.complete(jobId, { outcome: { message: `Successfully processed ${sheets.length} sheets with ${recordsCount} records!`, acknowledge: true, }, }); } catch (error) { console.error("Error processing data:", error); // Fail the job if something goes wrong await api.jobs.fail(jobId, { outcome: { message: `Data processing failed: ${error.message}`, acknowledge: true, }, }); } }, ); // Listen for commits and validate email format listener.on("commit:created", async (event) => { const { sheetId } = event.context; try { // Get records from the sheet const response = await api.records.get(sheetId); const records = response.data.records; // Simple email validation regex const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // Prepare updates for records with invalid emails const updates = []; for (const record of records) { const emailValue = record.values.email?.value; if (emailValue) { const email = emailValue.toLowerCase(); if (!emailRegex.test(email)) { updates.push({ id: record.id, values: { email: { value: email, messages: [ { type: "error", message: "Please enter a valid email address (e.g., user@example.com)", }, ], }, }, }); } } } // Update records with validation messages if (updates.length > 0) { await api.records.update(sheetId, updates); } } catch (error) { console.error("Error during validation:", error); } }); } ``` ```typescript TypeScript theme={null} import type { FlatfileListener } from "@flatfile/listener"; import api, { Flatfile } from "@flatfile/api"; export default function (listener: FlatfileListener) { // Configure the space when it's created listener.on("job:ready", { job: "space:configure" }, async (event) => { const { jobId, spaceId } = event.context; try { // Acknowledge the job await api.jobs.ack(jobId, { info: "Setting up your workspace...", progress: 10 }); // Create the Workbook with Sheets and Actions await api.workbooks.create({ spaceId, name: "My Workbook", sheets: [ { name: "contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Full Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], actions: [ { label: "Submit", description: "Send data to destination system", operation: "submitActionForeground", mode: "foreground", primary: true, }, ], }); // Update progress await api.jobs.update(jobId, { info: "Workbook created successfully", progress: 75 }); // Complete the job await api.jobs.complete(jobId, { outcome: { message: "Workspace configured successfully!", acknowledge: true } }); } catch (error) { console.error("Error configuring space:", error); // Fail the job if something goes wrong await api.jobs.fail(jobId, { outcome: { message: `Failed to configure workspace: ${error instanceof Error ? 
error.message : 'Unknown error'}`, acknowledge: true } }); } }); // Handle when someone clicks Submit listener.on( "job:ready", { job: "workbook:submitActionForeground" }, async (event) => { const { jobId, workbookId } = event.context; try { // Acknowledge the job await api.jobs.ack(jobId, { info: "Starting data processing...", progress: 10 }); // Get the data const job = await api.jobs.get(jobId); // Update progress await api.jobs.update(jobId, { info: "Retrieving records...", progress: 30 }); // Get the sheets const { data: sheets } = await api.sheets.list({ workbookId }); // Get and count the records const records: { [name: string]: any[] } = {}; let recordsCount = 0; for (const sheet of sheets) { const { data: { records: sheetRecords}} = await api.records.get(sheet.id); records[sheet.name] = sheetRecords; recordsCount += sheetRecords.length; } // Update progress await api.jobs.update(jobId, { info: `Processing ${sheets.length} sheets with ${recordsCount} records...`, progress: 60 }); // Process the data (log to console for now) console.log("Processing records:", JSON.stringify(records, null, 2)); // Complete the job await api.jobs.complete(jobId, { outcome: { message: `Successfully processed ${sheets.length} sheets with ${recordsCount} records!`, acknowledge: true } }); } catch (error) { console.error("Error processing data:", error); // Fail the job if something goes wrong await api.jobs.fail(jobId, { outcome: { message: `Data processing failed: ${error instanceof Error ? error.message : 'Unknown error'}`, acknowledge: true } }); } } ); // Listen for commits and validate email format listener.on("commit:created", async (event) => { const { sheetId } = event.context; try { // Get records from the sheet const response = await api.records.get(sheetId); const records = response.data.records; // Simple email validation regex const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // Prepare updates for records with invalid emails const updates: Flatfile.RecordWithLinks[] = []; for (const record of records) { const emailValue = record.values.email?.value as string; if (emailValue) { const email = emailValue.toLowerCase(); if (!emailRegex.test(email)) { updates.push({ id: record.id, values: { email: { value: email, messages: [{ type: "error", message: "Please enter a valid email address (e.g., user@example.com)", }], }, }, }); } } } // Update records with validation messages if (updates.length > 0) { await api.records.update(sheetId, updates); } } catch (error) { console.error("Error during validation:", error); } }); } ``` **Complete Example**: The full working code for this tutorial step is available in our Getting Started repository: [JavaScript](https://github.com/FlatFilers/getting-started/tree/main/101.03-adding-actions/javascript) | [TypeScript](https://github.com/FlatFilers/getting-started/tree/main/101.03-adding-actions/typescript) ## Understanding Action Modes Actions can run in different modes: * **`foreground`**: Runs immediately with real-time progress updates (good for quick operations) * **`background`**: Runs as a background job (good for longer operations) The Action operation name (`submitActionForeground`) determines which Listener will handle the Action. ## Testing Your Action ### Local Development To test your Listener locally, you can use the `flatfile develop` command. This will start a local server that will listen for Events and respond to them, and will also watch for changes to your Listener code and automatically reload the server. 
```bash theme={null}
# Run locally with file watching
npx flatfile develop
```

### Step-by-Step Testing

After running your listener locally:

1. Create a new Space in your Flatfile Environment
2. Upload (or manually enter) some data to the contacts Sheet with both valid and invalid email addresses
3. See validation errors appear on invalid email Fields
4. Click the "Submit" button
5. Watch the logging in the terminal as your data is processed and the job is completed

## What Just Happened?

Your Listener now handles three key Events and provides a complete data import workflow. Here's how the new action handling works:

### 1. Blueprint Definition with Actions

We enhanced the [Blueprint](/core-concepts/blueprints) definition to include action buttons that users can interact with. Adding actions to your workbook configuration is part of defining your Blueprint.

```javascript JavaScript theme={null}
actions: [
  {
    label: "Submit",
    description: "Send data to destination system",
    operation: "submitActionForeground",
    mode: "foreground",
    primary: true,
  },
]
```

```typescript TypeScript theme={null}
actions: [
  {
    label: "Submit",
    description: "Send data to destination system",
    operation: "submitActionForeground",
    mode: "foreground",
    primary: true,
  },
]
```

### 2. Listen for Action Events

When users click the Submit button, Flatfile triggers a [Job](/core-concepts/jobs) that your listener can handle using the same approach we used for the `space:configure` job in [101.01](/coding-tutorial/101-your-first-listener/101.01-first-listener#2-listen-for-space-configuration).

Jobs are named with the pattern `{domain}:{operation}`. In this case, the domain is `workbook` since we've mounted the Action to the Workbook blueprint, and the operation is `submitActionForeground` as defined in the Action definition.

```javascript JavaScript theme={null}
listener.on(
  "job:ready",
  { job: "workbook:submitActionForeground" },
  async (event) => {
    const { jobId, workbookId } = event.context;
```

```typescript TypeScript theme={null}
listener.on(
  "job:ready",
  { job: "workbook:submitActionForeground" },
  async (event) => {
    const { jobId, workbookId } = event.context;
```

### 3. Retrieve and Process Data

Get all the data from the workbook and process it according to your business logic.

```javascript JavaScript theme={null}
// Get the sheets
const { data: sheets } = await api.sheets.list({ workbookId });

// Get and count the records
const records = {};
let recordsCount = 0;
for (const sheet of sheets) {
  const {
    data: { records: sheetRecords },
  } = await api.records.get(sheet.id);
  records[sheet.name] = sheetRecords;
  recordsCount += sheetRecords.length;
}
```

```typescript TypeScript theme={null}
// Get the sheets
const { data: sheets } = await api.sheets.list({ workbookId });

// Get and count the records
const records: { [name: string]: any[] } = {};
let recordsCount = 0;
for (const sheet of sheets) {
  const {
    data: { records: sheetRecords },
  } = await api.records.get(sheet.id);
  records[sheet.name] = sheetRecords;
  recordsCount += sheetRecords.length;
}
```

### 4. Provide User Feedback

Keep users informed about the processing with progress updates and final results.
```javascript JavaScript theme={null} // Update progress during processing await api.jobs.update(jobId, { info: `Processing ${sheets.length} sheets with ${recordsCount} records...`, progress: 60, }); // Complete with success message await api.jobs.complete(jobId, { outcome: { message: `Successfully processed ${sheets.length} sheets with ${recordsCount} records!`, acknowledge: true, }, }); ``` ```typescript TypeScript theme={null} // Update progress during processing await api.jobs.update(jobId, { info: `Processing ${sheets.length} sheets with ${recordsCount} records...`, progress: 60 }); // Complete with success message await api.jobs.complete(jobId, { outcome: { message: `Successfully processed ${sheets.length} sheets with ${recordsCount} records!`, acknowledge: true } }); ``` Your complete Listener now handles: * **`space:configure`** - Defines the Blueprint with interactive actions * **`commit:created`** - Validates email format when users commit changes * **`workbook:submitActionForeground`** - Processes data when users click Submit The Action follows the same Job lifecycle pattern: **acknowledge → update progress → complete** (or fail on error). This provides users with real-time feedback during data processing, while validation ensures data quality throughout the import process. ## Next Steps Congratulations! You now have a complete Listener that handles Space configuration, data validation, and user Actions. For more detailed information: * Learn more about [Actions](/guides/using-actions) * Understand Job lifecycle patterns in [Jobs](/core-concepts/jobs) * Learn more about [Events](/reference/events) * Organize your Listeners with [Namespaces](/guides/namespaces-and-filters) * Explore [plugins](/core-concepts/plugins): [Job Handler](/plugins/job-handler) and [Space Configure](/plugins/space-configure) --- # Source: https://flatfile.com/docs/guides/accepting-additional-fields.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Accepting Additional Fields > Create additional fields on the fly The `allowAdditionalFields` feature offers a fluid integration experience, allowing users to effortlessly map to new or unconfigured fields in your Blueprints. ## How it works * By enabling `allowAdditionalFields`, your Sheet isn't restricted to the initial configuration. It can adapt to include new fields, whether they're anticipated or not. * These supplementary fields can either be added through API calls or input directly by users during the file import process. * To ensure clarity, any field that wasn't part of the original Blueprint configuration is flagged with a `treatment` property labeled `user-defined`. * When adding a custom field, there's no need to fuss over naming the field. The system intuitively adopts the header name from the imported file, streamlining the process. In essence, the `allowAdditionalFields` feature is designed for scalability and ease, ensuring your Blueprints are always ready for unexpected data fields. 
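To make the `treatment` flag concrete: if a user maps an extra "Nickname" column during import, the resulting field would come back from the API looking roughly like the sketch below. This is an illustrative assumption (the `nickname` key, label, and exact response shape are hypothetical); the `"user-defined"` treatment is the behavior described above.

```json theme={null}
{
  "key": "nickname",
  "label": "Nickname",
  "type": "string",
  "treatment": "user-defined"
}
```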
## Example Blueprint w/ `allowAdditionalFields`

```json theme={null}
{
  "sheets": [
    {
      "name": "Contacts",
      "slug": "contacts",
      "allowAdditionalFields": true,
      "fields": [
        {
          "key": "firstName",
          "label": "First Name",
          "type": "string"
        },
        {
          "key": "lastName",
          "label": "Last Name",
          "type": "string"
        },
        {
          "key": "email",
          "label": "Email",
          "type": "string"
        }
      ]
    }
  ]
}
```

---

# Source: https://flatfile.com/docs/core-concepts/actions.md

> ## Documentation Index
> Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Actions

> User-triggered operations in Flatfile

An Action is a code-based operation that runs when a user clicks a button or menu item in Flatfile. Actions can be mounted on [Sheets](/core-concepts/sheets), [Workbooks](/core-concepts/workbooks), [Documents](/core-concepts/documents), or Files to trigger custom operations.

Defining a custom Action is a two-step process:

1. Define an Action in your Flatfile blueprint or in your code
2. Create a [Listener](/core-concepts/listeners) to handle the Action

When an Action is triggered, it creates a [Job](/core-concepts/jobs) that your application can listen for and respond to. Given that Actions are powered by Jobs, the [Jobs Lifecycle](/core-concepts/jobs#jobs-lifecycle) pertains to Actions as well. This means that you can [update progress values/messages](/core-concepts/jobs#updating-job-progress) while an Action is processing, and when it's done you can provide an [Outcome](/core-concepts/jobs#job-outcomes), which allows you to show a success message, automatically [download a generated file](/core-concepts/jobs#file-downloads), or [forward the user](/core-concepts/jobs#internal-navigation) to a generated Document.

For complete implementation details, see our [Using Actions guide](/guides/using-actions).

## Types of Actions

### Built-in Actions

Resources in Flatfile come with several default built-in actions like:

* Export/download data
* Delete data or files
* Find and replace (Sheets)

### Developer-Created Actions

You can create custom Actions to handle operations specific to your workflow, such as:

* Sending data to your API when data is ready
* Downloading your data in a specific format
* Validating data against external systems
* Moving data between different resources
* Custom data validations and transformations

## Where Actions Appear

Actions appear in different parts of the UI depending on where they're mounted:

* **Workbook Actions**: Buttons in the top-right corner of Workbooks
* **Sheet Actions**: Dropdown menu in the Sheet toolbar (or top-level button if marked as `primary`)
* **Document Actions**: Buttons in the top-right corner of Documents
* **File Actions**: Dropdown menu for each file in the Files list

## Example Action Configuration

Every Action requires an `operation` (unique identifier) and `label` (display text):

```javascript theme={null}
{
  operation: "submitActionBg",
  mode: "background",
  label: "Submit",
  type: "string",
  description: "Submit this data to a webhook.",
  primary: true,
},
```

Actions support additional options like `primary` status, confirmation dialogs, constraints, and input forms. See the [Using Actions guide](/guides/using-actions) for more details.

---

# Source: https://flatfile.com/docs/embedding/advanced-configuration.md

> ## Documentation Index
> Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt
> Use this file to discover all available pages before exploring further.
# Advanced Configuration

> Complete configuration reference for embedded Flatfile

This reference covers all configuration options for embedded Flatfile, from basic setup to advanced customization.

## Authentication & Security

### publishableKey

Your publishable key authenticates your application with Flatfile. This key is safe to include in client-side code.

**Where to find it:**

1. Log into [Platform Dashboard](https://platform.flatfile.com)
2. Navigate to **Developer Settings** → **API Keys**
3. Copy your **Publishable Key** (starts with `pk_`)

```javascript theme={null}
// Example usage
const config = {
  publishableKey: "pk_1234567890abcdef", // Your actual key
};
```

### Security Best Practices

#### Environment Variables

Store your publishable key in environment variables rather than hardcoding:

```javascript theme={null}
// ✅ Good - using environment variable
const config = {
  publishableKey: process.env.REACT_APP_FLATFILE_KEY,
};

// ❌ Avoid - hardcoded keys
const config = {
  publishableKey: "pk_1234567890abcdef",
};
```

## Common Configuration Options

These options are shared across all SDK implementations:

### Authentication

| Option           | Type   | Required | Description                                  |
| ---------------- | ------ | -------- | -------------------------------------------- |
| `publishableKey` | string | ✅        | Your publishable key from Platform Dashboard |

### User Identity

| Option                 | Type   | Required | Description                                                                      |
| ---------------------- | ------ | -------- | -------------------------------------------------------------------------------- |
| `userInfo`             | object | ❌        | User metadata for space creation                                                 |
| `userInfo.userId`      | string | ❌        | Unique user identifier                                                           |
| `userInfo.name`        | string | ❌        | User's display name - this is displayed in the dashboard as the associated user |
| `userInfo.companyId`   | string | ❌        | Company identifier                                                               |
| `userInfo.companyName` | string | ❌        | Company display name                                                             |
| `externalActorId`      | string | ❌        | Unique identifier for embedded users                                             |

### Space Setup

| Option          | Type     | Required | Description                               |
| --------------- | -------- | -------- | ----------------------------------------- |
| `name`          | string   | ✅        | Name of the space                         |
| `environmentId` | string   | ✅        | Environment identifier                    |
| `spaceId`       | string   | ❌        | ID of existing space to reuse             |
| `workbook`      | object   | ❌        | Workbook configuration for dynamic spaces |
| `listener`      | Listener | ❌        | Event listener for responding to events   |

### Look & Feel

| Option                            | Type    | Required | Description                                                          |
| --------------------------------- | ------- | -------- | -------------------------------------------------------------------- |
| `themeConfig`                     | object  | ❌        | Theme values for Space, sidebar and data table                       |
| `spaceBody`                       | object  | ❌        | Space options for creating new Space; used with Angular and Vue SDKs |
| `sidebarConfig`                   | object  | ❌        | Sidebar UI configuration                                             |
| `sidebarConfig.defaultPage`       | object  | ❌        | Landing page configuration                                           |
| `sidebarConfig.showDataChecklist` | boolean | ❌        | Toggle data config, defaults to false                                |
| `sidebarConfig.showSidebar`       | boolean | ❌        | Show/hide sidebar                                                    |
| `document`                        | object  | ❌        | Document content for space                                           |
| `document.title`                  | string  | ❌        | Document title                                                       |
| `document.body`                   | string  | ❌        | Document body content (markdown)                                     |

### CSS Customization

You can customize the embedded Flatfile iframe and its container elements using CSS variables and class selectors. This allows you to control colors, sizing, borders, and other visual aspects of the iframe wrapper to match your application's design.

#### CSS Variables

Define these CSS variables in your application's stylesheet to control the appearance of Flatfile's embedded components:

```css theme={null}
:root {
  --ff-primary-color: #4c48ef;
  --ff-secondary-color: #616a7d;
  --ff-text-color: #090b2b;
  --ff-dialog-border-radius: 4px;
  --ff-border-radius: 5px;
  --ff-bg-fade: rgba(0, 0, 0, 0.2);
}
```

#### Container Elements

Target these elements to customize the iframe container:

```css theme={null}
/* The default mount element */
#flatfile_iFrameContainer {
  /* Your custom styles */
}

/* A div around the iframe that contains Flatfile */
.flatfile_iframe-wrapper {
  /* Your custom styles */
}

/* The actual iframe that contains Flatfile */
#flatfile_iframe {
  /* Your custom styles */
}
```

#### Modal Display Customization

When `displayAsModal` is set to `true`, customize the modal appearance:

```css theme={null}
/* Container styles when displayed as modal */
.flatfile_displayAsModal {
  padding: 50px !important;
  width: calc(100% - 100px) !important;
  height: calc(100vh - 100px) !important;
}

.flatfile_iframe-wrapper.flatfile_displayAsModal {
  background: var(--ff-bg-fade);
}

/* Close button styles */
.flatfile_displayAsModal .flatfile-close-button {
  /* Your custom styles */
}

.flatfile_displayAsModal .flatfile-close-button svg {
  fill: var(--ff-secondary-color);
}

/* Iframe border radius when displayed as modal */
.flatfile_displayAsModal #flatfile_iframe {
  border-radius: var(--ff-border-radius);
}
```

#### Exit Confirmation Dialog

Customize the confirmation dialog that appears when closing Flatfile:

```css theme={null}
/* Modal backdrop */
.flatfile_outer-shell {
  background-color: var(--ff-bg-fade);
  border-radius: var(--ff-border-radius);
}

/* Inner container */
.flatfile_inner-shell {
  /* Your custom styles */
}

/* Dialog box */
.flatfile_modal {
  border-radius: var(--ff-dialog-border-radius);
}

/* Button container */
.flatfile_button-group {
  /* Your custom styles */
}

/* All buttons */
.flatfile_button {
  /* Your custom styles */
}

/* Primary "Yes, cancel" button */
.flatfile_primary {
  border: 1px solid var(--ff-primary-color);
  background-color: var(--ff-primary-color);
  color: #fff;
}

/* Secondary "No, stay" button */
.flatfile_secondary {
  color: var(--ff-secondary-color);
}

/* Dialog heading */
.flatfile_modal-heading {
  color: var(--ff-text-color);
}

/* Dialog description text */
.flatfile_modal-text {
  color: var(--ff-secondary-color);
}
```

#### Error Component

Customize the error display component:

```css theme={null}
/* Error container */
.ff_error_container {
  /* Your custom styles */
}

/* Error heading */
.ff_error_heading {
  /* Your custom styles */
}

/* Error description */
.ff_error_text {
  /* Your custom styles */
}
```

### Basic Behavior

| Option                 | Type     | Required | Description                                |
| ---------------------- | -------- | -------- | ------------------------------------------ |
| `closeSpace`           | object   | ❌        | Options for closing iframe                 |
| `closeSpace.operation` | string   | ❌        | Operation type                             |
| `closeSpace.onClose`   | function | ❌        | Callback when space closes                 |
| `displayAsModal`       | boolean  | ❌        | Display as modal or inline (default: true) |

## Advanced Configuration Options

These options provide specialized functionality for custom implementations:

### Space Reuse

| Option        | Type   | Required | Description                                   |
| ------------- | ------ | -------- | --------------------------------------------- |
| `id`          | string | ✅        | Space ID                                      |
| `accessToken` | string | ✅        | Access token for space (obtained server-side) |

**Important:** To reuse an existing space, you must retrieve the spaceId and access token server-side using your secret key, then pass the `accessToken` to the client. See [Server Setup Guide](./server-setup) for details.

### UI Overrides

| Option                    | Type         | Required | Description                                                      |
| ------------------------- | ------------ | -------- | ---------------------------------------------------------------- |
| `mountElement`            | string       | ❌        | Element to mount Flatfile (default: "flatfile\_iFrameContainer") |
| `loading`                 | ReactElement | ❌        | Custom loading component                                         |
| `exitTitle`               | string       | ❌        | Exit dialog title (default: "Close Window")                      |
| `exitText`                | string       | ❌        | Exit dialog text (default: "See below")                          |
| `exitPrimaryButtonText`   | string       | ❌        | Primary button text (default: "Yes, exit")                       |
| `exitSecondaryButtonText` | string       | ❌        | Secondary button text (default: "No, stay")                      |
| `errorTitle`              | string       | ❌        | Error dialog title (default: "Something went wrong")             |

### On-Premises Configuration

| Option     | Type   | Required | Description                                                                                       |
| ---------- | ------ | -------- | ------------------------------------------------------------------------------------------------- |
| `apiUrl`   | string | ❌        | API endpoint (default: "[https://platform.flatfile.com/api](https://platform.flatfile.com/api)") |
| `spaceUrl` | string | ❌        | Spaces API URL (default: "[https://platform.flatfile.com/s](https://platform.flatfile.com/s)")   |

URLs for other regions can be found [here](../reference/cli#regional-servers).

## Configuration Examples

### Basic Space Creation

```javascript theme={null}
const config = {
  publishableKey: "pk_1234567890abcdef",
  name: "Customer Data Import",
  environmentId: "us_env_abc123",
  workbook: {
    // your workbook configuration
  },
  userInfo: {
    userId: "user_123",
    name: "John Doe",
  },
};
```

### Space Reuse with Access Token

```javascript theme={null}
// Client-side: Use space with access token from server
const config = {
  space: {
    id: "us_sp_abc123def456",
    accessToken: "at_1234567890abcdef", // Retrieved server-side
  },
};
```

### Advanced UI Customization

```javascript theme={null}
const config = {
  publishableKey: "pk_1234567890abcdef",
  mountElement: "custom-flatfile-container",
  exitTitle: "Are you sure you want to leave?",
  exitText: "Your progress will be saved.",
  themeConfig: {
    // custom theme configuration
  },
};
```

## Troubleshooting

### Invalid publishableKey

**Error:** `"Invalid publishable key"`

**Solution:**

* Verify key starts with `pk_`
* Check for typos or extra spaces
* Ensure key is from correct environment

### Space Not Found

**Error:** `"Space not found"` or `403 Forbidden`

**Solution:**

* Verify Space ID format (`us_sp_` prefix)
* Ensure Space exists and is active
* Check Space permissions in dashboard

### CORS Issues

**Error:** `"CORS policy blocked"`

**Solution:**

* Add your domain to allowed origins in Platform Dashboard
* Ensure you're using publishable key (not secret key)
* Check browser network tab for specific CORS errors

### Access Token Issues

**Error:** `"Invalid access token"` when using space reuse

**Solution:**

* Ensure access token is retrieved server-side using secret key
* Check that token hasn't expired
* Verify space ID matches the token

## Testing Setup

For development and testing:

```javascript theme={null}
// Development configuration
const config = {
  publishableKey: "pk_test_1234567890abcdef", // publishable key from development environment
};
```

Create separate test Spaces for development to avoid affecting production data.
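When testing the full import flow, you'll often want the embed to close itself once the user submits. Here's a sketch combining the close-behavior options from the Basic Behavior table above; the `operation` value is an assumption and should match an Action operation defined in your own Listener:

```javascript theme={null}
const config = {
  publishableKey: "pk_test_1234567890abcdef",
  displayAsModal: true, // the default; set to false to render inline
  closeSpace: {
    // Assumed operation name; use an operation from your own workbook's actions
    operation: "submitActionForeground",
    onClose: () => {
      // Hide the importer and refresh your app's data here
      console.log("Flatfile closed");
    },
  },
};
```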
## Next Steps

Once configured:

* Deploy your event listener to Flatfile
* Configure data validation and transformation rules
* Test the embedding in your application
* Deploy to production with production keys

For server-side space reuse patterns, see our [Server Setup Guide](./server-setup).

---

# Source: https://flatfile.com/docs/guides/advanced-filters.md

> ## Documentation Index
> Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Advanced Filters

> Learn how to use Flatfile's Advanced Filters to efficiently filter and search through your data

Advanced Filters in Flatfile provide a powerful way to filter and search through your data with complex conditions. This feature allows you to create sophisticated filter combinations to quickly find the exact records you need.

## Overview

The Advanced Filters feature enables you to:

* Create multiple filter conditions with different fields
* Combine conditions using logical operators (AND/OR)
* Filter by various data types with appropriate operators
* Save and reuse filter combinations
* Apply filters to large datasets efficiently

## Using Advanced Filters

### Accessing Advanced Filters

You can access Advanced Filters in the Flatfile interface through the Filter button in the sheet toolbar:

1. Navigate to any sheet in your workbook
2. Click the "Filter" button in the toolbar
3. Select a field to filter by, or click "Advanced filter" to create a complex filter

### Creating Filter Conditions

Each filter condition consists of three parts:

1. **Field** - The column you want to filter on
2. **Operator** - The comparison type (equals, contains, greater than, etc.)
3. **Value** - The specific value to filter by

For example, you might create a filter like: `firstName is "John"` or `age > 30`.

### Combining Multiple Filters

Advanced Filters allow you to combine multiple conditions:

1. Create your first filter condition
2. Click the "Add condition" button
3. Select whether to join with "AND" or "OR" logic
4. Add your next condition

This allows for complex queries like: `firstName is "John" AND age > 30` or `status is "pending" OR status is "review"`.

### Available Operators

Different field types support different operators:

| Field Type | Available Operators                             |
| ---------- | ----------------------------------------------- |
| String     | is, is not, like, is empty, not empty           |
| Number     | is, is not, >, \<, >=, \<=, is empty, not empty |
| Boolean    | is true, is false, is empty, not empty          |
| Date       | is, is not, >, \<, >=, \<=, is empty, not empty |
| Enum       | is, is not, is empty, not empty                 |

### Horizontal Scrolling

When you add multiple filter conditions that extend beyond the available width of the screen, the filter area will automatically enable horizontal scrolling. This allows you to create complex filter combinations without being limited by screen space. Simply scroll horizontally to see all your filter conditions when they extend beyond the visible area.

## Advanced Filter Examples

Here are some examples of how you might use Advanced Filters:

### Example 1: Finding Specific Customer Records

```
firstName is "Sarah" AND status is "active" AND lastPurchase > "2023-01-01"
```

This filter would show all active customers named Sarah who made a purchase after January 1, 2023.
### Example 2: Identifying Records Needing Attention ``` (status is "pending" OR status is "review") AND createdDate < "2023-06-01" ``` This filter would show all records that are either pending or in review, and were created before June 1, 2023. ### Example 3: Finding Missing Data ``` email is not empty AND phone is empty ``` This filter would show all records that have an email address but are missing a phone number. ## Best Practices * **Start simple**: Begin with a single filter condition and add more as needed * **Use AND/OR strategically**: "AND" narrows results (both conditions must be true), while "OR" broadens results (either condition can be true) * **Consider performance**: Very complex filters on large datasets may take longer to process * **Save common filters**: If you frequently use the same filter combinations, consider saving them as views ## Troubleshooting If you encounter issues with Advanced Filters: * Ensure your filter values match the expected format for the field type * Check that you're using appropriate operators for each field type * For complex filters, try breaking them down into simpler components to identify issues * Verify that the data you're filtering actually exists in your dataset Advanced Filters provide a powerful way to work with your data in Flatfile, allowing you to quickly find and focus on the records that matter most to your workflow. --- # Source: https://flatfile.com/docs/embedding/angular.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Angular Embedding > Embed Flatfile in Angular applications Embed Flatfile in your Angular application using our Angular SDK. This provides Angular components and services for seamless integration. ## Installation ```bash theme={null} npm install @flatfile/angular-sdk ``` ## Basic Implementation ### 1. Import the Module Add the `SpaceModule` to your Angular module: ```typescript theme={null} import { NgModule } from "@angular/core"; import { SpaceModule } from "@flatfile/angular-sdk"; @NgModule({ imports: [ SpaceModule, // your other imports ], // ... }) export class AppModule {} ``` ### 2. Create Component Create a component to handle the Flatfile embed: ```typescript theme={null} import { Component } from "@angular/core"; import { SpaceService, ISpace } from "@flatfile/angular-sdk"; @Component({ selector: "app-import", template: `
<div> Welcome to our app <button (click)="openFlatfile()">Open Flatfile</button> </div>
`, }) export class ImportComponent { constructor(private spaceService: SpaceService) {} spaceProps: ISpace = { publishableKey: "pk_your_publishable_key", displayAsModal: true, }; openFlatfile() { this.spaceService.OpenEmbed(this.spaceProps); } } ``` ### 3. Get Your Credentials **publishableKey**: Get from [Platform Dashboard](https://platform.flatfile.com) → Developer Settings **Authentication & Security**: For production applications, implement proper authentication and space management on your server. See [Advanced Configuration](./advanced-configuration) for authentication guidance. ## Complete Example The example below will open an empty space. To create the sheet your users should land on, you'll want to create a workbook as shown further down this page. ```typescript theme={null} // app.module.ts import { NgModule } from "@angular/core"; import { BrowserModule } from "@angular/platform-browser"; import { SpaceModule } from "@flatfile/angular-sdk"; import { AppComponent } from "./app.component"; @NgModule({ declarations: [AppComponent], imports: [BrowserModule, SpaceModule], providers: [], bootstrap: [AppComponent], }) export class AppModule {} ``` ```typescript theme={null} // app.component.ts import { Component } from "@angular/core"; import { SpaceService, ISpace } from "@flatfile/angular-sdk"; @Component({ selector: "app-root", template: `
<div> My Application <button (click)="openFlatfile()">Open Flatfile</button> </div>
`, }) export class AppComponent { constructor(private spaceService: SpaceService) {} spaceProps: ISpace = { publishableKey: "pk_your_publishable_key", displayAsModal: true, }; openFlatfile() { this.spaceService.OpenEmbed(this.spaceProps); } } ``` ## Creating New Spaces To create a new Space each time: 1. Add a `workbook` configuration object. Read more about workbooks [here](../core-concepts/workbooks). 2. Optionally [deploy](../core-concepts/listeners) a `listener` for custom data processing. Your listener will contain your validations and transformations ```typescript theme={null} spaceProps: ISpace = { publishableKey: "pk_your_publishable_key", workbook: { name: "My Import", sheets: [ { name: "Contacts", slug: "contacts", fields: [ { key: "name", type: "string", label: "Name" }, { key: "email", type: "string", label: "Email" }, ], }, ], }, displayAsModal: true, }; ``` For detailed workbook configuration, see the [Workbook API Reference](https://reference.flatfile.com/api-reference/workbooks). ## Reusing Existing Spaces For production applications, implement proper space management on your server to ensure security and proper access control: ```typescript theme={null} // Frontend Component @Component({ selector: "app-import", template: `
<button (click)="openFlatfile()" [disabled]="loading"> {{ loading ? "Loading..." : "Import Data" }} </button> `, }) export class ImportComponent { loading = false; constructor(private spaceService: SpaceService, private http: HttpClient) {} async openFlatfile() { this.loading = true; try { // Get space credentials from your server const response = await this.http .get<{ publishableKey: string; spaceId: string; accessToken?: string; }>("/api/flatfile/space") .toPromise(); const spaceProps: ISpace = { space: { id: response.spaceId, accessToken: response.accessToken, }, displayAsModal: true, }; this.spaceService.OpenEmbed(spaceProps); } catch (error) { console.error("Failed to load Flatfile space:", error); } finally { this.loading = false; } } } ``` For server implementation details, see the [Server Setup](/embedding/server-setup) guide. ## Configuration Options For detailed configuration options, authentication settings, and advanced features, see the [Advanced Configuration](./advanced-configuration) guide. ## Using Space Component Directly You can also use the `flatfile-space` component directly in your template: ```typescript theme={null} @Component({ selector: "app-import", template: ` <button (click)="toggleSpace()">Open Flatfile</button> <!-- The input/output bindings below are illustrative; confirm against the SDK's SpaceComponent API --> <flatfile-space *ngIf="showSpace" [spaceProps]="spaceProps" (closeSpace)="onCloseSpace()"></flatfile-space> `, }) export class ImportComponent { showSpace = false; spaceProps: ISpace = { publishableKey: "pk_your_publishable_key", displayAsModal: true, }; toggleSpace() { this.showSpace = !this.showSpace; } onCloseSpace() { this.showSpace = false; } } ``` ## TypeScript Support The Angular SDK is built with TypeScript and includes full type definitions: ```typescript theme={null} import { ISpace, SpaceService } from "@flatfile/angular-sdk"; interface ImportData { name: string; email: string; } @Component({ // component definition }) export class ImportComponent { spaceProps: ISpace; constructor(private spaceService: SpaceService) { this.spaceProps = { publishableKey: "pk_your_publishable_key", spaceId: "us_sp_your_space_id", }; } } ``` ## Next Steps * **Advanced Configuration**: Set up [authentication, listeners, and advanced options](./advanced-configuration) * **Server Setup**: Implement [backend integration and space management](./server-setup) * **Data Processing**: Set up Listeners in your Space for custom data transformations * **API Integration**: Use [Flatfile API](https://reference.flatfile.com) to retrieve processed data * **Angular SDK Documentation**: See [@flatfile/angular-sdk documentation](https://www.npmjs.com/package/@flatfile/angular-sdk) --- # Source: https://flatfile.com/docs/core-concepts/apps.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Apps > The anatomy of an App ## Apps Apps are an organizational unit in Flatfile, designed to manage and coordinate data import workflows across different environments. They serve as containers for organizing related Spaces and provide a consistent configuration that can be deployed across your development pipeline. Apps can be given [namespaces](/guides/namespaces-and-filters#app-namespaces) to isolate different parts of your application and control which [listeners](/core-concepts/listeners) receive events from which spaces.
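For example, a listener can scope its handlers to a namespace so it only reacts to events from that app's Spaces; a minimal sketch (the `red` namespace is illustrative):

```typescript theme={null}
import { FlatfileListener } from "@flatfile/listener";

export default function (listener: FlatfileListener) {
  // Handlers registered inside this block only receive events
  // from Spaces in the "red" namespace
  listener.namespace(["space:red"], (red) => {
    red.on("job:ready", { job: "space:configure" }, async (event) => {
      // configure Spaces for this app's namespace only
    });
  });
}
```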
Apps are available across Development-level environments by default, and optionally available across Production environments via a configuration option. --- # Source: https://flatfile.com/docs/guides/authentication.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Authentication and Authorization > Complete guide to authenticating with Flatfile using API keys, Personal Access Tokens, and managing roles and permissions This guide covers all aspects of authentication with Flatfile, including API keys, Personal Access Tokens, and role-based access control for your team and customers. ## API Keys Flatfile provides two different kinds of environment-specific API keys you can use to interact with the API. In addition, you can work in either a development or a production environment. API keys are created automatically. Use the [API Keys and Secrets](https://platform.flatfile.com/dashboard/keys-and-secrets) page to see your API keys for any given Environment. ### Testing and Development [Environments](/core-concepts/environments) are isolated entities and are intended to be a safe place to create and test different configurations. A `development` and a `production` environment are created by default. | isProd | Name | Description | | ------- | ------------- | --------------------------------------------------------------------------------------------- | | *false* | `development` | Use this default environment, and its associated test API keys, as you build with Flatfile. | | *true* | `production` | When you're ready to launch, create a new environment and swap out your keys. | The development environment does not count towards your paid credits. ### Secret and Publishable Keys All Accounts have two key types for each environment. Learn when to use each type of key: | Type | Id | Description | | --------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------ | | Secret key | `sk_23ghsyuyshs7dcrty` | **On the server-side:** Store this securely in your server-side code. Don't expose this key in an application. | | Publishable key | `pk_23ghsyuyshs7dcert` | **On the client-side:** Can be publicly-accessible in your application's client-side code. Use when embedding Flatfile. | The `accessToken` provided from `publishableKey` will remain valid for a duration of 24 hours. ## Personal Access Tokens Personal Access Tokens (PATs) provide a secure way to authenticate with the Flatfile API. Unlike environment-specific API keys, PATs are user-scoped tokens that inherit the permissions of the user who created them. Personal Access Tokens: * Are user-scoped authentication tokens * Have the same auth scope as the user who created them * Can be used in place of a JWT for API authentication * Are ideal for scripts, automation, and integrations that need to act on behalf of a user This opens up possibilities for various use cases, including building audit logs, managing Spaces, and monitoring agents across environments. ### Creating a Token 1. Log in to your Flatfile account 2. Click on your user profile dropdown in the top-right corner 3. Select "Personal Access Tokens" 4. Click "Create Token" 5. Enter a descriptive name for your token 6. Copy the generated token immediately - it will only be shown once Make sure to copy your token when it's first created.
For security reasons, you won't be able to view the token again after leaving the page. ### Exchanging Credentials for an Access Token You can exchange your email and password credentials for an access token using the auth endpoint. See the [Authentication Examples](/guides/deeper/auth-examples#creating-a-pat-via-api) for the complete API call. The response will include an access token that you can use for API authentication. ### Retrieving a Personal Access Token (Legacy Method) Your `publishableKey` and `secretKey` are specific to an environment. Therefore, to interact at a higher level, you can use a personal access token. 1. From the dashboard, open **Settings** 2. Click **Personal Tokens** 3. Retrieve your `clientId` and `secret`. 4. Using the key pair, call the auth endpoint. See the [Authentication Examples](/guides/deeper/auth-examples#legacy-client-credentials-flow) for the complete API call. 5. The response will include an `accessToken`. Present that as your **Bearer `token`** in place of the `secretKey`. ### Using a Token Use your Personal Access Token in API requests by including it in the Authorization header as documented in the [API Reference](https://reference.flatfile.com). ### Managing Tokens You can view all your active tokens in the Personal Access Tokens page. For each token, you can see: * Name * Creation date * Last used date (if applicable) To delete a token: 1. Navigate to the Personal Access Tokens page 2. Find the token you want to delete 3. Click the menu icon (three dots) next to the token 4. Select "Delete" 5. Confirm the deletion Deleting a token immediately revokes access for any applications or scripts using it. Make sure you update any dependent systems before deleting a token. ### Best Practices * Create separate tokens for different applications or use cases * Use descriptive names that identify where the token will be used * Regularly review and delete unused tokens * Rotate tokens periodically for enhanced security * Never share your tokens with others - each user should create their own tokens ### Example Use Cases #### Building an Audit Log Query for all events across all environments and combine them with user and guest data to create a comprehensive audit log, providing a detailed history of actions within the application. #### Managing Spaces Across Environments Determine the number of Spaces available and identify which Spaces exist in different environments, allowing you to efficiently manage and organize your data. #### Monitoring Agents Across Environments Keep track of agents deployed to various environments by retrieving information about their presence, ensuring smooth and efficient data import processes. ## Roles & Permissions Grant your team and customers access with role-based permissions. ### Administrator Roles Administrator roles have full access to your accounts, including inviting additional admins and seeing developer keys. The `accessToken` provided will remain valid for a duration of 24 hours. | Role | Details | | ------------- | ------------------------------------------------------------------------------------------ |
| Administrator | This role is meant for any member of your team who requires full access to the Account.<br /><br />✓ Can add other administrators<br />✓ Can view secret keys<br />✓ Can view logs | ### Guest Roles Guest roles receive access via a magic link or a shared link depending on the [Environment](https://platform.flatfile.com/dashboard) `guestAuthentication` type. Guest roles can invite other Guests unless you turn off this setting in the [Guest Sidebar](/guides/customize-guest-sidebar). The `accessToken` provided will remain valid for a duration of 1 hour. Data Clips can provide granular guest access to a specific [sheet](/core-concepts/sheets) by sharing only selected records from that sheet within a [workbook](/core-concepts/workbooks). However, if the clipped sheet includes [reference fields](/core-concepts/fields#reference) that point to other sheets in the same workbook, guests will also receive read access to those referenced records to support lookups and validation. Learn more in the [Data Clips guide](/legacy-docs/advanced-guides/dataclips). #### Space Grant | Role | Details | | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Single-Space Guest | This role is meant for a guest who has access to only one Space. Such guests can be invited to additional Spaces at any time. | | Multi-Space Guest | This role is meant for a guest who has access to multiple Spaces. They will see a drop-down next to the Space name that enables them to switch between Spaces. | #### Workbook Grant | Role | Details | | --------------------- | ------------------------------------------------------------------------------------------ | | Single-Workbook Guest | This role is meant for a guest who should have access to only one Workbook within a Space. | | Multi-Workbook Guest | This role is intended for a guest who has access to multiple Workbooks within a Space. | This role can only be configured using code. See the code example below. ```js theme={null} const createGuest = await api.guests.create({ environmentId: "us_env_hVXkXs0b", email: "guest@example.com", name: "Mr. Guest", spaces: [ { id: "us_sp_DrdXetPN", workbooks: [ { id: "us_wb_qGZbKwDW", }, ], }, ], }); ``` #### Guest Lifecycle When a guest user is deleted, all their space connections are automatically removed to ensure security. This means: * The guest loses access to all previously connected spaces * They cannot regain access to these spaces without being explicitly re-invited This automatic cleanup ensures that deleted guests cannot retain any access to spaces, even if they are later recreated with the same email address. ## API Reference For detailed API documentation on authentication endpoints, see the [Authentication API Reference](https://reference.flatfile.com/api-reference/auth). For programmatic management of Personal Access Tokens, see the [Personal Access Tokens API Reference](https://reference.flatfile.com/api-reference/auth/personal-access-tokens). --- # Source: https://flatfile.com/docs/getting-started/quickstart/autobuild.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Getting Started with AutoBuild > Get up and running with Flatfile in minutes using AutoBuild to create a complete data import solution ## What is AutoBuild? The easiest way to get started with Flatfile is using AutoBuild. With AutoBuild, you can transform existing import templates or documentation into a fully functional Flatfile app in minutes.
Simply drop your example files into AutoBuild, and it will automatically create and deploy a [Blueprint](/core-concepts/blueprints) (for schema definition) and a [Listener](/core-concepts/listeners) (for validations and transformations) to your Flatfile [App](/core-concepts/apps). Once you've started with AutoBuild, you can always download your Listener code and continue building with code from there! ## Setting Up Your Account To get started, you'll need to [sign up for a Flatfile account](https://platform.flatfile.com/oauth/login). During account setup, enter your company name and select "Start with an existing template or project file." If you already have an active Flatfile account, you can still use AutoBuild to create a new app. From the Flatfile dashboard, click the "New App" button. Then select "Build with AutoBuild." If the AutoBuild option isn't available on your account, please reach out to support via [Slack](https://flatfile.com/join-slack/) or [Email](mailto:support@flatfile.com) to gain access! ## Uploading Files and Context Next, you'll upload files and provide additional context to the AutoBuild agent. You can upload any of the following to help the AI understand your requirements: * Import templates * System documentation * Complete data files * Any other files that provide useful context You may also provide an additional prompt to guide the AutoBuild agent. Use this to give context about your uploaded files, explain specific data challenges, or outline additional requirements. When you're ready, click "Get Started." The Flatfile AutoBuild agent will now build your space template. ## Working in Build Mode After a few moments, you'll be taken to your new Flatfile app in Build Mode, which you can access anytime to make changes. On the right side, you'll see the blueprint of your space. Here you can inspect and edit the sheets and fields that the AutoBuild agent has generated. You can easily add or remove fields, update constraints and validations, or make other basic edits to your blueprint. For more advanced changes, you can chat with the Flatfile Assistant. The Assistant can help you with anything from small tweaks to complex validations, data egress actions, or large reorganizations of your sheets. At any point, you can check the Data Preview tab to see what your Flatfile project will look like for your users. You can add or edit data to test your validations and transformations. ## Deploying Your App When you're finished building your space, click "Configure & Deploy." You'll be prompted to give your app a name, and then it's ready to be deployed! From here, you'll be taken to your new app in the dashboard. Your AutoBuild agent is deployed and you're ready to create your first project and start importing data!
Its primary purpose is to clean and standardize data by ensuring that values intended to be numbers, booleans, or dates are correctly typed, even if they are imported as strings. For example, it can convert the string "1,000" to the number `1000`, the string "yes" to the boolean `true`, and the string "08/16/2023" to a standardized UTC date string. This plugin is useful in any scenario where source data may have inconsistent or incorrect data types, saving developers from writing manual data-casting logic. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-autocast ``` ## Configuration & Parameters ### Main Plugin Function The `autocast` function accepts the following parameters: The slug of the sheet that the plugin should monitor and apply autocasting to. An optional array of field keys. If provided, the plugin will only attempt to cast values in the specified fields. **Default Behavior:** If not provided, the plugin will automatically attempt to cast all fields in the sheet that are not of type 'string' in the Blueprint (i.e., it will target number, boolean, and date fields by default). Configuration options for performance and debugging: Specifies the number of records to process in each batch. This is passed down to the underlying bulk record hook. Specifies how many chunks to process in parallel. This is passed down to the underlying bulk record hook. An optional flag to enable debug logging. ## Usage Examples ### Basic Usage Apply autocasting to all supported fields on a sheet: ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { autocast } from '@flatfile/plugin-autocast'; const listener = new FlatfileListener(); listener.use(autocast('contacts')); export default listener; ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { autocast } from '@flatfile/plugin-autocast'; const listener = new FlatfileListener(); listener.use(autocast('contacts')); export default listener; ``` ### Targeted Field Casting Apply autocasting to only specific fields: ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { autocast } from '@flatfile/plugin-autocast'; const listener = new FlatfileListener(); listener.use(autocast('contacts', ['annualRevenue', 'subscribed'])); export default listener; ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { autocast } from '@flatfile/plugin-autocast'; const listener = new FlatfileListener(); listener.use(autocast('contacts', ['annualRevenue', 'subscribed'])); export default listener; ``` ### Advanced Configuration Configure field filters and adjust performance settings: ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { autocast } from '@flatfile/plugin-autocast'; const listener = new FlatfileListener(); listener.use( autocast('contacts', ['annualRevenue', 'subscribed'], { chunkSize: 5000, parallel: 2, debug: true, }) ); export default listener; ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { autocast } from '@flatfile/plugin-autocast'; const listener = new FlatfileListener(); listener.use( autocast('contacts', ['annualRevenue', 'subscribed'], { chunkSize: 5000, parallel: 2, debug: true, }) ); export default listener; ``` ### Using Utility Functions The plugin also exports individual casting utility functions: 
```javascript JavaScript theme={null} import { castNumber, castBoolean, castDate } from '@flatfile/plugin-autocast'; // Cast numbers (handles commas) const num1 = castNumber('1,234.56'); // Returns 1234.56 const num2 = castNumber(99); // Returns 99 // Cast booleans const bool1 = castBoolean('yes'); // Returns true const bool2 = castBoolean(0); // Returns false const bool3 = castBoolean('f'); // Returns false // Cast dates const date1 = castDate('08/16/2023'); // Returns 'Wed, 16 Aug 2023 00:00:00 GMT' const date2 = castDate(1692144000000); // Returns 'Wed, 16 Aug 2023 00:00:00 GMT' ``` ```typescript TypeScript theme={null} import { castNumber, castBoolean, castDate, TRecordValue } from '@flatfile/plugin-autocast'; // Cast numbers (handles commas) const num1: number = castNumber('1,234.56'); // Returns 1234.56 const num2: number = castNumber(99); // Returns 99 // Cast booleans const bool1: boolean = castBoolean('yes'); // Returns true const bool2: boolean = castBoolean(0); // Returns false const bool3: boolean = castBoolean('f'); // Returns false // Cast dates const date1: string = castDate('08/16/2023'); // Returns 'Wed, 16 Aug 2023 00:00:00 GMT' const date2: string = castDate(1692144000000); // Returns 'Wed, 16 Aug 2023 00:00:00 GMT' ``` ## Troubleshooting ### Error Handling with Utility Functions The individual casting utility functions throw errors when values cannot be converted: ```javascript JavaScript theme={null} import { castNumber, castBoolean, castDate } from '@flatfile/plugin-autocast'; try { const invalidNum = castNumber('not a number'); } catch (e) { console.error(e.message); // Prints: Invalid number } try { const invalidBool = castBoolean('maybe'); } catch (e) { console.error(e.message); // Prints: Invalid boolean } try { const invalidDate = castDate('not a date'); } catch (e) { console.error(e.message); // Prints: Invalid date } ``` ```typescript TypeScript theme={null} import { castNumber, castBoolean, castDate } from '@flatfile/plugin-autocast'; try { const invalidNum = castNumber('not a number'); } catch (e: any) { console.error(e.message); // Prints: Invalid number } try { const invalidBool = castBoolean('maybe'); } catch (e: any) { console.error(e.message); // Prints: Invalid boolean } try { const invalidDate = castDate('not a date'); } catch (e: any) { console.error(e.message); // Prints: Invalid date } ``` ## Notes ### Event Trigger The plugin is designed to run on the `listener.on('commit:created')` event. ### Plugin Order This plugin runs on the same event as `recordHook` and `bulkRecordHook`. The order in which you `.use()` the plugins in your listener matters, as they will execute sequentially. ### Error Handling Pattern The main `autocast` plugin does not throw errors. Instead, if a value cannot be cast, it attaches an error message directly to the record's cell using `record.addError()`. This makes the errors visible to the user in the Flatfile UI. The individual `cast*` utility functions, however, do throw an `Error` on failure. ### Supported Types The plugin automatically targets fields of type `number`, `boolean`, and `date` as defined in the Sheet's Blueprint. It does not attempt to cast `string` fields by default. ### Boolean Casting * **Truthy values:** `'1'`, `'yes'`, `'true'`, `'on'`, `'t'`, `'y'`, and `1` * **Falsy values:** `'-1'`, `'0'`, `'no'`, `'false'`, `'off'`, `'f'`, `'n'`, `0`, and `-1` ### Date Casting All parsed dates are converted to a standardized UTC string format. 
ISO 8601 formats like `YYYY-MM-DD` are treated as UTC, while other formats like `MM/DD/YYYY` are assumed to be local time and are converted to UTC. --- # Source: https://flatfile.com/docs/plugins/automap.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Automap Plugin > Automatically map columns for headless data import workflows in Flatfile with configurable confidence levels The Automap plugin is designed for headless data import workflows in Flatfile. Its primary purpose is to automate the column mapping process. The plugin listens for successfully extracted files, and when a matching file is found, it automatically creates and executes a mapping job to a specified destination Sheet. This is ideal for scenarios where files with consistent schemas are uploaded programmatically, bypassing the need for a user to manually map columns in the UI. The plugin determines whether to proceed with the mapping based on a configurable confidence level, ensuring that only high-quality matches are automated. If the mapping confidence is too low, it can trigger a failure callback for custom notifications or alternative handling. ## Installation ```bash npm theme={null} npm install @flatfile/plugin-automap ``` ```bash yarn theme={null} yarn add @flatfile/plugin-automap ``` ## Configuration & Parameters The `automap` function accepts an `AutomapOptions` configuration object with the following parameters: ### Required Parameters Controls the minimum confidence level required for the plugin to automatically execute the mapping job. * `'confident'`: All mapped fields must have a confidence level of 'strong' (> 90%) or 'absolute' (100%) * `'exact'`: All mapped fields must have a confidence level of 'absolute' (100%) ### Optional Parameters Toggles verbose logging for development and troubleshooting. When true, the plugin will output detailed information about its progress, decisions, and any errors it encounters to the console. Specifies the destination sheet for the imported data. * If a string is provided, it must be the exact slug of the target sheet * If a function is provided, it receives the uploaded file's name and the event payload, and must return the target sheet slug (or a Promise that resolves to it) * **Default behavior**: If not provided, the plugin will not be able to map a single-sheet file automatically unless more advanced logic is implemented by the user A regular expression used to filter which files the plugin should process. * **Default behavior**: If not provided, the plugin will attempt to automap every file that is uploaded * The plugin will only act on files whose names pass a `test()` against this regex A callback function that is executed if the automapping process is aborted due to low mapping confidence. * **Default behavior**: Nothing happens on failure, though a warning may be logged if `debug` is true * This can be used to trigger notifications (e.g., email, SMS, webhook) to alert a user that manual intervention is required Specifies the destination Workbook by its ID or name. * **Default behavior**: If not provided, the plugin searches for a suitable workbook in the space. It filters out workbooks associated with raw files (those with a 'file' label). If only one workbook remains, it is chosen. If multiple remain, it will select the one with the 'primary' label. 
Prevents the plugin from updating the name of the processed file in the Flatfile UI. * By default, the plugin prepends "⚡️" to the file name on processing and appends the destination sheet name on success to provide visual feedback * Setting this to `true` disables this behavior ## Usage Examples ### Basic Usage This example shows the simplest way to use the automap plugin, targeting a specific sheet for all uploaded CSV files. ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'confident', defaultTargetSheet: 'Contacts', matchFilename: /\.csv$/, }) ); }); ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'confident', defaultTargetSheet: 'Contacts', matchFilename: /\.csv$/, }) ); }); ``` ### Configuration with Failure Handling This example demonstrates a more complete configuration, including a failure callback and targeting a specific workbook. ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'confident', defaultTargetSheet: 'Contacts', targetWorkbook: 'MyPrimaryWorkbook', matchFilename: /^(contacts|people|users)\.csv$/i, debug: true, onFailure: (event) => { console.error( `Automap failed for file in space ${event.context.spaceId}. Please map manually.` ); // Add custom logic here, like sending an email or Slack message. }, }) ); }); ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; import type { FlatfileEvent } from '@flatfile/listener'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'confident', defaultTargetSheet: 'Contacts', targetWorkbook: 'MyPrimaryWorkbook', matchFilename: /^(contacts|people|users)\.csv$/i, debug: true, onFailure: (event: FlatfileEvent) => { console.error( `Automap failed for file in space ${event.context.spaceId}. Please map manually.` ); // Add custom logic here, like sending an email or Slack message. }, }) ); }); ``` ### Dynamic Sheet Targeting This example uses a function for `defaultTargetSheet` to dynamically route data to different sheets based on the filename. 
```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'exact', defaultTargetSheet: (fileName) => { if (fileName.includes('invoice')) { return 'Invoices'; } else if (fileName.includes('contact')) { return 'Contacts'; } // Return a default or handle cases where no match is found return 'DefaultSheet'; }, onFailure: (event) => { console.log('Automap failed, manual mapping required.'); }, }) ); }); ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; import type { FlatfileEvent } from '@flatfile/listener'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'exact', defaultTargetSheet: (fileName?: string): string => { if (fileName?.includes('invoice')) { return 'Invoices'; } else if (fileName?.includes('contact')) { return 'Contacts'; } // Return a default or handle cases where no match is found return 'DefaultSheet'; }, onFailure: (event: FlatfileEvent) => { console.log('Automap failed, manual mapping required.'); }, }) ); }); ``` ## Troubleshooting The most effective way to troubleshoot the plugin is to set the `debug: true` option in the configuration. This will provide a step-by-step log of the plugin's execution, including: * Which files are matched * What workbooks and sheets are targeted * The contents of the mapping plan * The reason for any failures ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'exact', defaultTargetSheet: 'Contacts', debug: true, // Enable verbose logging onFailure: (event) => { const { spaceId, fileId } = event.context; console.error( `Could not automap file ${fileId} with 'exact' accuracy. ` + `Please visit space ${spaceId} to map it manually.` ); }, }) ); }); ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { automap } from '@flatfile/plugin-automap'; import type { FlatfileEvent } from '@flatfile/listener'; const listener = FlatfileListener.create((listener) => { listener.use( automap({ accuracy: 'exact', defaultTargetSheet: 'Contacts', debug: true, // Enable verbose logging onFailure: (event: FlatfileEvent) => { const { spaceId, fileId } = event.context; console.error( `Could not automap file ${fileId} with 'exact' accuracy. 
` + `Please visit space ${spaceId} to map it manually.` ); }, }) ); }); ``` ## Notes ### Default Behavior * **File processing**: If no `matchFilename` is provided, the plugin will attempt to automap every uploaded file * **Target sheet**: It is highly recommended to set `defaultTargetSheet` for basic workflows, as the plugin cannot map single-sheet files automatically without it * **Workbook selection**: When `targetWorkbook` is not specified, the plugin filters out file-associated workbooks and selects the remaining one, or the one with the 'primary' label if multiple exist * **File naming**: By default, the plugin updates file names with status indicators ("⚡️" during processing, destination sheet name on success) ### Special Considerations * This plugin is intended for use in a server-side listener, not in the browser * The plugin relies on two key events: `job:completed:file:extract` to start the process, and `job:updated:workbook:map` to check the mapping plan * The logic for selecting a `targetWorkbook` works best when there's a clear primary workbook in the space ### Limitations * The `accuracy` check is all-or-nothing. If even one column mapping does not meet the required confidence level, the entire automatic mapping job is aborted * The plugin's default behavior works best with single-sheet source files. For multi-sheet source files, you must provide more complex logic * For internal errors (e.g., API call failures, inability to find a file or workbook), the plugin uses `try/catch` blocks and logs errors to the console, which are more verbose when `debug` is set to `true` ### Error Handling Patterns The primary pattern for user-defined error handling is the `onFailure` callback, which is triggered when mapping confidence is too low. This allows you to implement custom notification systems or alternative workflows when automatic mapping cannot proceed. --- # Source: https://flatfile.com/docs/core-concepts/blueprints.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Blueprints > Define your data schema to structure exactly how data should look, behave, and connect ## What is a Blueprint? Blueprints enable you to create repeatable, reliable data import experiences that scale with your needs while maintaining data quality and user experience standards. A Blueprint is your complete data definition in Flatfile. It controls how your data should look, behave, and connect—from simple field validations (like `unique` and `required`) to complex [relationships](/core-concepts/fields#reference) between [sheets](/core-concepts/sheets). You can even create [filtered reference fields](/core-concepts/fields#reference-field-filtering) that dynamically control available dropdown options based on other field values. Think of it as an intelligent template that ensures you collect the right data in the right format, every time. **Terminology Note**: "Blueprint" is Flatfile's term for what might be called a "schema" in other systems. Throughout Flatfile's documentation and API, we use "Blueprint" as the standard term for data structure definitions to distinguish Flatfile's comprehensive data modeling approach from generic schema concepts. ## How Blueprints Work Every [Space](/core-concepts/spaces) has exactly one Blueprint that defines its data structure. 
Whenever a new space is created, the Flatfile Platform automatically triggers a `space:configure` [Job](/core-concepts/jobs), and you can configure a [Listener](/core-concepts/listeners) to pick up that job and configure the new space by defining its Blueprint. Creating workbooks, sheets, and actions **is** your Blueprint definition, establishing the data schema that will govern all data within that Space. To make that part easier, we have provided the [Space Configure Plugin](/plugins/space-configure) to abstract away the Job/Listener code, allowing you to focus on what matters: preparing your space for data. ## Basic Blueprint Structure * A [Blueprint](/core-concepts/blueprints) defines the data structure for any number of [Spaces](/core-concepts/spaces) * A [Space](/core-concepts/spaces) may contain many [Workbooks](/core-concepts/workbooks) and many [Documents](/core-concepts/documents) * A [Document](/core-concepts/documents) contains static documentation and may contain many [Document-level Actions](/guides/using-actions#document-actions) * A [Workbook](/core-concepts/workbooks) may contain many [Sheets](/core-concepts/sheets) and many [Workbook-level Actions](/guides/using-actions#workbook-actions) * A [Sheet](/core-concepts/sheets) may contain many [Fields](/core-concepts/fields) and many [Sheet-level Actions](/guides/using-actions#sheet-actions) * A [Field](/core-concepts/fields) defines a single column of data, and may contain many [Field-level Actions](/guides/using-actions#field-actions) **A note about Actions:** Actions also require a listener to respond to the event published by clicking on them. For more, see [Using Actions](/guides/using-actions). ## Example Blueprint Configuration **Recommendation:** Although throughout the documentation we'll be explicitly defining each level of a blueprint, it's important to note that you can split each of your **Workbooks**, **Sheets**, **Documents**, and **Actions** definitions into separate files and import them. Then your Workbook blueprint can be as simple as: ```javascript theme={null} const companyWorkbook = { name: "Company Workbook", documents: [dataProcessingSteps], sheets: [usersSheet], actions: [exportToCRM], }; ``` This leads to a more maintainable codebase, and the modularity opens the door for code reuse. For instance, you'll be able to use `usersSheet.slug` in your listener code to filter or differentiate between sheets, or re-use `exportToCRM` in any other workbook that needs to export data to a CRM. This example shows a Blueprint definition for [Space configuration](/core-concepts/spaces#space-configuration). It creates a single [Workbook](/core-concepts/workbooks) with a single [Document](/core-concepts/documents) and a single [Sheet](/core-concepts/sheets) containing two [Fields](/core-concepts/fields) and one [Action](/core-concepts/actions). ```javascript theme={null} const workbooks = [{ name: "Company Workbook", documents: [ { title: "Data Processing Walkthrough", body: "1. Add Data\n2. Process Data\n3. 
Export Data", actions: [ { operation: "confirm", label: "Confirm", type: "string", primary: true, }, ], }, ], sheets: [ { name: "Users", slug: "users", fields: [ { key: "fname", type: "string", label: "First Name", }, { key: "lname", type: "string", label: "Last Name", }, ], actions: [ { operation: "validate-inventory", mode: "background", label: "Validate Inventory", description: "Check product availability against inventory system", }, ], }, ], actions: [ { operation: "export-to-crm", mode: "foreground", label: "Export to CRM", description: "Send validated customers to Salesforce", }, ], }]; ``` ## Workbook Folders and Sheet Collections Although they have no impact on your data itself or its structure, [Workbook Folders](/core-concepts/workbooks#folders) and [Sheet Collections](/core-concepts/sheets#collections) are a powerful way to organize your data in the Flatfile UI. They are essentially named labels that you assign to your Workbooks and Sheets, which the Flatfile UI interprets to group them together (and apart from others). You can define them directly in your [Blueprint](/core-concepts/blueprints) when [configuring your Space](/core-concepts/spaces#space-configuration) or when otherwise creating or updating a Workbook or Sheet via the [API](https://reference.flatfile.com). You can think of **Folders** and **Collections** like a filing system: * [Folders](/core-concepts/workbooks#folders) help you organize your Workbooks within a Space (like organizing binders on a shelf). * [Collections](/core-concepts/sheets#collections) help you organize Sheets within each Workbook (like organizing tabs within a binder). This is a great way to declutter your Sidebar and keep your data organized and easy to find in the Flatfile UI. In the following example, we have several Workbooks grouped into two Folders: * **Analytics** (folded) * **Business Operations** (unfolded) The **Business Operations** Workbooks each contain several Sheets grouped into Collections: * **Compensation** and **Personel** * **Stock Management** and **Vendor Management** ```javascript theme={null} const salesReportWorkbook = { name: "Sales Analytics", folder: "Analytics", sheets: [ // Source Data collection (2 sheets) salesDataSheet, revenueSheet, // Analytics collection (2 sheets) campaignMetricsSheet, leadSourcesSheet ] }; const humanResourcesWorkbook = { name: "Human Resources Management", folder: "Business Operations", sheets: [ // Personnel collection (2 sheets) employeesSheet, departmentsSheet, // Compensation collection (2 sheets) payrollSheet, benefitsSheet ] }; const operationsWorkbook = { name: "Operations Management", folder: "Business Operations", sheets: [ // Stock Management collection (2 sheets) inventorySheet, warehousesSheet, // Vendor Management collection (2 sheets) suppliersSheet, purchaseOrdersSheet ] }; ``` ```javascript theme={null} const salesDataSheet = { name: "Sales Data", collection: "Source Data", fields: [ { key: "name", type: "string", label: "Customer Name" }, { key: "email", type: "string", label: "Email Address" } ] }; const revenueSheet = { name: "Revenue", collection: "Analytics", fields: [ { key: "revenue", type: "number", label: "Revenue" } ] }; const campaignMetricsSheet = { name: "Campaign Metrics", collection: "Analytics", fields: [ { key: "impressions", type: "number", label: "Impressions" }, { key: "clicks", type: "number", label: "Clicks" } ] }; const leadSourcesSheet = { name: "Lead Sources", collection: "Analytics", fields: [ { key: "source", type: "string", label: "Source" }, { 
key: "conversion_rate", type: "number", label: "Conversion Rate" } ] }; const employeesSheet = { name: "Employees", collection: "Personnel", fields: [ { key: "employee_id", type: "string", label: "Employee ID" }, { key: "name", type: "string", label: "Full Name" }, { key: "department", type: "string", label: "Department" }, { key: "hire_date", type: "date", label: "Hire Date" } ] }; const departmentsSheet = { name: "Departments", collection: "Personnel", fields: [ { key: "dept_code", type: "string", label: "Department Code" }, { key: "dept_name", type: "string", label: "Department Name" }, { key: "manager", type: "string", label: "Manager" } ] }; const positionsSheet = { name: "Job Positions", collection: "Personnel", fields: [ { key: "position_id", type: "string", label: "Position ID" }, { key: "title", type: "string", label: "Job Title" }, { key: "level", type: "string", label: "Job Level" }, { key: "department", type: "string", label: "Department" } ] }; const payrollSheet = { name: "Payroll", collection: "Compensation", fields: [ { key: "employee_id", type: "string", label: "Employee ID" }, { key: "salary", type: "number", label: "Annual Salary" }, { key: "bonus", type: "number", label: "Bonus" } ] }; const benefitsSheet = { name: "Benefits", collection: "Compensation", fields: [ { key: "benefit_type", type: "string", label: "Benefit Type" }, { key: "cost", type: "number", label: "Monthly Cost" }, { key: "coverage", type: "string", label: "Coverage Level" } ] }; const bonusesSheet = { name: "Performance Bonuses", collection: "Compensation", fields: [ { key: "employee_id", type: "string", label: "Employee ID" }, { key: "performance_rating", type: "string", label: "Performance Rating" }, { key: "bonus_amount", type: "number", label: "Bonus Amount" }, { key: "quarter", type: "string", label: "Quarter" } ] }; const attendanceSheet = { name: "Attendance", collection: "Time Tracking", fields: [ { key: "employee_id", type: "string", label: "Employee ID" }, { key: "date", type: "date", label: "Date" }, { key: "hours_worked", type: "number", label: "Hours Worked" }, { key: "overtime", type: "number", label: "Overtime Hours" } ] }; const leaveRequestsSheet = { name: "Leave Requests", collection: "Time Tracking", fields: [ { key: "request_id", type: "string", label: "Request ID" }, { key: "employee_id", type: "string", label: "Employee ID" }, { key: "leave_type", type: "string", label: "Leave Type" }, { key: "start_date", type: "date", label: "Start Date" }, { key: "end_date", type: "date", label: "End Date" } ] }; const inventorySheet = { name: "Inventory", collection: "Stock Management", fields: [ { key: "sku", type: "string", label: "SKU" }, { key: "product_name", type: "string", label: "Product Name" }, { key: "quantity", type: "number", label: "Quantity in Stock" }, { key: "reorder_level", type: "number", label: "Reorder Level" } ] }; const warehousesSheet = { name: "Warehouses", collection: "Stock Management", fields: [ { key: "warehouse_id", type: "string", label: "Warehouse ID" }, { key: "location", type: "string", label: "Location" }, { key: "capacity", type: "number", label: "Storage Capacity" }, { key: "manager", type: "string", label: "Warehouse Manager" } ] }; const stockMovementsSheet = { name: "Stock Movements", collection: "Stock Management", fields: [ { key: "movement_id", type: "string", label: "Movement ID" }, { key: "sku", type: "string", label: "SKU" }, { key: "quantity", type: "number", label: "Quantity" }, { key: "movement_type", type: "string", label: "Movement Type" }, { 
key: "date", type: "date", label: "Date" } ] }; const suppliersSheet = { name: "Suppliers", collection: "Vendor Management", fields: [ { key: "supplier_id", type: "string", label: "Supplier ID" }, { key: "company_name", type: "string", label: "Company Name" }, { key: "contact_person", type: "string", label: "Contact Person" }, { key: "email", type: "string", label: "Email" } ] }; const purchaseOrdersSheet = { name: "Purchase Orders", collection: "Vendor Management", fields: [ { key: "order_id", type: "string", label: "Order ID" }, { key: "supplier_id", type: "string", label: "Supplier ID" }, { key: "order_date", type: "date", label: "Order Date" }, { key: "total_amount", type: "number", label: "Total Amount" } ] }; const vendorPerformanceSheet = { name: "Vendor Performance", collection: "Vendor Management", fields: [ { key: "supplier_id", type: "string", label: "Supplier ID" }, { key: "on_time_delivery", type: "number", label: "On-Time Delivery %" }, { key: "quality_rating", type: "number", label: "Quality Rating" }, { key: "cost_competitiveness", type: "number", label: "Cost Rating" } ] }; ``` --- # Source: https://flatfile.com/docs/plugins/boolean.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Boolean Validator > Comprehensive boolean validation plugin for Flatfile that handles various representations of boolean values with multi-language support and flexible configuration options. The Boolean Validator plugin for Flatfile provides comprehensive boolean validation for specified fields. It is designed to handle various representations of boolean values, not just `true` and `false`. Key features include two main validation modes: 'strict' (only accepts true/false boolean types) and 'truthy' (accepts values like 'yes', 'no', 'y', 'n', etc.). The plugin offers multi-language support for these truthy values (English, Spanish, French, German) and allows for custom mappings. It is highly configurable, with options to control case sensitivity, how null/undefined values are handled, and whether to automatically convert non-boolean values. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-validate-boolean ``` ## Configuration & Parameters The plugin is configured with a single object, `BooleanValidatorConfig`, containing the following options: ### Required Parameters **`fields`** `string[]` * An array of field keys (column names) to which the boolean validation should be applied. **`validationType`** `'strict' | 'truthy'` * The type of validation to perform: * `'strict'`: Only allows `true` and `false` boolean values * `'truthy'`: Allows string representations like 'yes', 'no', etc. 
### Optional Parameters **`sheetSlug`** `string` * The slug of a specific sheet to apply the validation to * Default: `'**'` (all sheets) **`language`** `'en' | 'es' | 'fr' | 'de'` * Specifies the language for predefined 'truthy' mappings * Default: `'en'` **`customMapping`** `Record<string, boolean>` * Custom string-to-boolean mappings that override language-specific mappings * Example: `{ 'ja': true, 'nein': false }` **`caseSensitive`** `boolean` * Controls case sensitivity for string comparisons during 'truthy' validation * Default: `false` **`handleNull`** `'error' | 'false' | 'true' | 'skip'` * Defines how to handle `null` or `undefined` values: * `'error'`: Adds an error to the record * `'false'`: Converts the value to `false` * `'true'`: Converts the value to `true` * `'skip'`: Ignores the value without adding an error * Default: `'skip'` **`convertNonBoolean`** `boolean` * Attempts to convert non-boolean values using JavaScript's `Boolean()` casting * Default: `false` **`defaultValue`** `boolean | 'skip'` * Default value for invalid inputs instead of adding an error * Default: `undefined` (raises an error) **`customErrorMessages`** `object` * Custom error messages for validation failures * Properties: `invalidBoolean`, `invalidTruthy`, `nullValue` ## Usage Examples ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateBoolean } from '@flatfile/plugin-validate-boolean'; export default function(listener) { // Basic strict validation listener.use( validateBoolean({ fields: ['isActive'], validationType: 'strict', }) ); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateBoolean } from '@flatfile/plugin-validate-boolean'; export default function(listener: FlatfileListener) { // Basic strict validation listener.use( validateBoolean({ fields: ['isActive'], validationType: 'strict', }) ); } ``` ### Advanced Configuration ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateBoolean } from '@flatfile/plugin-validate-boolean'; export default function(listener) { listener.use( validateBoolean({ sheetSlug: 'contacts', fields: ['hasSubscription', 'isPremium'], validationType: 'truthy', language: 'es', // Use Spanish mappings: 'sí', 'no' handleNull: 'false', // Treat null/undefined as false defaultValue: false, // Set invalid values to false instead of erroring customErrorMessages: { nullValue: 'El campo no puede estar vacío.', }, }) ); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateBoolean } from '@flatfile/plugin-validate-boolean'; export default function(listener: FlatfileListener) { listener.use( validateBoolean({ sheetSlug: 'contacts', fields: ['hasSubscription', 'isPremium'], validationType: 'truthy', language: 'es', // Use Spanish mappings: 'sí', 'no' handleNull: 'false', // Treat null/undefined as false defaultValue: false, // Set invalid values to false instead of erroring customErrorMessages: { nullValue: 'El campo no puede estar vacío.', }, }) ); } ``` ### Using Helper Functions ```javascript JavaScript theme={null} import { validateBooleanField } from '@flatfile/plugin-validate-boolean'; const myValue = 'Y'; const result = validateBooleanField(myValue, { fields: ['customField'], validationType: 'truthy', language: 'en', // 'y' is a valid mapping in English }); if (result.error) { console.error(`Validation failed: ${result.error}`); } else {
console.log(`Validated value: ${result.value}`); // Outputs: Validated value: true } ``` ```typescript TypeScript theme={null} import { validateBooleanField } from '@flatfile/plugin-validate-boolean'; const myValue = 'Y'; const result = validateBooleanField(myValue, { fields: ['customField'], validationType: 'truthy', language: 'en', // 'y' is a valid mapping in English }); if (result.error) { console.error(`Validation failed: ${result.error}`); } else { console.log(`Validated value: ${result.value}`); // Outputs: Validated value: true } ``` ## API Reference ### validateBoolean The main entry point for the plugin that configures and returns a Flatfile listener. **Signature:** ```typescript theme={null} validateBoolean(config: BooleanValidatorConfig): (listener: FlatfileListener) => void ``` **Parameters:** * `config` - Configuration object for the validator **Returns:** A function that can be passed to `listener.use()` to register the plugin. ### validateBooleanField A utility function that runs the complete validation logic for a single value. **Signature:** ```typescript theme={null} validateBooleanField(value: any, config: BooleanValidatorConfig): { value: boolean | null; error: string | null } ``` **Parameters:** * `value` - The value to validate * `config` - The configuration object **Returns:** Object with `value` (validated boolean or null) and `error` (error message or null) ### validateStrictBoolean Validates that a value is strictly a boolean `true` or `false`. **Signature:** ```typescript theme={null} validateStrictBoolean(value: any, config: BooleanValidatorConfig): { value: boolean | null; error: string | null } ``` ### validateTruthyBoolean Validates that a value corresponds to a "truthy" or "falsy" representation. **Signature:** ```typescript theme={null} validateTruthyBoolean(value: any, config: BooleanValidatorConfig): { value: boolean | null; error: string | null } ``` ### handleNullValue Processes a `null` or `undefined` value according to the `handleNull` configuration. **Signature:** ```typescript theme={null} handleNullValue(value: any, config: BooleanValidatorConfig): { value: boolean | null; error: string | null } ``` ### handleInvalidValue Processes a value that has been identified as invalid. **Signature:** ```typescript theme={null} handleInvalidValue(value: any, config: BooleanValidatorConfig): { value: boolean | null; error: string | null } ``` ## Troubleshooting ### Validation Not Applied * Ensure the `fields` array contains the correct field keys * Verify the `sheetSlug` (if used) matches the target sheet ### Case Sensitivity Issues For 'truthy' validation, if values like 'YES' aren't being validated correctly, check the `caseSensitive` option. It defaults to `false`, but if set to `true`, the case must match exactly. ### Unexpected Results Remember the order of operations: 1. Null handling is checked first 2. Specific validation type ('strict' or 'truthy') is applied 3. `defaultValue` is used as a final fallback for invalid values ## Notes ### Default Behavior If only the required `fields` and `validationType` options are provided, the plugin will apply validation to the specified fields on all sheets. For 'truthy' validation, it uses case-insensitive English mappings ('yes'/'no'). Null or undefined values are skipped by default. 
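For example, the defaults can be combined with a custom mapping that extends the built-in 'yes'/'no' values — a minimal sketch, with an illustrative field key:

```javascript theme={null}
import { validateBoolean } from '@flatfile/plugin-validate-boolean';

export default function (listener) {
  listener.use(
    validateBoolean({
      fields: ['optedIn'],
      validationType: 'truthy',
      // Custom entries take precedence over the language-based mappings
      customMapping: { '1': true, '0': false },
    })
  );
}
```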
### Special Considerations * The plugin supports built-in truthy/falsy mappings for English ('en'), Spanish ('es'), French ('fr'), and German ('de') * Custom mappings (`customMapping`) take precedence over language-based default mappings * The `sheetSlug` option allows applying different validation rules to different sheets within the same workbook ### Error Handling Patterns * The main plugin does not throw exceptions; it adds errors directly to Flatfile records * When a `defaultValue` is provided, the plugin corrects invalid values and adds an informational message for auditing * Helper functions return a consistent `{ value, error }` object pattern for easy error checking --- # Source: https://flatfile.com/docs/reference/cli.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # CLI Reference > Command line interface for developing, deploying, and managing Flatfile Agents The Flatfile Command Line Interface (CLI) provides tools to develop, deploy, and manage [Listeners](/core-concepts/listeners) in your Flatfile environment. Once listeners are deployed and hosted on Flatfile's secure cloud, they are called Agents. ## Installation ```bash theme={null} npx flatfile@latest ``` ## Configuration ### Authentication The CLI requires your Flatfile API key and Environment ID, provided either in Environment variables (ideally in a `.env` file) or as command flags. You can find your API key and Environment ID in your Flatfile dashboard under "[API Keys and Secrets](https://platform.flatfile.com/dashboard/keys-and-secrets)". **Recommended approach:** Use a `.env` file in your project root for secure, convenient, and consistent authentication. If you're using Git, make sure to add `.env` to your `.gitignore` file. **Using `.env` file** Create a `.env` file in your project root: ```bash theme={null} # .env file FLATFILE_API_KEY="your_api_key_here" FLATFILE_ENVIRONMENT_ID=your_environment_id_here ``` This approach keeps credentials out of your command history and makes it easy to switch between environments. **Using command flags** For one-off commands or CI/CD environments: ```bash theme={null} npx flatfile develop --token YOUR_API_KEY --env YOUR_ENV_ID ``` ### Regional Servers For improved performance and compliance, Flatfile supports regional deployments: | Region | API URL | | ------ | ---------------------------- | | US | platform.flatfile.com/api | | UK | platform.uk.flatfile.com/api | | EU | platform.eu.flatfile.com/api | | AU | platform.au.flatfile.com/api | | CA | platform.ca.flatfile.com/api | Set your regional URL in `.env`: ```bash theme={null} FLATFILE_API_URL=platform.eu.flatfile.com/api ``` Contact support to enable regional server deployment for your account. ## Development Workflow Use `develop` to run your listener locally with live reloading Use `deploy` to push your listener to Flatfile's cloud as an Agent Use `agents` commands to list, download, or delete deployed agents Use separate environments for development and production to avoid conflicts. The CLI will warn you when working in an environment with existing agents. ## Commands ### develop Run your listener locally with automatic file watching and live reloading. 
```bash theme={null} npx flatfile develop [file-path] ``` **Options** | Option | Description | | ------------- | ---------------------------------------------------- | | `[file-path]` | Path to listener file (auto-detects if not provided) | | `--token` | Flatfile API key | | `--env` | Environment ID | **Features** * Live reloading on file changes * Real-time HTTP request logging * Low-latency event streaming (10-50ms) * Event handler visibility **Example output** ```bash theme={null} > npx flatfile develop ✔ 1 environment(s) found for these credentials ✔ Environment "development" selected ncc: Version 0.36.1 ncc: Compiling file index.js into CJS ✓ 427ms GET 200 https://platform.flatfile.com/api/v1/subscription 12345 File change detected. 🚀 ✓ Connected to event stream for scope us_env_1234 ▶ commit:created 10:13:05.159 AM us_evt_1234 ↳ on(**, {}) ↳ on(commit:created, {"sheetSlug":"contacts"}) ``` *** ### deploy Deploy your listener as a Flatfile Agent. ```bash theme={null} npx flatfile deploy [file-path] [options] ``` **Options** | Option | Description | | -------------- | ---------------------------------------------------- | | `[file-path]` | Path to listener file (auto-detects if not provided) | | `--slug`, `-s` | Unique identifier for the agent | | `--ci` | Disable interactive prompts for CI/CD | | `--token` | Flatfile API key | | `--env` | Environment ID | **File detection order** 1. `./index.js` 2. `./index.ts` 3. `./src/index.js` 4. `./src/index.ts` **Examples** ```bash theme={null} # Basic deployment npx flatfile deploy # Deploy with custom slug npx flatfile deploy --slug my-agent # CI/CD deployment npx flatfile deploy ./src/listener.ts --ci ``` **Multiple agents** Deploy multiple agents to the same environment using unique slugs: ```bash theme={null} npx flatfile deploy --slug agent-one npx flatfile deploy --slug agent-two ``` Without a slug, the CLI updates your existing agent or creates one with slug `default`. *** ### agents list Display all deployed agents in your environment. ```bash theme={null} npx flatfile agents list ``` Shows each agent's: * Agent ID * Slug * Deployment status * Last activity *** ### agents download Download a deployed agent's source code. ```bash theme={null} npx flatfile agents download ``` **Use cases** * Examine deployed code * Modify existing agents * Back up source code * Debug deployment issues Use `agents list` to find the agent slug you need. *** ### agents delete Remove a deployed agent. ```bash theme={null} npx flatfile agents delete ``` **Options** | Option | Description | | ------------------ | ---------------------------- | | `--agentId`, `-ag` | Use agent ID instead of slug | *** ## Related Resources * [Listeners](/core-concepts/listeners) - Core concept documentation * [Events](/reference/events) - Event system reference --- # Source: https://flatfile.com/docs/plugins/constraints.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Constraints Plugin > Extend Flatfile validation capabilities with custom validation logic for complex field and sheet-level constraints The Constraints plugin extends Flatfile's validation capabilities by allowing developers to define custom validation logic, called "external constraints," within a listener. These custom rules can then be applied to specific fields or to the entire sheet through the blueprint configuration. 
The main purpose is to handle complex validation scenarios that are not covered by Flatfile's standard built-in constraints. Use cases include: * Field-level validation based on complex logic (e.g., checking a value's format against a specific regular expression not available by default) * Cross-field validation where the validity of one field depends on the value of another (e.g., ensuring 'endDate' is after 'startDate') * Validating data against an external system or API (e.g., checking if a product SKU exists in an external database) * Applying a single validation rule to multiple fields simultaneously The plugin works by matching a `validator` key in the blueprint with a corresponding handler registered in the listener. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-constraints ``` ## Configuration & Parameters Configuration for this plugin is not set on the plugin itself, but within the Sheet's blueprint configuration. The plugin reads this blueprint to apply the correct logic. ### Field-Level Constraints For field-level constraints (used with `externalConstraint`), add a constraint object to a field's `constraints` array: | Parameter | Type | Required | Description | | ----------- | ------ | -------- | ---------------------------------------------------------------------------------------- | | `type` | string | Yes | Must be set to 'external' to indicate it's a custom validation rule | | `validator` | string | Yes | A unique name for your validator used to link the blueprint rule to the validation logic | | `config` | object | No | An arbitrary object containing any parameters or settings your validation logic needs | ### Sheet-Level Constraints For sheet-level constraints (used with `externalSheetConstraint`), add a constraint object to the sheet's top-level `constraints` array: | Parameter | Type | Required | Description | | ----------- | --------- | -------- | ----------------------------------------------------------- | | `type` | string | Yes | Must be set to 'external' | | `validator` | string | Yes | A unique name for your sheet-level validator | | `fields` | string\[] | Yes | An array of field keys that this constraint applies to | | `config` | object | No | An arbitrary object with settings for your validation logic | ### Default Behavior If no `external` type constraints are defined in the blueprint, the plugin will have no effect. The validation logic only runs when a matching `validator` is found in the blueprint for the current sheet. 
## Usage Examples ### Basic Field-Level Constraint ```javascript JavaScript theme={null} // In your listener file (e.g., index.js) import { listener } from '@flatfile/listener' import { externalConstraint } from '@flatfile/plugin-constraints' listener.use( externalConstraint('minLength', (value, key, { config, record }) => { if (typeof value === 'string' && value.length < config.len) { record.addError(key, `Must be at least ${config.len} characters.`) } }) ) // In your blueprint file (e.g., workbook.js) const blueprint = { sheets: [ { name: 'Promotions', slug: 'promos', fields: [ { key: 'promo_code', type: 'string', label: 'Promo Code', constraints: [ { type: 'external', validator: 'minLength', config: { len: 8 } }, ], }, ], }, ], } ``` ```typescript TypeScript theme={null} // In your listener file (e.g., index.ts) import { listener } from '@flatfile/listener' import { externalConstraint } from '@flatfile/plugin-constraints' listener.use( externalConstraint('minLength', (value, key, { config, record }) => { if (typeof value === 'string' && value.length < config.len) { record.addError(key, `Must be at least ${config.len} characters.`) } }) ) // In your blueprint file (e.g., workbook.ts) const blueprint = { sheets: [ { name: 'Promotions', slug: 'promos', fields: [ { key: 'promo_code', type: 'string', label: 'Promo Code', constraints: [ { type: 'external', validator: 'minLength', config: { len: 8 } }, ], }, ], }, ], } ``` ### Configurable Constraint ```javascript JavaScript theme={null} // In your listener file (e.g., index.js) import { listener } from '@flatfile/listener' import { externalConstraint } from '@flatfile/plugin-constraints' // This 'length' validator can be used for min or max length checks listener.use( externalConstraint('length', (value, key, { config, record }) => { if (typeof value !== 'string') return if (config.max && value.length > config.max) { record.addError(key, `Text must be under ${config.max} characters.`) } if (config.min && value.length < config.min) { record.addError(key, `Text must be over ${config.min} characters.`) } }) ) // In your blueprint file (e.g., workbook.js) const blueprint = { sheets: [ { name: 'Content', slug: 'content', fields: [ { key: 'title', type: 'string', label: 'Title', constraints: [ { type: 'external', validator: 'length', config: { max: 50 } }, ], }, { key: 'description', type: 'string', label: 'Description', constraints: [ { type: 'external', validator: 'length', config: { min: 10 } }, ], }, ], }, ], } ``` ```typescript TypeScript theme={null} // In your listener file (e.g., index.ts) import { listener } from '@flatfile/listener' import { externalConstraint } from '@flatfile/plugin-constraints' // This 'length' validator can be used for min or max length checks listener.use( externalConstraint('length', (value, key, { config, record }) => { if (typeof value !== 'string') return if (config.max && value.length > config.max) { record.addError(key, `Text must be under ${config.max} characters.`) } if (config.min && value.length < config.min) { record.addError(key, `Text must be over ${config.min} characters.`) } }) ) // In your blueprint file (e.g., workbook.ts) const blueprint = { sheets: [ { name: 'Content', slug: 'content', fields: [ { key: 'title', type: 'string', label: 'Title', constraints: [ { type: 'external', validator: 'length', config: { max: 50 } }, ], }, { key: 'description', type: 'string', label: 'Description', constraints: [ { type: 'external', validator: 'length', config: { min: 10 } }, ], }, ], }, ], } ``` ### Sheet-Level 
Constraint ```javascript JavaScript theme={null} // In your listener file (e.g., index.js) import { listener } from '@flatfile/listener' import { externalSheetConstraint } from '@flatfile/plugin-constraints' listener.use( externalSheetConstraint('contact-required', (values, keys, { record }) => { if (!values.email && !values.phone) { const message = 'Either Email or Phone must be provided.' // Add the error to both fields keys.forEach((key) => record.addError(key, message)) } }) ) // In your blueprint file (e.g., workbook.js) const blueprint = { sheets: [ { name: 'Contacts', slug: 'contacts', fields: [ { key: 'email', type: 'string', label: 'Email' }, { key: 'phone', type: 'string', label: 'Phone' }, ], constraints: [ { type: 'external', validator: 'contact-required', fields: ['email', 'phone'], }, ], }, ], } ``` ```typescript TypeScript theme={null} // In your listener file (e.g., index.ts) import { listener } from '@flatfile/listener' import { externalSheetConstraint } from '@flatfile/plugin-constraints' listener.use( externalSheetConstraint('contact-required', (values, keys, { record }) => { if (!values.email && !values.phone) { const message = 'Either Email or Phone must be provided.' // Add the error to both fields keys.forEach((key) => record.addError(key, message)) } }) ) // In your blueprint file (e.g., workbook.ts) const blueprint = { sheets: [ { name: 'Contacts', slug: 'contacts', fields: [ { key: 'email', type: 'string', label: 'Email' }, { key: 'phone', type: 'string', label: 'Phone' }, ], constraints: [ { type: 'external', validator: 'contact-required', fields: ['email', 'phone'], }, ], }, ], } ``` ## API Reference ### externalConstraint Registers a listener for a field-level custom validation rule. The provided callback function will be executed for every record on each field that has a matching `external` constraint in the blueprint. **Signature:** ```typescript theme={null} externalConstraint( validator: string, cb: ( value: any, key: string, support: { config: any, record: FlatfileRecord, property: Flatfile.Property, event: FlatfileEvent } ) => any | Promise<any> ) ``` **Parameters:** * `validator` (string): The name of the validator. This must match the `validator` property in the field's constraint configuration in the blueprint. * `cb` (function): A callback function that contains the validation logic. It receives: * `value` (any): The value of the cell being validated * `key` (string): The key of the field being validated * `support` (object): An object containing helpful context: * `config` (any): The `config` object from the blueprint constraint * `record` (FlatfileRecord): The full record object, which can be used to get other values or add errors * `property` (Flatfile.Property): The full property (field) definition from the sheet schema * `event` (FlatfileEvent): The raw event that triggered the validation **Error Handling Examples:** ```javascript JavaScript theme={null} // Using record.addError() (Recommended) listener.use( externalConstraint('must-be-positive', (value, key, { record }) => { if (typeof value === 'number' && value <= 0) { record.addError(key, 'Value must be a positive number.') } }) ) // Throwing an Error listener.use( externalConstraint('must-be-positive', (value) => { if (typeof value === 'number' && value <= 0) { throw 'Value must be a positive number.'
} }) ) ``` ```typescript TypeScript theme={null} // Using record.addError() (Recommended) listener.use( externalConstraint('must-be-positive', (value, key, { record }) => { if (typeof value === 'number' && value <= 0) { record.addError(key, 'Value must be a positive number.') } }) ) // Throwing an Error listener.use( externalConstraint('must-be-positive', (value) => { if (typeof value === 'number' && value <= 0) { throw 'Value must be a positive number.' } }) ) ``` ### externalSheetConstraint Registers a listener for a sheet-level custom validation rule that involves multiple fields. The callback is executed once per record for each matching `external` constraint in the sheet's top-level `constraints` array. **Signature:** ```typescript theme={null} externalSheetConstraint( validator: string, cb: ( values: Record<string, any>, keys: string[], support: { config: any, record: FlatfileRecord, properties: Flatfile.Property[], event: FlatfileEvent } ) => any | Promise<any> ) ``` **Parameters:** * `validator` (string): The name of the validator. This must match the `validator` property in the sheet's constraint configuration. * `cb` (function): A callback function that contains the validation logic. It receives: * `values` (Record\<string, any>): An object where keys are the field keys from the constraint's `fields` array and values are the corresponding cell values for the current record * `keys` (string\[]): An array of the field keys this constraint applies to (from the `fields` property in the blueprint) * `support` (object): An object containing helpful context: * `config` (any): The `config` object from the blueprint constraint * `record` (FlatfileRecord): The full record object * `properties` (Flatfile.Property\[]): An array of the full property (field) definitions for the fields involved in this constraint * `event` (FlatfileEvent): The raw event that triggered the validation **Error Handling Examples:** ```javascript JavaScript theme={null} // Using record.addError() - allows different error messages for different fields listener.use( externalSheetConstraint('date-range', (values, keys, { record }) => { if (values.startDate && values.endDate && values.startDate > values.endDate) { record.addError('startDate', 'Start date must be before end date.') record.addError('endDate', 'End date must be after start date.') } }) ) // Throwing an Error - applies same error message to ALL fields listener.use( externalSheetConstraint('date-range', (values) => { if (values.startDate && values.endDate && values.startDate > values.endDate) { throw 'Start date must be before end date.' } }) ) ``` ```typescript TypeScript theme={null} // Using record.addError() - allows different error messages for different fields listener.use( externalSheetConstraint('date-range', (values, keys, { record }) => { if (values.startDate && values.endDate && values.startDate > values.endDate) { record.addError('startDate', 'Start date must be before end date.') record.addError('endDate', 'End date must be after start date.') } }) ) // Throwing an Error - applies same error message to ALL fields listener.use( externalSheetConstraint('date-range', (values) => { if (values.startDate && values.endDate && values.startDate > values.endDate) { throw 'Start date must be before end date.' } }) ) ``` ## Troubleshooting * **Validator Not Firing:** Ensure the `validator` string in your blueprint constraint exactly matches the string you passed to `externalConstraint` or `externalSheetConstraint` in your listener.
* **Constraint Not Recognized:** Double-check that the constraint object in your blueprint has `type: 'external'`. * **Sheet Constraint Issues:** For `externalSheetConstraint`, make sure the sheet-level constraint in the blueprint includes the `fields` array, listing the keys of all fields involved in the validation. ## Notes ### Special Considerations * The plugin fetches and caches the sheet schema (blueprint) once per data submission (`commit:created` event). For very high-frequency operations, this could be a performance consideration, but for most use cases, it is not an issue. * The plugin relies on `@flatfile/plugin-record-hook` to process records in bulk. ### Error Handling Patterns The plugin supports two primary error handling patterns within the validation callback: 1. **Imperative:** Call `record.addError(key, message)` to add an error to a specific field. This is useful for sheet-level constraints where you might want to flag only one of the involved fields. 2. **Declarative:** `throw new Error(message)` or `throw "message"`. The plugin will catch the thrown error. For `externalConstraint`, the error is added to the field being validated. For `externalSheetConstraint`, the same error message is added to *all* fields listed in the constraint's `fields` array. --- # Source: https://flatfile.com/docs/plugins/currency.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Currency Conversion Plugin > Automatically converts currency values from a source currency to a target currency using Open Exchange Rates API with support for historical exchange rates. This plugin automatically converts currency values from a source currency to a target currency for records within a Flatfile Sheet. It utilizes the Open Exchange Rates API to fetch both the latest and historical exchange rates. The primary use case is for processing financial data, such as transaction logs or expense reports, where amounts need to be standardized into a single currency. The plugin can use a date from another field in the record to fetch the correct historical rate for the conversion. It can optionally store the calculated exchange rate and the date of conversion back into the record. The plugin operates by hooking into the record processing lifecycle, making it a seamless part of the data import process. 
## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-convert-currency ``` ## Configuration & Parameters The plugin requires a configuration object with the following parameters: ### Required Parameters | Parameter | Type | Description | | ---------------------- | ------ | ---------------------------------------------------------------------- | | `sheetSlug` | string | The slug of the sheet the plugin should operate on | | `sourceCurrency` | string | The three-letter currency code (e.g., "USD") of the source amounts | | `targetCurrency` | string | The three-letter currency code (e.g., "EUR") to convert the amounts to | | `amountField` | string | The field key/slug that contains the numerical amount to be converted | | `convertedAmountField` | string | The field key/slug where the converted amount will be stored | ### Optional Parameters | Parameter | Type | Description | Default Behavior | | --------------------- | ------ | ---------------------------------------------------------------------------------------------------- | ------------------------------------------------ | | `dateField` | string | The field key/slug containing the date (in YYYY-MM-DD format) for fetching historical exchange rates | Uses current date to fetch latest exchange rates | | `exchangeRateField` | string | The field key/slug where the calculated exchange rate for the conversion will be stored | Exchange rate is not stored on the record | | `conversionDateField` | string | The field key/slug where the timestamp of the conversion will be stored in ISO format | Conversion date is not stored on the record | ## Usage Examples ### Basic Usage ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { currencyConverterPlugin } from "@flatfile/plugin-convert-currency"; export default function (listener) { listener.use( currencyConverterPlugin({ sheetSlug: "transactions", sourceCurrency: "USD", targetCurrency: "EUR", amountField: "amount", convertedAmountField: "amountInEUR", }) ); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { currencyConverterPlugin } from "@flatfile/plugin-convert-currency"; export default function (listener: FlatfileListener) { listener.use( currencyConverterPlugin({ sheetSlug: "transactions", sourceCurrency: "USD", targetCurrency: "EUR", amountField: "amount", convertedAmountField: "amountInEUR", }) ); } ``` ### Full Configuration with Historical Rates ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { currencyConverterPlugin } from "@flatfile/plugin-convert-currency"; export default function (listener) { listener.use( currencyConverterPlugin({ sheetSlug: "transactions", sourceCurrency: "USD", targetCurrency: "EUR", amountField: "amount", dateField: "transactionDate", convertedAmountField: "amountInEUR", exchangeRateField: "exchangeRate", conversionDateField: "conversionDate", }) ); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { currencyConverterPlugin } from "@flatfile/plugin-convert-currency"; export default function (listener: FlatfileListener) { listener.use( currencyConverterPlugin({ sheetSlug: "transactions", sourceCurrency: "USD", targetCurrency: "EUR", amountField: "amount", dateField: "transactionDate", convertedAmountField: "amountInEUR", exchangeRateField: "exchangeRate", conversionDateField: "conversionDate", }) ); } ``` ### Using 
Utility Functions ```javascript JavaScript theme={null} import { validateAmount, validateDate, convertCurrency, calculateExchangeRate } from "@flatfile/plugin-convert-currency"; // Validate an amount const amountResult = validateAmount(150.75); // Returns: { value: 150.75 } // Validate a date const dateResult = validateDate('2023-10-27'); // Returns: { value: '2023-10-27' } // Convert currency const converted = convertCurrency(100, 0.92, 1.0); // Returns: 108.6957 // Calculate exchange rate const rate = calculateExchangeRate(0.92, 0.80); // Returns: 0.869565 ``` ```typescript TypeScript theme={null} import { validateAmount, validateDate, convertCurrency, calculateExchangeRate } from "@flatfile/plugin-convert-currency"; // Validate an amount const amountResult = validateAmount(150.75); // Returns: { value: 150.75 } // Validate a date const dateResult = validateDate('2023-10-27'); // Returns: { value: '2023-10-27' } // Convert currency const converted: number = convertCurrency(100, 0.92, 1.0); // Returns: 108.6957 // Calculate exchange rate const rate: number = calculateExchangeRate(0.92, 0.80); // Returns: 0.869565 ``` ## Troubleshooting ### Common Error Messages | Error | Cause | Solution | | -------------------------------- | ----------------------------------------------- | ------------------------------------------------------------- | | "Invalid source/target currency" | Currency codes are not valid three-letter codes | Check that currency codes are valid and supported by the API | | "Network error" or "Status: 401" | API key issues | Verify `OPENEXCHANGERATES_API_KEY` is correct and not expired | | "Amount must be a valid number" | Invalid amount data | Ensure amount field contains numeric values | | "Invalid date format" | Date not in YYYY-MM-DD format | Ensure date field uses YYYY-MM-DD format | ### Error Handling The plugin handles errors gracefully by attaching them directly to records in Flatfile: * **Validation errors**: Attached to specific fields using `record.addError(fieldName, message)` * **API/Network errors**: Attached as general record errors using `record.addError('general', message)` ## Notes ### Requirements * An active subscription to the Open Exchange Rates API is required * The `OPENEXCHANGERATES_API_KEY` environment variable must be set in your Flatfile Space with your API key ### Limitations * All currency conversions are routed through USD as a base currency due to Open Exchange Rates API limitations on free/lower-tier plans * The `dateField` must contain dates in `YYYY-MM-DD` format only * Converted amounts are fixed to 4 decimal places * Exchange rates are fixed to 6 decimal places ### Default Behavior * When `dateField` is not provided, the plugin uses the current date to fetch the latest exchange rates * When `exchangeRateField` is not provided, the calculated exchange rate is not stored on the record * When `conversionDateField` is not provided, the conversion timestamp is not stored on the record * Empty date fields default to the current date in YYYY-MM-DD format --- # Source: https://flatfile.com/docs/guides/custom-extractors.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Custom Extractors > Build custom file processing plugins to handle unique data formats and transform files into structured data ## What Are Custom Extractors? 
Custom extractors are specialized plugins that enable you to handle file formats that aren't natively supported by Flatfile's existing [plugins](/plugins). They process uploaded files, extract structured data, and provide that data for mapping into [Sheets](/core-concepts/sheets) as [Records](/core-concepts/records). This guide covers everything you need to know to build custom extractors. Common use cases include: * Legacy system data exports (custom delimited files, fixed-width formats) * Industry-specific formats (healthcare, finance, manufacturing) * Multi-format processors (handling various formats in one extractor) * Binary file handlers (images with metadata, proprietary formats) ## Architecture Overview ### Core Components Custom extractors are built using the `@flatfile/util-extractor` utility, which provides a standardized framework for file processing: ```javascript theme={null} import { Extractor } from "@flatfile/util-extractor"; export const MyCustomExtractor = (options = {}) => { return Extractor(".myformat", "custom", myCustomParser, options); }; ``` Once you've created your extractor, you must register it in a [listener](/core-concepts/listeners) to be used. This will ensure that the extractor responds to the `file:created` [event](/reference/events#file%3Acreated) and processes your files. ```javascript theme={null} // . . . other imports import { MyCustomExtractor } from "./my-custom-extractor"; export default function (listener) { // . . . other listener setup listener.use(MyCustomExtractor()); } ``` ### Handling Multiple File Extensions To support multiple file extensions, use a RegExp pattern: ```javascript theme={null} // Support both .pipe and .custom extensions export const MultiExtensionExtractor = (options = {}) => { return Extractor(/\.(pipe|custom)$/i, "pipe", parseCustomFormat, options); }; // Support JSON variants export const JSONExtractor = (options = {}) => { return Extractor(/\.(json|jsonl|jsonlines)$/i, "json", parseJSONFormat, options); }; ``` ### Key Architecture Elements | Component | Purpose | Required | | ------------------- | -------------------------------------------------------------- | -------- | | **File Extension** | String or RegExp of supported file extension(s) | ✓ | | **Extractor Type** | String identifier for the extractor type | ✓ | | **Parser Function** | Core logic that converts file buffer to structured data | ✓ | | **Options** | Configuration for chunking, parallelization, and customization | - | ### Data Flow 1. **File Upload** → Flatfile receives file with matching extension 2. **Event Trigger** → `file:created` [event](/reference/events#file%3Acreated) fires 3. **Parser Execution** → Your parser function processes the file buffer 4. **Data Structuring** → Raw data is converted to WorkbookCapture format and provided to Flatfile for mapping into [Sheets](/core-concepts/sheets) as [Records](/core-concepts/records) 5. **Job Completion** → Processing status is reported to user ## Getting Started Remember that custom extractors are powerful tools for handling unique data formats. Start with simple implementations and gradually add complexity as needed. ### Prerequisites Install the required packages. You may also want to review our [Coding Tutorial](/coding-tutorial/overview) if you haven't created a [Listener](/core-concepts/listeners) yet. ```bash theme={null} npm install @flatfile/util-extractor @flatfile/listener @flatfile/api ``` ### Basic Implementation Let's create a simple custom extractor for a pipe-delimited format. 
This will be used to process files with the `.pipe` or `.psv` extension that look like this: ```psv theme={null} name|email|phone John Doe|john@example.com|123-456-7890 Jane Smith|jane@example.com|098-765-4321 ``` ```javascript theme={null} import { Extractor } from "@flatfile/util-extractor"; // Parser function - converts Buffer to WorkbookCapture function parseCustomFormat(buffer) { const content = buffer.toString('utf-8'); const lines = content.split('\n').filter(line => line.trim()); if (lines.length === 0) { throw new Error('Empty file'); } // First line contains headers const headers = lines[0].split('|').map(h => h.trim()); // Remaining lines contain data const data = lines.slice(1).map(line => { const values = line.split('|').map(v => v.trim()); const record = {}; headers.forEach((header, index) => { record[header] = { value: values[index] || '' }; }); return record; }); return { Sheet1: { headers, data } }; } // Create the extractor export const CustomPipeExtractor = (options = {}) => { return Extractor(/\.(pipe|psv)$/i, "pipe", parseCustomFormat, options); }; ``` And now let's import and register it in your [Listener](/core-concepts/listeners). ```javascript theme={null} // . . . other imports import { CustomPipeExtractor } from "./custom-pipe-extractor"; export default function (listener) { // . . . other listener setup listener.use(CustomPipeExtractor()); } ``` That's it! Your extractor is now registered and will be used to process pipe-delimited files with the `.pipe` or `.psv` extension. ## Advanced Examples ### Multi-Sheet Parser Let's construct an Extractor to handle files that contain multiple data sections. This will be used to process files with the `.multi` or `.sections` extension that look like this: ```text theme={null} ---SECTION--- SHEET:Sheet1 name,email,phone John Doe,john@example.com,123-456-7890 Jane Smith,jane@example.com,098-765-4321 ---SECTION--- SHEET:Sheet2 name,email,phone Jane Doe,jane@example.com,123-456-7891 John Smith,john@example.com,098-765-4322 ---SECTION--- ``` ```javascript theme={null} import { Extractor } from "@flatfile/util-extractor"; function parseMultiSheetFormat(buffer) { const content = buffer.toString('utf-8'); const sections = content.split('---SECTION---'); const workbook = {}; sections.forEach((section, index) => { if (!section.trim()) return; const lines = section.trim().split('\n'); const sheetName = lines[0].replace('SHEET:', '').trim() || `Sheet${index + 1}`; const headers = lines[1].split(',').map(h => h.trim()); const data = lines.slice(2).map(line => { const values = line.split(',').map(v => v.trim()); const record = {}; headers.forEach((header, idx) => { record[header] = { value: values[idx] || '' }; }); return record; }); workbook[sheetName] = { headers, data }; }); return workbook; } export const MultiSheetExtractor = (options = {}) => { return Extractor(/\.(multi|sections)$/i, "multi-sheet", parseMultiSheetFormat, options); }; ``` Now let's register it in your [Listener](/core-concepts/listeners). ```javascript theme={null} // . . . other imports import { MultiSheetExtractor } from "./multi-sheet-extractor"; export default function (listener) { // . . . other listener setup listener.use(MultiSheetExtractor()); } ``` ### Binary Format Handler This example processes binary files with structured data, handling the `.bin` or `.dat` extension. Due to the nature of the binary format, we can't easily present a sample import here.
```javascript theme={null} import { Extractor } from "@flatfile/util-extractor"; function parseBinaryFormat(buffer) { // Example: Custom binary format with header + records let offset = 0; // Read header (first 12 bytes) const magic = buffer.readUInt32LE(offset); offset += 4; const version = buffer.readUInt16LE(offset); offset += 2; const recordCount = buffer.readUInt32LE(offset); offset += 4; const fieldCount = buffer.readUInt16LE(offset); offset += 2; if (magic !== 0xDEADBEEF) { throw new Error('Invalid file format'); } // Read field definitions const headers = []; for (let i = 0; i < fieldCount; i++) { const nameLength = buffer.readUInt16LE(offset); offset += 2; const name = buffer.toString('utf-8', offset, offset + nameLength); offset += nameLength; const type = buffer.readUInt8(offset); offset += 1; headers.push(name); } // Read records const data = []; for (let i = 0; i < recordCount; i++) { const record = {}; headers.forEach(header => { const valueLength = buffer.readUInt16LE(offset); offset += 2; const value = buffer.toString('utf-8', offset, offset + valueLength); offset += valueLength; record[header] = { value }; }); data.push(record); } return { Sheet1: { headers, data } }; } export const BinaryExtractor = (options = {}) => { return Extractor(/\.(bin|dat)$/i, "binary", parseBinaryFormat, options); }; ``` And, once again, let's register it in your [Listener](/core-concepts/listeners). ```javascript theme={null} // . . . other imports import { BinaryExtractor } from "./binary-extractor"; export default function (listener) { // . . . other listener setup listener.use(BinaryExtractor()); } ``` ### Configuration-Driven Extractor Create a flexible extractor that can be configured for different formats, handling different delimiters, line endings, and other formatting options.
```javascript theme={null} import { Extractor } from "@flatfile/util-extractor"; function createConfigurableParser(config) { return function parseConfigurableFormat(buffer) { const content = buffer.toString(config.encoding || 'utf-8'); let lines = content.split(config.lineDelimiter || '\n'); // Skip header lines if specified if (config.skipLines) { lines = lines.slice(config.skipLines); } // Filter empty lines if (config.skipEmptyLines) { lines = lines.filter(line => line.trim()); } if (lines.length === 0) { throw new Error('No data found'); } // Extract headers let headers; let dataStartIndex = 0; if (config.explicitHeaders) { headers = config.explicitHeaders; } else { headers = lines[0].split(config.fieldDelimiter || ',').map(h => h.trim()); dataStartIndex = 1; } // Process data const data = lines.slice(dataStartIndex).map(line => { const values = line.split(config.fieldDelimiter || ','); const record = {}; headers.forEach((header, index) => { let value = values[index] || ''; // Apply transformations if (config.transforms && config.transforms[header]) { value = config.transforms[header](value); } // Type conversion if (config.typeConversion) { if (!isNaN(value) && value !== '') { value = Number(value); } else if (value.toLowerCase() === 'true' || value.toLowerCase() === 'false') { value = value.toLowerCase() === 'true'; } } record[header] = { value }; }); return record; }); return { [config.sheetName || 'Sheet1']: { headers, data } }; }; } export const ConfigurableExtractor = (userConfig = {}) => { const defaultConfig = { encoding: 'utf-8', lineDelimiter: '\n', fieldDelimiter: ',', skipLines: 0, skipEmptyLines: true, typeConversion: false, sheetName: 'Sheet1' }; const config = { ...defaultConfig, ...userConfig }; return Extractor( config.fileExtension || ".txt", "configurable", createConfigurableParser(config), { chunkSize: config.chunkSize || 10000, parallel: config.parallel || 1 } ); }; ``` Now let's register two different configurable extractors in our [Listener](/core-concepts/listeners). The first will be used to process files with the `.custom` extension that look like this, while transforming dates and amount values: ```text theme={null} Extraneous text More extraneous text name & date & amount John Doe & 1/1/2021 & 100.00 Jane Smith & 1/2/2021 & 200.00 ``` The second will be used to process files with the `.pipe` or `.special` extension that look like this: ```text theme={null} Extraneous text More extraneous text name|date|amount John Doe|2021-01-01|100.00 Jane Smith|2021-01-02|200.00 ``` ```javascript theme={null} // . . . other imports import { ConfigurableExtractor } from "./configurable-extractor"; export default function (listener) { // . . .
other listener setup // Custom extractor with configuration for .custom files listener.use(ConfigurableExtractor({ fileExtension: ".custom", fieldDelimiter: " & ", skipLines: 2, typeConversion: true, transforms: { 'date': (value) => new Date(value).toISOString(), 'amount': (value) => parseFloat(value).toFixed(2) } })); // Custom extractor with configuration for .pipe and .special files listener.use(ConfigurableExtractor({ fileExtension: /\.(pipe|special)$/i, fieldDelimiter: "|", skipLines: 2, typeConversion: true })); } ``` ## Reference ### API ```typescript theme={null} function Extractor( fileExt: string | RegExp, extractorType: string, parseBuffer: ( buffer: Buffer, options: any ) => WorkbookCapture | Promise<WorkbookCapture>, options?: Record<string, any> ): (listener: FlatfileListener) => void ``` | Parameter | Type | Description | | --------------- | --------------------- | -------------------------------------------------------------------------- | | `fileExt` | `string` or `RegExp` | File extension to process (e.g., `".custom"` or `/\.(custom\|special)$/i`) | | `extractorType` | `string` | Identifier for the extractor type (e.g., "custom", "binary") | | `parseBuffer` | `ParserFunction` | Function that converts Buffer to WorkbookCapture | | `options` | `Record<string, any>` | Optional configuration object | #### Options | Option | Type | Default | Description | | ----------- | --------- | ------- | -------------------------------------- | | `chunkSize` | `number` | `5000` | Records to process per batch | | `parallel` | `number` | `1` | Number of concurrent processing chunks | | `debug` | `boolean` | `false` | Enable debug logging | #### Parser Function Options Your `parseBuffer` function receives additional options beyond what you pass to `Extractor`: | Option | Type | Description | | ------------------------ | --------- | ------------------------------------------------- | | `fileId` | `string` | The ID of the file being processed | | `fileExt` | `string` | The file extension (e.g., ".csv") | | `headerSelectionEnabled` | `boolean` | Whether header selection is enabled for the space | ### Data Structures #### WorkbookCapture Structure The parser function must return a `WorkbookCapture` object: ```javascript theme={null} const workbookCapture = { "SheetName1": { headers: ["field1", "field2", "field3"], data: [ { field1: { value: "value1" }, field2: { value: "value2" }, field3: { value: "value3" } }, // ...
more records ] }, "SheetName2": { headers: ["col1", "col2"], data: [ { col1: { value: "data1" }, col2: { value: "data2" } } ] } }; ``` #### Cell Value Objects Each cell value should use the `Flatfile.RecordData` format: ```javascript theme={null} const recordData = { field1: { value: "john@example.com" }, field2: { value: "John Doe" }, field3: { value: "invalid-email", messages: [ { type: "error", message: "Invalid email format" } ] } }; ``` #### Message Types | Type | Description | UI Effect | | --------- | --------------------- | -------------------------------------------------------------------------------------------- | | `error` | Validation error | Red highlighting, blocks [Actions](/core-concepts/actions) with the `hasAllValid` constraint | | `warning` | Warning message | Yellow highlighting, allows submission | | `info` | Informational message | Mouseover tooltip, allows submission | ### TypeScript Interfaces ```typescript theme={null} type ParserFunction = ( buffer: Buffer, options: any ) => WorkbookCapture | Promise<WorkbookCapture>; type WorkbookCapture = Record<string, SheetCapture>; type SheetCapture = { headers: string[]; descriptions?: Record<string, string | null> | null; data: Flatfile.RecordData[]; metadata?: { rowHeaders: number[] }; }; ``` ## Troubleshooting Common Issues ### Files Not Processing **Symptoms**: Files upload but no extraction occurs **Solutions**: * Verify file extension matches `fileExt` configuration * Check [Listener](/core-concepts/listeners) is properly deployed and running * Enable debug logging to see processing details ```javascript theme={null} const extractor = CustomExtractor({ debug: true }); // Make sure file extensions match in the Extractor call ``` ### Parser Errors **Symptoms**: Jobs fail with parsing errors **Solutions**: * Add try-catch blocks in parser function * Validate input data before processing * Return helpful error messages ```javascript theme={null} function parseCustomFormat(buffer) { try { const content = buffer.toString('utf-8'); if (!content || content.trim() === '') { throw new Error('File is empty'); } // ... parsing logic } catch (error) { throw new Error(`Parse error: ${error.message}`); } } ``` ### Memory Issues **Symptoms**: Large files cause timeouts or memory errors **Solutions**: * Reduce chunk size for large files * Implement streaming for very large files * Use parallel processing carefully ```javascript theme={null} const extractor = CustomExtractor({ chunkSize: 1000, // Smaller chunks parallel: 1 // Reduce parallelization }); ``` ### Performance Problems **Symptoms**: Slow processing, timeouts **Solutions**: * Optimize parser algorithm * Use appropriate chunk sizes * Consider parallel processing for I/O-bound operations ```javascript theme={null} // Optimize for large files const extractor = CustomExtractor({ chunkSize: 5000, parallel: 3 }); ``` --- # Source: https://flatfile.com/docs/plugins/date.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Date Format Normalizer > Automatically parse and standardize date values during data import, converting various date formats into a consistent output format. The Date Format Normalizer plugin for Flatfile is designed to automatically parse and standardize date values during the data import process. Its primary purpose is to detect various common date and time formats within specified fields and convert them into a single, consistent output format.
This is useful when importing data from different sources that may use different date conventions (e.g., 'MM/DD/YYYY', 'YYYY-MM-DD', 'Jan 15, 2023'). The plugin can be configured to operate on specific fields across one or all sheets, and it can handle both date-only and date-time values. If a date string cannot be parsed, the plugin adds an error to the corresponding cell, alerting the user to the issue. ## Installation Install the plugin via npm: ```bash theme={null} npm install @flatfile/plugin-validate-date ``` ## Configuration & Parameters The plugin accepts a configuration object with the following parameters: ### sheetSlug * **Type:** `string` (optional) * **Default:** `'**'` (all sheets) * **Description:** The slug of the sheet to which the date normalization should be applied. If this option is omitted, the plugin will apply to all sheets in the workbook. ### dateFields * **Type:** `string[]` (required) * **Description:** An array of field keys (the column names) that contain date values needing normalization. The plugin will process each field listed in this array for every record. ### outputFormat * **Type:** `string` (required) * **Description:** A string defining the desired output format for the dates, following the `date-fns` format patterns (e.g., 'MM/dd/yyyy', 'yyyy-MM-dd HH:mm:ss'). ### includeTime * **Type:** `boolean` (required) * **Description:** A boolean that determines whether to include the time component in the final output. If set to `false`, any time information from the parsed date will be stripped, leaving only the date part. If `true`, the time will be included as formatted by `outputFormat`. ### locale * **Type:** `string` (optional) * **Default:** `'en-US'` (hardcoded) * **Description:** Specifies the locale for date parsing. Note: Although this option exists in the configuration interface, the current implementation hardcodes the locale to 'en-US' and does not use the value provided in this parameter. ## Usage Examples ### Basic Usage This example applies date normalization to the 'start\_date' field on all sheets, converting dates to 'YYYY-MM-DD' format. ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener) { listener.use( validateDate({ dateFields: ['start_date'], outputFormat: 'yyyy-MM-dd', includeTime: false }) ) } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener: FlatfileListener) { listener.use( validateDate({ dateFields: ['start_date'], outputFormat: 'yyyy-MM-dd', includeTime: false }) ) } ``` ### Configuration Example This example configures the plugin to run only on the 'contacts' sheet. It normalizes two different date fields, 'birth\_date' and 'registration\_date', to the 'MM/dd/yyyy' format and excludes time. 
```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener) { listener.use( validateDate({ sheetSlug: 'contacts', dateFields: ['birth_date', 'registration_date'], outputFormat: 'MM/dd/yyyy', includeTime: false }) ) } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener: FlatfileListener) { listener.use( validateDate({ sheetSlug: 'contacts', dateFields: ['birth_date', 'registration_date'], outputFormat: 'MM/dd/yyyy', includeTime: false }) ) } ``` ### Advanced Usage (Including Time) This example normalizes the 'event\_timestamp' field to a format that includes both date and time. ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener) { listener.use( validateDate({ sheetSlug: 'event_logs', dateFields: ['event_timestamp'], outputFormat: 'yyyy-MM-dd HH:mm:ss', includeTime: true }) ) } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener: FlatfileListener) { listener.use( validateDate({ sheetSlug: 'event_logs', dateFields: ['event_timestamp'], outputFormat: 'yyyy-MM-dd HH:mm:ss', includeTime: true }) ) } ``` ### Error Handling Example If a date string cannot be parsed, the plugin adds an error to the specific cell. For example, if you try to import a record with `due_date: 'not a real date'`, the plugin will not change the value but will attach an error message. ```javascript JavaScript theme={null} // Source Record: // { due_date: 'not a real date' } // After plugin runs, the record in Flatfile will have an error: // Field: 'due_date' // Value: 'not a real date' // Error Message: 'Unable to parse date string' import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener) { listener.use( validateDate({ sheetSlug: 'tasks', dateFields: ['due_date'], outputFormat: 'MM/dd/yyyy', includeTime: false }) ) } ``` ```typescript TypeScript theme={null} // Source Record: // { due_date: 'not a real date' } // After plugin runs, the record in Flatfile will have an error: // Field: 'due_date' // Value: 'not a real date' // Error Message: 'Unable to parse date string' import { FlatfileListener } from '@flatfile/listener' import { validateDate } from '@flatfile/plugin-validate-date' export default function (listener: FlatfileListener) { listener.use( validateDate({ sheetSlug: 'tasks', dateFields: ['due_date'], outputFormat: 'MM/dd/yyyy', includeTime: false }) ) } ``` ## Troubleshooting If dates are not being normalized as expected, consider the following: * **Check Configuration:** Verify that the `sheetSlug` and `dateFields` in the configuration correctly match your workbook setup. * **Validate Format String:** Ensure that the `outputFormat` string is a valid format recognized by `date-fns`. * **Locale Issues:** If a valid date is being marked with an error, it may be in a format not recognized by `chrono-node` or it may conflict with the hardcoded 'en-US' locale (e.g., a DD/MM/YYYY format might be misinterpreted as MM/DD/YYYY). 
## Notes ### Default Behavior The plugin hooks into the `commit:created` event. For each committed record, it checks the fields specified in `dateFields`. If a value exists, it attempts to parse it as a date. If successful, it reformats the date according to `outputFormat` and updates the record. If parsing fails, it adds an error message to the cell and leaves the original value unchanged. By default, it operates on all sheets unless a specific `sheetSlug` is provided. ### Special Considerations * The plugin relies on the `chrono-node` library for date parsing, which supports a wide variety of natural language and standard date formats. * The plugin hooks into the `commit:created` event, meaning it runs after a user submits their data and before it is finalized. * The `outputFormat` string must be compatible with the `date-fns` formatting library. ### Limitations * The `locale` configuration option is not currently implemented. The plugin defaults to using the 'en-US' locale for parsing, regardless of the value passed in the configuration. This may affect parsing of formats where the day and month order are ambiguous (e.g., '01/02/2023'). ### Error Handling The plugin's error handling is simple: if `chrono-node` cannot parse the date string from a given field, the function returns `null`. The plugin then calls `record.addError(field, 'Unable to parse date string')` to flag the cell with an error message in the Flatfile UI. The original, un-parsable value is kept in the cell. --- # Source: https://flatfile.com/docs/plugins/dedupe.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Deduplicate Sheet Records > A Flatfile plugin that provides functionality to find and remove duplicate records from a sheet based on specified field values or custom logic. The Dedupe plugin provides functionality to find and remove duplicate records from a sheet within Flatfile. It is designed to be used as a server-side listener that is triggered by a custom action configured on a specific sheet. The primary use case is data cleaning. For example, when importing a list of contacts, you can use this plugin to automatically remove entries that have the same email address. The plugin is flexible, allowing you to specify which field to check for duplicates (e.g., 'email', 'orderId'). You can configure it to keep either the first or the last occurrence of a duplicate record. For more complex deduplication logic, you can provide your own custom function. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-dedupe ``` ## Configuration & Parameters The `dedupePlugin` function takes two parameters: ### Parameters * **jobOperation** (string, required): The operation name that you define in a Sheet-level action. The plugin will only run when an action with this exact operation name is triggered. * **opts** (PluginOptions, required): An object containing the configuration options for the plugin. ### Configuration Options | Option | Type | Default | Description | | -------- | ----------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `on` | string | undefined | The field key (e.g., 'email') to check for duplicate values. Required when using the `keep` option. 
| | `keep` | 'first' \| 'last' | undefined | Determines which record to keep when a duplicate is found. 'first' keeps the first record encountered, 'last' keeps the last record encountered. | | `custom` | function | undefined | A custom function that receives a batch of records and a set of unique values. Should return an array of record IDs to be deleted. Overrides the `keep` option. | | `debug` | boolean | false | When set to `true`, the plugin may output additional logging for development and debugging purposes. | ### Default Behavior By default, without any configuration in the `opts` object, the plugin will not perform any deduplication. You must provide either the `keep` and `on` options or a `custom` function for the plugin to work. If the `keep` option is used, the `on` option becomes mandatory. ## Usage Examples ### Basic Usage ```javascript JavaScript theme={null} import { listener } from "./listener-instance"; // Your listener instance import { dedupePlugin } from "@flatfile/plugin-dedupe"; listener.use( dedupePlugin("dedupe-email", { on: "email", keep: "last", }) ); // You also need to configure the action on your Sheet /* const contactsSheet = { name: 'Contacts', // ... other fields actions: [ { operation: "dedupe-email", mode: "background", label: "Dedupe emails", description: "Remove duplicate emails" } ] } */ ``` ```typescript TypeScript theme={null} import { listener } from "./listener-instance"; // Your listener instance import { dedupePlugin } from "@flatfile/plugin-dedupe"; import { Flatfile } from "@flatfile/api"; listener.use( dedupePlugin("dedupe-email", { on: "email", keep: "last", }) ); // You also need to configure the action on your Sheet /* const contactsSheet: Flatfile.SheetConfig = { name: 'Contacts', // ... other fields actions: [ { operation: "dedupe-email", mode: "background", label: "Dedupe emails", description: "Remove duplicate emails" } ] } */ ``` This example configures the plugin to trigger when a sheet action with the operation "dedupe-email" is clicked. It will find duplicates in the 'email' field and keep the last record found, deleting any previous ones. ### Custom Deduplication Logic ```javascript JavaScript theme={null} import { listener } from "./listener-instance"; // Your listener instance import { dedupePlugin } from "@flatfile/plugin-dedupe"; listener.use( dedupePlugin("dedupe-email", { custom: (records) => { const uniques = new Set(); const toDelete = []; records.forEach(record => { const emailValue = record.values["email"]?.value; if (emailValue) { if (uniques.has(emailValue)) { toDelete.push(record.id); } else { uniques.add(emailValue); } } }); return toDelete; }, }) ); ``` ```typescript TypeScript theme={null} import { listener } from "./listener-instance"; // Your listener instance import { dedupePlugin } from "@flatfile/plugin-dedupe"; import { Flatfile } from "@flatfile/api"; listener.use( dedupePlugin("dedupe-email", { custom: (records: Flatfile.RecordsWithLinks) => { const uniques = new Set(); const toDelete: string[] = []; records.forEach(record => { const emailValue = record.values["email"]?.value; if (emailValue) { if (uniques.has(emailValue)) { toDelete.push(record.id); } else { uniques.add(emailValue); } } }); return toDelete; }, }) ); ``` This example shows how to use a custom function for more complex deduplication logic. The custom function identifies records to delete based on the 'email' field and returns their IDs. 
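The same mechanism can implement a "keep last" strategy explicitly, which the Notes below recommend for large datasets: track the most recent record ID seen for each value and delete the earlier ones. A minimal sketch of that approach within a batch:

```javascript theme={null}
import { dedupePlugin } from "@flatfile/plugin-dedupe";

listener.use(
  dedupePlugin("dedupe-email-keep-last", {
    custom: (records) => {
      // Map each email value to the ID of the most recent record seen with it.
      const lastSeen = new Map();
      const toDelete = [];
      records.forEach((record) => {
        const email = record.values["email"]?.value;
        if (!email) return;
        if (lastSeen.has(email)) {
          // A newer record with this email exists; delete the older one.
          toDelete.push(lastSeen.get(email));
        }
        lastSeen.set(email, record.id);
      });
      return toDelete;
    },
  })
);
```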
### Keep First Record

```javascript JavaScript theme={null}
import { dedupePlugin } from "@flatfile/plugin-dedupe";

listener.use(
  dedupePlugin("dedupe:contacts-email", {
    on: "email",
    keep: "first",
  })
);
```

```typescript TypeScript theme={null}
import { dedupePlugin } from "@flatfile/plugin-dedupe";

listener.use(
  dedupePlugin("dedupe:contacts-email", {
    on: "email",
    keep: "first",
  })
);
```

This example keeps the first record encountered for each unique email value and deletes subsequent duplicates.

## Troubleshooting

### Common Issues

**Plugin Not Triggering**

* Check for a mismatch between the `jobOperation` string in your listener code and the `operation` value in your Sheet configuration's action. They must be identical.

**Incorrect Field Error**

* Ensure the field key passed to the `on` option exists in your Sheet configuration and is spelled correctly.

**No Duplicates Removed**

* Verify your data to ensure duplicates actually exist for the specified `on` field.
* If using a `custom` function, add logging to debug its logic.

### Error Scenarios

The plugin will throw descriptive errors for common misconfigurations:

* **Missing `on` option**: ``Error: `on` is required when `keep` is first``
* **Field not found**: `Error: Field "non_existent_field" not found`
* **Invalid context**: `Error: Dedupe must be called from a sheet-level action`

## Notes

### Requirements and Limitations

* **Server-Side Requirement**: This plugin must be deployed in a server-side listener. It is not intended for client-side use.
* **Sheet Action Requirement**: The plugin is triggered by a job. To trigger this job, you must configure a Sheet-level action in your Sheet configuration. The `operation` property of this action must exactly match the `jobOperation` string passed to the `dedupePlugin` function.
* **Large Dataset Limitation**: The `keep: 'last'` option may not function as expected on very large datasets where duplicate records are spread across different pages of data. The `keep: 'first'` option is generally more reliable for large datasets as it correctly tracks unique values across all pages. For a reliable "keep last" implementation on large datasets, a `custom` function should be used.

### Error Handling

The plugin is wrapped in the `jobHandler` utility, which provides standardized job management. Any error thrown during the dedupe function's execution will be caught, and the job will be marked as 'failed' with the corresponding error message. The plugin also performs its own configuration checks and will throw descriptive errors for common misconfigurations.

---

# Source: https://flatfile.com/docs/plugins/delimited-zip.md

> ## Documentation Index
> Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Delimited File Zip Exporter

> Export data from all sheets within a Flatfile Workbook into delimited text files and compress them into a single ZIP archive for download.

This plugin is designed to be used in a server-side Flatfile listener. Its primary purpose is to export data from all sheets within a Flatfile Workbook into delimited text files (such as CSV or TSV). After generating a file for each sheet, it compresses all of them into a single ZIP archive. This ZIP file is then uploaded back into the Flatfile space, making it available for download.
This is useful for users who need to download all their processed data from a workbook in a portable, compressed format for use in other systems or for archival purposes. The plugin is triggered by a `job:ready` event. ## Installation Install the plugin via npm: ```bash theme={null} npm install @flatfile/plugin-export-delimited-zip ``` ## Configuration & Parameters The plugin is configured by passing an options object to the `exportDelimitedZip` function. | Parameter | Type | Default | Description | | --------------- | --------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------- | | `job` | `string` | `'downloadDelimited'` | The job name that will trigger the export process. The listener will be configured to listen for `workbook:${job}`. | | `delimiter` | `string` | `','` | The character to use as a delimiter to separate values in the output files. For example, use ',' for CSV or '\t' for TSV. | | `fileExtension` | `string` | `'csv'` | The file extension to use for the generated delimited files (e.g., 'csv', 'txt', 'tsv'). | | `debug` | `boolean` | `false` | When set to true, the plugin will print detailed logs to the console during its execution, which is useful for troubleshooting. | ### Default Behavior By default, the plugin listens for a job named `workbook:downloadDelimited`. When triggered, it will process all sheets in the workbook, convert them to CSV files (using a comma delimiter), zip them up, and upload the final archive. Debug logging is disabled. ## Usage Examples ```javascript theme={null} import { exportDelimitedZip } from '@flatfile/plugin-export-delimited-zip' export default function (listener) { // Using default options for a job named 'downloadDelimited' // that exports to .csv files listener.use(exportDelimitedZip({ job: 'downloadDelimited', delimiter: ',', fileExtension: 'csv' })) } ``` ```typescript theme={null} import type { FlatfileListener } from '@flatfile/listener' import { exportDelimitedZip } from '@flatfile/plugin-export-delimited-zip' export default function (listener: FlatfileListener) { // Using default options for a job named 'downloadDelimited' // that exports to .csv files listener.use(exportDelimitedZip({ job: 'downloadDelimited', delimiter: ',', fileExtension: 'csv' })) } ``` ### Custom Configuration ```javascript theme={null} import { exportDelimitedZip } from '@flatfile/plugin-export-delimited-zip' export default function (listener) { // Custom configuration to create tab-separated files (.tsv) // triggered by a job named 'export-workbook-tsv' // with debug logging enabled. listener.use(exportDelimitedZip({ job: 'export-workbook-tsv', delimiter: '\t', fileExtension: 'tsv', debug: true })) } ``` ```typescript theme={null} import type { FlatfileListener } from '@flatfile/listener' import { exportDelimitedZip } from '@flatfile/plugin-export-delimited-zip' export default function (listener: FlatfileListener) { // Custom configuration to create tab-separated files (.tsv) // triggered by a job named 'export-workbook-tsv' // with debug logging enabled. listener.use(exportDelimitedZip({ job: 'export-workbook-tsv', delimiter: '\t', fileExtension: 'tsv', debug: true })) } ``` ## API Reference ### exportDelimitedZip(options) This function registers a listener plugin that handles the process of exporting workbook data to a compressed ZIP file. It sets up a job handler for a `job:ready` event. 
When the specified job is executed, the plugin fetches all sheets in the workbook, streams their records, and writes them to local temporary files using the specified delimiter. These files are then added to a ZIP archive, which is uploaded to the Flatfile space. Finally, the temporary files and directory are cleaned up. **Parameters:** * `options` (PluginOptions) - An object containing configuration for the plugin * `job` (string) - The job name to listen for * `delimiter` (string) - The delimiter character for the output files * `fileExtension` (string) - The file extension for the output files * `debug` (boolean, optional) - An optional flag to enable verbose console logging **Return Value:** Returns a `FlatfileListener` plugin instance that can be passed to `listener.use()`. The job itself, when completed successfully, returns an `outcome` object to the Flatfile UI containing a message and a link to the generated ZIP file. ## Troubleshooting The primary method for troubleshooting is to enable the `debug: true` configuration option. This will output detailed step-by-step logs to the console, including retrieved sheets, file paths, record counts, and any caught errors. This provides visibility into where the process might be failing. ### Error Handling If any part of the process fails (e.g., reading sheets, writing temporary files, zipping, or uploading), the function will catch the error and fail the job with a generic message: "This job failed probably because it couldn't write to the \[EXTENSION] files, compress them into a ZIP file, or upload it.". To diagnose the specific cause of failure, set the `debug` option to `true` to see detailed error logs in the console where the listener is running. The core logic is wrapped in a single `try...catch` block. If an error occurs at any stage, it is caught, and the job is marked as failed with a general error message. Specific warnings are logged to the console if the cleanup of temporary files fails, but these do not cause the job to fail. ## Notes ### Requirements and Limitations * **Server-Side Execution**: This plugin must be deployed in a server-side listener environment (e.g., Node.js) as it requires access to the file system (`fs`) to create temporary files and directories. * **Temporary Files**: The plugin writes temporary delimited files and a temporary ZIP file to the operating system's temporary directory (`os.tmpdir()`). It attempts to clean these files up after the upload is complete, but in case of an unhandled crash, temporary files might be left behind. * **File Name Sanitization**: The plugin sanitizes both the workbook name and sheet names to create valid file names. It removes special characters (`[<>:"/\\|?*]`) and replaces spaces with underscores. * **Sheet Name Length**: Sheet names are trimmed to a maximum of 31 characters after sanitization to avoid issues with file system or ZIP format limitations. * **Dependencies**: The plugin relies on external libraries `adm-zip` for creating ZIP archives and `csv-stringify` for generating the delimited file content. --- # Source: https://flatfile.com/docs/plugins/delimiter-extractor.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. 
# Delimiter Extractor Plugin

> Parse text files with custom delimiters and automatically extract structured data for import into Flatfile

The Delimiter Extractor plugin is designed to parse text files that use non-standard delimiters to separate values. Its primary purpose is to automatically detect and extract structured data from these files when they are uploaded to Flatfile. It supports a variety of single-character delimiters such as `;`, `:`, `~`, `^`, and `#`.

This plugin is useful in scenarios where data is provided in custom formats that are not natively handled by the Flatfile platform's standard CSV, TSV, or PSV parsers. It operates within a server-side listener, triggering on the `file:created` event to process the file, identify headers, and structure the data into records for import.

## Installation

Install the plugin using npm:

```bash theme={null}
npm install @flatfile/plugin-delimiter-extractor
```

## Configuration & Parameters

### Required Parameters

* **fileExt** (string): The file extension (e.g., ".txt", ".dat") that the plugin should listen for. This is the first argument to the `DelimiterExtractor` function.
* **delimiter** (string): The character used to separate values in the file. Supported delimiters are `;`, `:`, `~`, `^`, `#`.

### Optional Parameters

* **dynamicTyping** (boolean, default `false`): If set to `true`, the plugin will attempt to convert numeric and boolean strings into their corresponding types. For example, "123" becomes `123` and "true" becomes `true`.
* **skipEmptyLines** (boolean | 'greedy', default `false`): Controls how empty lines in the file are handled:
  * `true`: Skips lines that are completely empty
  * `'greedy'`: Skips lines that contain only whitespace characters
  * `false`: Includes all lines, even empty ones
* **transform** (function): A function that is applied to each individual cell value during parsing. The return value of the function will replace the original value. This is applied before `dynamicTyping`.
* **chunkSize** (number, default `10000`): The number of records to process in each batch or chunk when inserting data into Flatfile.
* **parallel** (number, default `1`): The number of chunks to process concurrently.
* **headerDetectionOptions** (object): An advanced configuration object to control the header detection strategy. Allows for specifying explicit headers, looking for headers in specific rows, or using different detection algorithms. The default uses the 'default' algorithm, which selects the row with the most non-empty cells within the first 10 rows as the header.
* **guessDelimiters** (string[]): An array of delimiter characters to try if a specific `delimiter` is not provided. The parser will use the first one that successfully parses the data.
* **debug** (boolean, default `false`): Enables debug logging.
## Usage Examples

### Basic Usage

Configure the listener to use the plugin for any `.txt` file, specifying that the data is separated by a colon:

```javascript JavaScript theme={null}
import { DelimiterExtractor } from "@flatfile/plugin-delimiter-extractor";

export default function (listener) {
  listener.use(DelimiterExtractor(".txt", { delimiter: ":" }));
}
```

```typescript TypeScript theme={null}
import type { FlatfileListener } from "@flatfile/listener";
import { DelimiterExtractor } from "@flatfile/plugin-delimiter-extractor";

export default function (listener: FlatfileListener) {
  listener.use(DelimiterExtractor(".txt", { delimiter: ":" }));
}
```

### Advanced Configuration

This example shows a more detailed configuration for `.data` files with type conversion, empty line handling, and value transformation:

```javascript JavaScript theme={null}
import { DelimiterExtractor } from "@flatfile/plugin-delimiter-extractor";

const options = {
  delimiter: "#",
  dynamicTyping: true,
  skipEmptyLines: 'greedy',
  transform: (value) => {
    if (typeof value === 'string') {
      return value.toUpperCase();
    }
    return value;
  },
};

export default function (listener) {
  listener.use(DelimiterExtractor(".data", options));
}
```

```typescript TypeScript theme={null}
import type { FlatfileListener } from "@flatfile/listener";
import { DelimiterExtractor } from "@flatfile/plugin-delimiter-extractor";

const options = {
  delimiter: "#",
  dynamicTyping: true,
  skipEmptyLines: 'greedy' as const,
  transform: (value: any) => {
    if (typeof value === 'string') {
      return value.toUpperCase();
    }
    return value;
  },
};

export default function (listener: FlatfileListener) {
  listener.use(DelimiterExtractor(".data", options));
}
```

### Custom Header Detection

This example demonstrates how to use advanced header detection options to explicitly define headers:

```javascript JavaScript theme={null}
import { DelimiterExtractor } from "@flatfile/plugin-delimiter-extractor";

const advancedOptions = {
  delimiter: "~",
  headerDetectionOptions: {
    algorithm: 'explicitHeaders',
    headers: ['product_id', 'product_name', 'quantity', 'price'],
    skip: 1 // Skip the first row in the file
  }
};

export default function (listener) {
  listener.use(DelimiterExtractor(".inv", advancedOptions));
}
```

```typescript TypeScript theme={null}
import type { FlatfileListener } from "@flatfile/listener";
import { DelimiterExtractor } from "@flatfile/plugin-delimiter-extractor";

const advancedOptions = {
  delimiter: "~",
  headerDetectionOptions: {
    algorithm: 'explicitHeaders' as const,
    headers: ['product_id', 'product_name', 'quantity', 'price'],
    skip: 1 // Skip the first row in the file
  }
};

export default function (listener: FlatfileListener) {
  listener.use(DelimiterExtractor(".inv", advancedOptions));
}
```

### Direct Buffer Parsing

For advanced use cases where you need to parse a buffer directly:

```javascript JavaScript theme={null}
import * as fs from 'fs';
import { delimiterParser } from "@flatfile/plugin-delimiter-extractor";

async function parseLocalFile() {
  const fileBuffer = fs.readFileSync('my-data.txt');
  const options = { delimiter: '|', dynamicTyping: true };
  const workbookData = await delimiterParser(fileBuffer, options);
  console.log(workbookData.Sheet1.headers);
  console.log(workbookData.Sheet1.data[0]);
}

parseLocalFile();
```

```typescript TypeScript theme={null}
import * as fs from 'fs';
import { delimiterParser } from "@flatfile/plugin-delimiter-extractor";

async function parseLocalFile(): Promise<void> {
  const fileBuffer = fs.readFileSync('my-data.txt');
  const options = { delimiter: '|', dynamicTyping: true };
  const workbookData = await delimiterParser(fileBuffer, options);
  console.log(workbookData.Sheet1.headers);
console.log(workbookData.Sheet1.data[0]); } parseLocalFile(); ``` ## Troubleshooting ### No Data Appears After Upload If a file is uploaded but no data appears, check the following: 1. **File Extension**: Ensure the file extension matches the one configured in `DelimiterExtractor(fileExt, ...)` 2. **Delimiter**: Verify that the `delimiter` option matches the actual delimiter used in the file 3. **Empty Files**: If the file is empty or contains no parsable data, the plugin will log "No data found in the file" to the console and produce no records ### Unsupported File Types Error ```javascript JavaScript theme={null} try { // This will throw an error const csvExtractor = DelimiterExtractor(".csv", { delimiter: "," }); } catch (e) { console.error(e.message); // -> ".csv is a native file type and not supported by the delimiter extractor." } ``` ```typescript TypeScript theme={null} try { // This will throw an error const csvExtractor = DelimiterExtractor(".csv", { delimiter: "," }); } catch (e: any) { console.error(e.message); // -> ".csv is a native file type and not supported by the delimiter extractor." } ``` ## Notes ### Limitations * The plugin explicitly does not support file types that are natively handled by Flatfile: `.csv` (comma-separated), `.tsv` (tab-separated), and `.psv` (pipe-separated) * The list of supported delimiters is fixed to: `;`, `:`, `~`, `^`, `#` * This plugin is intended to be run in a server-side listener environment within the Flatfile Platform ### Error Handling * The main `DelimiterExtractor` function includes a guard clause that throws an `Error` if an unsupported native file type is provided * The internal parsing function uses try-catch blocks to handle parsing errors, which are logged to the console and re-thrown, causing the associated Flatfile job to fail ### Default Behavior * By default, the plugin does not perform type conversion (`dynamicTyping: false`) * Empty lines are included in the output unless explicitly configured otherwise (`skipEmptyLines: false`) * The plugin processes 10,000 records per chunk with no parallel processing (`chunkSize: 10000`, `parallel: 1`) * Header detection uses the 'default' algorithm, selecting the row with the most non-empty cells within the first 10 rows --- # Source: https://flatfile.com/docs/core-concepts/documents.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Documents > Standalone webpages within Flatfile Spaces for guidance and dynamic content Documents are standalone webpages for your Flatfile [Spaces](/core-concepts/spaces). They can be rendered from [Markdown syntax](https://www.markdownguide.org/basic-syntax/). Often used for getting started guides, Documents become extremely powerful with dynamically generated content that stays updated as Events occur. Flatfile also allows you to use HTML tags in your Markdown-formatted text. This is helpful if you prefer certain HTML tags rather than Markdown syntax. Links in documents (both Markdown and HTML) automatically open in a new tab to ensure users don't navigate away from the Flatfile interface. ## Key Features **A note on Documents:** While Documents themselves can be created and updated [dynamically](/core-concepts/documents#dynamic-content), the content inside of a document should be considered to be *static* - that is, you cannot use documents to host interactive elements or single-page webforms. 
For that sort of functionality, we recommend using [Actions](/core-concepts/actions) to trigger a [Listener](/core-concepts/listeners) that performs the desired work.

### Markdown-Based Content

Documents support GitHub-flavored Markdown, allowing you to create rich, formatted content with headers, lists, code blocks, and more. You can also use HTML tags within your Markdown for additional formatting flexibility.

### Dynamic Content

Documents can be created and updated programmatically in response to Events, enabling dynamic content that reflects the current state of your Space or data processing workflow.

### Document Actions

Add interactive buttons to your Documents that trigger custom operations. [Actions](/core-concepts/actions) appear in the top right corner and can be configured with different modes, confirmations, and tooltips.

### Embedded Blocks

Documents support embedding interactive data blocks (Workbooks, Sheets, and Diffs) directly within the content. See the [Adding Blocks to Documents](#adding-blocks-to-documents) section for detailed implementation.

## Create a Document

You can create Documents upon Space creation using the [Space Configure Plugin](/plugins/space-configure), or dynamically in a [Listener](/core-concepts/listeners) using the API:

```javascript theme={null}
import api from "@flatfile/api";

export default function flatfileEventListener(listener) {
  listener.on("file:created", async ({ context: { spaceId, fileId } }) => {
    const fileName = (await api.files.get(fileId)).data.name;

    const bodyText =
      "# Welcome\n" +
      "### Say hello to your first customer Space in the new Flatfile!\n" +
      "Let's begin by first getting acquainted with what you're seeing in your Space initially.\n" +
      "---\n" +
      `Your uploaded file, ${fileName}, is located in the Files area.`;

    const doc = await api.documents.create(spaceId, {
      title: "Getting Started",
      body: bodyText,
    });
  });
}
```

This Document will now appear in the sidebar of your Space. Learn how to [customize the guest sidebar](/guides/customize-guest-sidebar) for different user types.

In this example, we create a Document when a file is uploaded, but you can also create Documents in response to any other Event. [Read more](/reference/events) about the different Events you can respond to.

## Document Actions

Actions are optional and allow you to run custom operations in response to a user-triggered event from within a Document. Define Actions on a Document using the `actions` parameter when a document is created:

```javascript theme={null}
import api from "@flatfile/api";

export default function flatfileEventListener(listener) {
  listener.on("file:created", async ({ context: { spaceId, fileId } }) => {
    const fileName = (await api.files.get(fileId)).data.name;

    const bodyText =
      "# Welcome\n" +
      "### Say hello to your first customer Space in the new Flatfile!\n" +
      "Let's begin by first getting acquainted with what you're seeing in your Space initially.\n" +
      "---\n" +
      `Your uploaded file, ${fileName}, is located in the Files area.`;

    const doc = await api.documents.create(spaceId, {
      title: "Getting Started",
      body: bodyText,
      actions: [
        {
          label: "Submit",
          operation: "contacts:submit",
          description: "Would you like to submit the contact data?",
          tooltip: "Submit the contact data",
          mode: "foreground",
          primary: true,
          confirm: true,
        },
      ],
    });
  });
}
```

Then configure your listener to handle this Action, and define what should happen in response. Read more about Actions and how to handle them in our [Using Actions guide](/guides/using-actions).
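For example, a minimal handler for the Action above could look like the following sketch. (The `document:contacts:submit` job name assumes Document-level Actions surface under the `document` domain, following the usual `domain:operation` pattern; adjust it to match the job your Action actually emits.)

```javascript theme={null}
import api from "@flatfile/api";

export default function flatfileEventListener(listener) {
  // Handle the "contacts:submit" operation defined on the Document above.
  // The "document" domain in the job name is an assumption; verify it
  // against the jobs your Action produces.
  listener.on("job:ready", { job: "document:contacts:submit" }, async (event) => {
    const { jobId } = event.context;
    await api.jobs.ack(jobId, { info: "Submitting contacts...", progress: 10 });

    // ...perform the actual submission work here...

    await api.jobs.complete(jobId, {
      outcome: { message: "Contacts submitted!", acknowledge: true },
    });
  });
}
```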
Actions appear as buttons in the top right corner of your Document.

## Document treatments

Documents have an optional `treatments` parameter which takes an array of treatments for your Document. Treatments can be used to categorize your Document. Certain treatments will cause your Document to look or behave differently.

### Ephemeral documents

Giving your Document a treatment of `"ephemeral"` will cause the Document to appear as a full-screen takeover, and it will not appear in the sidebar of your Space like other Documents. You can use ephemeral Documents to create a more focused experience for your end users.

```javascript theme={null}
const ephemeralDoc = await api.documents.create(spaceId, {
  title: "Getting started",
  body: "# Welcome ...",
  treatments: ["ephemeral"],
});
```

Currently, `"ephemeral"` is the only treatment that will change the behavior of your Document.

## Adding Blocks to Documents

Blocks are dynamic, embedded entities that you can use to display data inside a Document. You can add a Block to a Document using the `<embed>` HTML entity in your markdown and specifying which Block type you want to show using the `type` attribute on the entity. Three Block types are currently supported: Embedded Workbook, Embedded Sheet, and Embedded Diff.

### Embedded Workbook

Use this Block to render an entire Workbook with all its Sheets inside a Document, providing users with tabbed navigation between sheets. You can embed a Workbook by passing a workbook ID and optional name. You can also control whether the embedded Workbook is expanded when the document loads and whether to show the header.

```javascript theme={null}
const doc = await api.documents.create(spaceId, {
  title: "Getting started",
  body:
    "# Welcome\n" +
    "\n" +
    "Here is an embedded Workbook:\n" +
    "\n" +
    "<embed type='embedded-workbook' workbookId='us_wb_your_workbook_id' name='My Workbook' defaultExpanded='true'>\n" +
    "\n" +
    "Here is another embedded Workbook without header:\n" +
    "\n" +
    "<embed type='embedded-workbook' workbookId='us_wb_your_workbook_id' showHeader='false'>",
});
```

**Properties:**

* `workbookId` (required): The ID of the workbook to embed
* `name` (optional): Display name for the embedded workbook
* `defaultExpanded` (optional): Whether the workbook is expanded when the document loads (defaults to false)
* `showHeader` (optional): Whether to show the workbook header (defaults to true). When false, the workbook is automatically expanded

### Embedded Sheet

Use this Block to render a Sheet along with all its data inside of a Document. You can embed a Sheet into your Document by passing a sheet ID, workbook ID, and name. You can also specify whether the embedded Sheet is expanded or collapsed when the document is loaded, and whether to show the header. You can include as many embedded Sheets in your Document as you like, but end users will only be able to expand a maximum of 10 embedded Sheets at once.

```javascript theme={null}
const doc = await api.documents.create(spaceId, {
  title: "Getting started",
  body:
    "# Welcome\n" +
    "\n" +
    "Here is an embedded Sheet:\n" +
    "\n" +
    "<embed type='embedded-sheet' sheetId='us_sh_your_sheet_id' workbookId='us_wb_your_workbook_id' name='Contacts' defaultExpanded='true'>\n" +
    "\n" +
    "Here is another embedded Sheet without header:\n" +
    "\n" +
    "<embed type='embedded-sheet' sheetId='us_sh_your_sheet_id' workbookId='us_wb_your_workbook_id' name='Contacts' showHeader='false'>",
});
```

**Properties:**

* `sheetId` (required): The ID of the sheet to embed
* `workbookId` (required): The ID of the workbook containing the sheet
* `name` (optional): Display name for the embedded sheet
* `defaultExpanded` (optional): Whether the sheet is expanded when the document loads (defaults to false)
* `showHeader` (optional): Whether to show the sheet header (defaults to true). When false, the sheet is automatically expanded
### Embedded Diff

Use this Block to show a side-by-side comparison of the data in a Sheet now versus at a previous point in time as captured by a Snapshot. Pass a Sheet ID, Workbook ID, and Snapshot ID. You can optionally pass a `direction` attribute which specifies whether the changes are displayed with the Snapshot as the end state (`sheet_to_snapshot`) or the Sheet as the end state (`snapshot_to_sheet`). The default value for `direction` is `sheet_to_snapshot`.

Use `direction="sheet_to_snapshot"` if you want to show changes that have been made since the time the Snapshot was taken, i.e. to review past changes. Use `direction="snapshot_to_sheet"` if you want to preview the changes that would occur if you were to revert your Sheet back to the state it was in when the Snapshot was taken.

```javascript theme={null}
const doc = await api.documents.create(spaceId, {
  title: "Getting started",
  body:
    "# Welcome\n" +
    "\n" +
    "Here is an embedded Diff:\n" +
    "\n" +
    "<embed type='embedded-diff' sheetId='us_sh_your_sheet_id' workbookId='us_wb_your_workbook_id' snapshotId='us_ss_your_snapshot_id' direction='sheet_to_snapshot'>",
});
```

**Properties:**

* `sheetId` (required): The ID of the sheet to compare
* `workbookId` (required): The ID of the workbook containing the sheet
* `snapshotId` (required): The ID of the snapshot to compare against
* `direction` (optional): Either `sheet_to_snapshot` or `snapshot_to_sheet` (defaults to `sheet_to_snapshot`)

---

## Example: Excel Export with the Export Workbook Plugin

Register the Export Workbook Plugin in your listener. A configuration along these lines transforms camelCase field keys into human-readable column headers (option names may vary by plugin version; see the plugin documentation for the exact list):

```javascript theme={null}
import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook";

export default function (listener) {
  listener.use(
    exportWorkbookPlugin({
      // Convert camelCase field keys (e.g., "firstName") into
      // Title Case column headers (e.g., "First Name").
      // The option name here follows the plugin's column-name transform
      // hook; confirm it against the plugin docs for your version.
      columnNameTransformer: (columnName) => {
        return columnName.replace(/([A-Z])/g, ' $1').replace(/^./, str => str.toUpperCase());
      },
    })
  );
}
```

### Workbook Configuration

Add a download action to your workbook to trigger the export:

```javascript theme={null}
{
  name: "Customer Data",
  actions: [
    {
      operation: "downloadWorkbook",
      mode: "foreground",
      label: "Download Excel File",
      description: "Export all data as Excel spreadsheet",
      primary: true
    }
  ],
  sheets: [
    // your sheet definitions
  ]
}
```

When users click the download action, the plugin generates an Excel file containing all workbook data and either triggers an automatic download or directs users to the Files page to download manually. For complete configuration options and advanced usage, see the [Export Workbook Plugin documentation](/plugins/export-workbook).

## Example: Custom XML File Generation

For custom file formats, you can build files programmatically using any TypeScript library. This example demonstrates creating XML files using the `xml2js` library to generate structured data exports.
### Installation

```bash theme={null}
npm install xml2js
npm install @types/xml2js --save-dev
```

### Implementation

```javascript theme={null}
import api from "@flatfile/api";
import { Builder } from 'xml2js';
import fs from 'fs';

export default function (listener) {
  listener.on(
    "job:ready",
    { job: "workbook:downloadXML" },
    async ({ context: { jobId, workbookId } }) => {
      try {
        // Acknowledge the job
        await api.jobs.ack(jobId, {
          info: "Starting XML generation...",
          progress: 10,
        });

        // Get workbook and find the target sheet
        const workbook = await api.workbooks.get(workbookId);
        const sheet = workbook?.data.sheets?.find((s) => s.slug === "customers");
        const records = await api.records.get(sheet?.id || "");

        // Update progress
        await api.jobs.update(jobId, {
          info: "Processing records...",
          progress: 30,
        });

        // Initialize XML builder
        const builder = new Builder({
          xmldec: { version: '1.0', encoding: 'UTF-8' }
        });

        // Transform records to structured data
        const customers = records.data.records.map(record => {
          const values = record.values;
          return {
            id: values.id?.value || '',
            name: values.name?.value || '',
            email: values.email?.value || '',
            phone: values.phone?.value || '',
            address: {
              street: values.street?.value || '',
              city: values.city?.value || '',
              state: values.state?.value || '',
              zip: values.zip?.value || ''
            }
          };
        });

        // Create XML structure
        const xmlObj = {
          export: {
            $: {
              generated: new Date().toISOString(),
              recordCount: customers.length
            },
            customers: {
              customer: customers
            }
          }
        };

        // Generate XML string
        const xml = builder.buildObject(xmlObj);

        // Update progress
        await api.jobs.update(jobId, {
          info: "Creating XML file...",
          progress: 70,
        });

        // Write XML to temporary file
        const fileName = `customer_export_${new Date().toISOString().split('T')[0]}.xml`;
        fs.writeFileSync(fileName, xml);

        // Upload file to Flatfile
        const file = fs.createReadStream(fileName);
        const fileUpload = await api.files.upload(file, {
          spaceId: workbook.data.spaceId,
          environmentId: workbook.data.environmentId,
        });

        // Complete job with download link
        await api.jobs.complete(jobId, {
          outcome: {
            message: "XML file generated successfully",
            next: {
              type: "files",
              label: "Download XML",
              files: [{ fileId: fileUpload?.data?.id }],
            },
          },
        });

        // Clean up temporary file
        fs.unlinkSync(fileName);
      } catch (error) {
        await api.jobs.fail(jobId, {
          outcome: {
            message: `Failed to generate XML: ${error.message}`,
          },
        });
      }
    }
  );
}
```

### Workbook Configuration

Add an XML download action to trigger the export:

```javascript theme={null}
{
  name: "Customer Data",
  actions: [
    {
      operation: "downloadXML",
      mode: "foreground",
      label: "Download XML",
      description: "Export data as XML file",
      primary: false
    }
  ],
  sheets: [
    {
      name: "Customers",
      slug: "customers",
      fields: [
        { key: "id", type: "string", label: "Customer ID" },
        { key: "name", type: "string", label: "Name" },
        { key: "email", type: "string", label: "Email" },
        { key: "phone", type: "string", label: "Phone" },
        { key: "street", type: "string", label: "Street" },
        { key: "city", type: "string", label: "City" },
        { key: "state", type: "string", label: "State" },
        { key: "zip", type: "string", label: "ZIP Code" }
      ]
    }
  ]
}
```

This pattern can be adapted for any file format by substituting the appropriate library and transformation logic. The key steps remain the same: fetch data, transform it, generate the file, upload it to Flatfile, and provide a download link to users.
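For instance, a compact JSON variant of the same flow could look like this sketch (the `workbook:downloadJSON` job name is illustrative and would be paired with a matching `downloadJSON` action in your workbook configuration):

```javascript theme={null}
import api from "@flatfile/api";
import fs from "fs";

export default function (listener) {
  listener.on(
    "job:ready",
    { job: "workbook:downloadJSON" }, // illustrative job name
    async ({ context: { jobId, workbookId } }) => {
      try {
        await api.jobs.ack(jobId, { info: "Generating JSON...", progress: 10 });

        // Fetch data
        const workbook = await api.workbooks.get(workbookId);
        const sheet = workbook.data.sheets?.find((s) => s.slug === "customers");
        const records = await api.records.get(sheet?.id || "");

        // Transform it: flatten each record's cells into a plain object
        const rows = records.data.records.map((record) =>
          Object.fromEntries(
            Object.entries(record.values).map(([key, cell]) => [key, cell?.value ?? ""])
          )
        );

        // Generate the file
        const fileName = `customer_export_${Date.now()}.json`;
        fs.writeFileSync(fileName, JSON.stringify(rows, null, 2));

        // Upload it to Flatfile
        const fileUpload = await api.files.upload(fs.createReadStream(fileName), {
          spaceId: workbook.data.spaceId,
          environmentId: workbook.data.environmentId,
        });
        fs.unlinkSync(fileName);

        // Provide a download link
        await api.jobs.complete(jobId, {
          outcome: {
            message: "JSON file generated successfully",
            next: {
              type: "files",
              label: "Download JSON",
              files: [{ fileId: fileUpload.data.id }],
            },
          },
        });
      } catch (error) {
        await api.jobs.fail(jobId, {
          outcome: { message: `Failed to generate JSON: ${error.message}` },
        });
      }
    }
  );
}
```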
--- # Source: https://flatfile.com/docs/plugins/email.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Email Validation Plugin > Validate email addresses in Flatfile data imports with format checking, required field validation, and disposable domain blocking The Email Validation plugin for Flatfile provides a convenient way to validate email addresses in your data. It integrates into the Flatfile Listener as a record hook, automatically checking specified fields on each record. The plugin's core functionalities include validating the email format, checking for required values, and blocking emails from disposable or temporary domains. It is highly configurable, allowing you to specify which fields and sheets to target, provide a custom list of disposable domains, and customize the error messages shown to the user for different validation failures. This is useful for ensuring data quality for contact lists, user sign-ups, and any dataset containing email addresses. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-validate-email ``` ## Configuration & Parameters The plugin is configured with a single object passed to the `validateEmail` function. ### sheetSlug * **Type:** `string` * **Required:** No * **Default:** `'**'` * **Description:** The slug of the sheet to apply the validation to. The default value `'**'` means the plugin will run on all sheets in the workspace. ### emailFields * **Type:** `string[]` * **Required:** Yes * **Description:** An array of strings, where each string is the field key (or column name) that should be validated as an email. ### disposableDomains * **Type:** `string[]` * **Required:** No * **Default:** `[]` (empty array) * **Description:** A custom list of email domains that should be considered disposable and rejected. The comparison is case-insensitive. ### errorMessages * **Type:** `object` * **Required:** No * **Description:** An object to override the default error messages. The available keys are: * `required`: Message for a missing email value. Default: "Email is required" * `invalid`: Message for an improperly formatted email. Default: "Invalid email format" * `disposable`: Message for an email from a blocked domain. Default: "Disposable email addresses are not allowed" * `domain`: Not currently used by the plugin ## Usage Examples ### Basic Usage This example validates the 'email' field in all sheets with default error messages. ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateEmail } from '@flatfile/plugin-validate-email'; export default function (listener) { listener.use(validateEmail({ emailFields: ['email'] })); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateEmail } from '@flatfile/plugin-validate-email'; export default function (listener: FlatfileListener) { listener.use(validateEmail({ emailFields: ['email'] })); } ``` ### Configuration with Custom Messages This example validates two email fields, provides custom error messages, and applies the validation only to the 'contacts' sheet. 
```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateEmail } from '@flatfile/plugin-validate-email'; export default function (listener) { listener.use(validateEmail({ sheetSlug: 'contacts', emailFields: ['primary_email', 'secondary_email'], errorMessages: { required: 'Please enter an email.', invalid: 'The email you entered is not valid.', disposable: 'Temporary email addresses are not permitted.' } })); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateEmail } from '@flatfile/plugin-validate-email'; export default function (listener: FlatfileListener) { listener.use(validateEmail({ sheetSlug: 'contacts', emailFields: ['primary_email', 'secondary_email'], errorMessages: { required: 'Please enter an email.', invalid: 'The email you entered is not valid.', disposable: 'Temporary email addresses are not permitted.' } })); } ``` ### Advanced Usage with Disposable Domain Blocking This example demonstrates using a custom list of disposable domains to block. ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateEmail } from '@flatfile/plugin-validate-email'; export default function (listener) { const blockedDomains = ['mailinator.com', 'temp-mail.org', '10minutemail.com']; listener.use(validateEmail({ emailFields: ['email'], disposableDomains: blockedDomains, errorMessages: { disposable: 'This email provider is not allowed. Please use a permanent email address.' } })); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { validateEmail } from '@flatfile/plugin-validate-email'; export default function (listener: FlatfileListener) { const blockedDomains = ['mailinator.com', 'temp-mail.org', '10minutemail.com']; listener.use(validateEmail({ emailFields: ['email'], disposableDomains: blockedDomains, errorMessages: { disposable: 'This email provider is not allowed. Please use a permanent email address.' } })); } ``` ## API Reference ### validateEmail(config) A factory function that creates a Flatfile record hook. The hook validates specified fields on a record to ensure they are properly formatted and non-disposable email addresses. **Parameters:** * `config` (object): A configuration object with the following properties: * `sheetSlug` (string, optional): The slug of the sheet to target. Defaults to `'**'` to target all sheets. * `emailFields` (string\[], required): An array of field keys to validate. * `disposableDomains` (string\[], optional): A list of domains to reject. * `errorMessages` (object, optional): An object for custom error messages. **Return Value:** Returns a `FlatfileListener` instance configured with the record hook, which can be passed to `listener.use()`. ## Troubleshooting * **Validation not triggering:** Verify that the `sheetSlug` in the configuration correctly matches the slug of your target Sheet. * **Fields not being validated:** Ensure the strings in the `emailFields` array exactly match the field keys in your Sheet configuration. * **Plugin not working:** Confirm that the plugin is correctly registered with a Flatfile listener using `listener.use()`. ## Notes ### Default Behavior By default, the plugin applies to all sheets. If `errorMessages` are not provided, it uses its own internal messages for required, invalid, and disposable email errors. 
If `disposableDomains` is not provided, it will not perform any disposable domain checks, only format and presence validation.

### Special Considerations

* This plugin is a "Record Hook" type, meaning it runs on every record as it is processed.
* The validation logic uses a regular expression for format checking. While robust, it may not cover 100% of all edge-case email address formats defined in RFCs.
* The `domain` key within the `errorMessages` configuration object is defined in the type interface but is not currently used in the validation logic.
* The domain check for disposable emails is case-insensitive.

### Error Handling

* The plugin does not throw errors or stop the import process.
* Validation failures are handled by adding an error message to the specific field on the `FlatfileRecord` using the `record.addError()` method.
* This approach allows users to see all data issues inline within the Flatfile interface and correct them before finalizing the import.

---

# Source: https://flatfile.com/docs/core-concepts/environments.md

> ## Documentation Index
> Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Environments

> Use Environments for testing and authentication

Environments are isolated, self-contained domains that give you a safe place to create and test different configurations. By default, a development and a production environment are set up.

| isProd  | Name        | Description                                                                                  |
| ------- | ----------- | -------------------------------------------------------------------------------------------- |
| *false* | development | Use this default environment, and its associated test API keys, as you build with Flatfile.  |
| *true*  | production  | When you're ready to launch, create a new environment and swap out your keys.                |

The development environment does not count towards your paid credits.

## Listeners

[Listeners](/core-concepts/listeners) are functions you write that respond to Events by executing custom code. They enable all the powerful functionality in your Flatfile implementation: data transformations, validations, integrations, and workflows.

Each Listener, whether run locally or deployed as an [Agent](/core-concepts/listeners#agents), connects to a specific [Environment](/core-concepts/environments) using that Environment's ID and API key as environment variables. To switch between different Environments (or promote a development Environment to production), you simply update your environment variables with the corresponding Environment ID and API key.

By default, Listeners respond to Events from all [Apps](/core-concepts/apps) within an [Environment](/core-concepts/environments). You can use [Namespaces](/guides/namespaces-and-filters#namespaces) to partition your Listeners into isolated functions that only respond to Events from specific Apps.

## Creating an Environment

Your `publishableKey` and `secretKey` are specific to an environment, so to create a new Environment, you'll have to use a personal access token.

1. Open Settings
2. Click into Personal Tokens
3. Use the key pair there to create an access token:

```bash theme={null}
curl -X POST https://platform.flatfile.com/v1/auth -H 'Content-Type: application/json' -d '{"clientId":"1234-1234", "secret":"1234-1234"}'
```

4. The response will include an `accessToken`.
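For example, you can then pass that `accessToken` as a Bearer token to create the Environment through the API. A minimal sketch (the `/v1/environments` endpoint path is assumed here to match the auth call's base URL; confirm it against the API reference):

```bash theme={null}
# Create a new Environment with the accessToken from step 3.
# NOTE: the /v1/environments path is an assumption; verify it in the API reference.
curl -X POST https://platform.flatfile.com/v1/environments \
  -H 'Authorization: Bearer your_access_token' \
  -H 'Content-Type: application/json' \
  -d '{"name": "staging", "isProd": false}'
```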
You can present that `accessToken` as a Bearer token in place of the `secretKey`. [Or click here to create an environment in the dashboard](https://platform.flatfile.com/dashboard)

## Guest Authentication

Environments support two types of guest authentication:

1. `magic_link`: This method dispatches an email to your guests, which includes a magic link to facilitate login.
2. `shared_link`: This method transforms the Space URL into a public one, typically used in conjunction with embedded Flatfile.

### Additional Info

Should the `guestAuthentication` be left unspecified, both `magic_link` and `shared_link` types are enabled by default.

It's important to note that `guestAuthentication` settings can be applied at both Environment and Space levels. However, in case of any conflicting settings, the authentication type set at the Space level will take precedence over the Environment level setting. This flexibility enables customization based on specific needs, ensuring the right balance of accessibility and security.

## Secret and publishable keys

All Accounts have two key types for each environment. Learn when to use each type of key:

| Type            | ID                     | Description                                                                                                              |
| --------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| Secret key      | `sk_23ghsyuyshs7dcrty` | **On the server-side:** Store this securely in your server-side code. Don't expose this key in an application.            |
| Publishable key | `pk_23ghsyuyshs7dcert` | **On the client-side:** Can be publicly-accessible in your application's client-side code. Use when embedding Flatfile.   |

The publishable key only has permissions to create a Space.

---

# Source: https://flatfile.com/docs/reference/events.md

> ## Documentation Index
> Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt
> Use this file to discover all available pages before exploring further.

# Event Reference

> Complete reference for all Flatfile events, their payloads, and when they are triggered

Flatfile emits events throughout the data import lifecycle, allowing your applications to respond to user actions, system changes, and processing results. This reference documents all available events, their payloads, and when they are triggered.

## Event Structure

All Flatfile events follow a consistent structure. Optional fields may be included depending on the event's Domain ([workbook-level](#workbook-events) events, for instance, won't have a `sheetId`).

```typescript theme={null}
interface FlatfileEvent {
  id: string;
  topic: string;
  domain: string;
  context: {
    environmentId: string;
    spaceId?: string;
    workbookId?: string;
    sheetId?: string;
    jobId?: string;
    fileId?: string;
    [key: string]: any;
  };
  payload: any;
  attributes?: any;
  createdAt: string;
}
```

## Listening and Reacting to Events

To respond to these events, you'll need to create a [Listener](/core-concepts/listeners) that subscribes to the specific events your application needs to handle.

## Job Events

Job events are triggered when background tasks and operations change state.

### job:created

Triggered when a new job is first created. Some jobs will enter an optional planning state at this time. A job with 'immediate' set to true will skip the planning step and transition directly to 'ready.'
```typescript theme={null}
{
  domain: string      // event domain (e.g., "space", "workbook")
  operation: string   // operation name (e.g., "configure")
  job: string         // domain:operation format (e.g., "space:configure")
  status: string      // job status
  info?: string       // optional info message
  isPart: boolean     // whether this is a sub-job
  input?: any         // job input data
}
```

```typescript theme={null}
{
  accountId: string
  environmentId: string
  spaceId?: string
  jobId: string
  actorId: string
}
```

### job:ready

Triggered when a job is ready for execution by your listener. Either the job has a complete plan of work or the job is configured to not need a plan. This is the only event most job implementations will care about. Once a ready job is acknowledged by a listener, it transitions into an executing state.

```typescript theme={null}
{
  domain: string      // event domain (e.g., "space", "workbook")
  operation: string   // operation name (e.g., "configure")
  job: string         // domain:operation format (e.g., "space:configure")
  status: string      // job status
  info?: string       // optional info message
  isPart: boolean     // whether this is a sub-job
  input?: any         // job input data
}
```

```typescript theme={null}
{
  accountId: string
  environmentId: string
  spaceId?: string
  jobId: string
  actorId: string
}
```

```typescript theme={null}
listener.filter({ job: "*" }, (configure) => {
  configure.on("job:ready", async (event) => {
    const { jobId } = event.context
    // Handle any job that becomes ready
    await processJob(jobId)
  })
})
```

### job:scheduled

Triggered when a job is scheduled to run at a future time

```typescript theme={null}
{
  domain: string      // event domain (e.g., "space", "workbook")
  operation: string   // operation name (e.g., "configure")
  job: string         // domain:operation format (e.g., "space:configure")
  status: string      // job status (scheduled)
  info?: string       // optional info message
  isPart: boolean     // whether this is a sub-job
  input?: any         // job input data
}
```

```typescript theme={null}
{
  accountId: string
  environmentId: string
  spaceId?: string
  jobId: string
  actorId: string
}
```

### job:updated

Triggered when a job is updated. For example, when a listener updates the state or progress of the job. The event will emit many times as the listener incrementally completes work and updates the job.

```typescript theme={null}
{
  domain: string      // event domain (e.g., "space", "workbook")
  operation: string   // operation name (e.g., "configure")
  job: string         // domain:operation format (e.g., "space:configure")
  status: string      // job status
  info?: string       // optional info message
  isPart: boolean     // whether this is a sub-job
  input?: any         // job input data
}
```

```typescript theme={null}
{
  accountId: string
  environmentId: string
  spaceId?: string
  jobId: string
  actorId: string
}
```

### job:completed

Triggered when a job has completed successfully

```typescript theme={null}
{
  domain: string      // event domain (e.g., "space", "workbook")
  operation: string   // operation name (e.g., "configure")
  job: string         // domain:operation format (e.g., "space:configure")
  status: string      // job status
  info?: string       // optional info message
  isPart: boolean     // whether this is a sub-job
  input?: any         // job input data
}
```

```typescript theme={null}
{
  accountId: string
  environmentId: string
  spaceId?: string
  jobId: string
  actorId: string
}
```

### job:outcome-acknowledged

Triggered after the user has acknowledged that a job has completed or failed; use it to kick off follow-up workflow actions. Background jobs will skip this step.
```typescript theme={null} { domain: string // event domain (e.g., "space", "workbook") operation: string // operation name (e.g., "configure") job: string // domain:operation format (e.g., "space:configure") status: string // job status info?: string // optional info message isPart: boolean // whether this is a sub-job input?: any // job input data } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId?: string workbookId?: string sheetId?: string jobId: string actorId: string } ``` ### job:parts-completed Triggered when all parts of a multi-part job have completed processing ```typescript theme={null} { domain: string // event domain (e.g., "space", "workbook") operation: string // operation name (e.g., "configure") job: string // domain:operation format (e.g., "space:configure") status: string // job status (parts-completed) info?: string // optional info message isPart: boolean // whether this is a sub-job input?: any // job input data parts: Array<{ // completed parts information partId: string status: string completedAt: string }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId?: string jobId: string actorId: string } ``` ### job:failed Triggered when a job fails ```typescript theme={null} { domain: string // event domain (e.g., "space", "workbook") operation: string // operation name (e.g., "configure") job: string // domain:operation format (e.g., "space:configure") status: string // job status info?: string // optional info message isPart: boolean // whether this is a sub-job input?: any // job input data } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId?: string workbookId?: string sheetId?: string jobId: string actorId: string } ``` ### job:deleted Triggered when a job is deleted ```typescript theme={null} { accountId: string environmentId: string spaceId?: string jobId: string actorId: string } ``` ## Program Events Program events are triggered when mapping programs and transformations change state. ### program:created Triggered when a new mapping program is created ```typescript theme={null} {} ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ### program:updated Triggered when a mapping program is updated ```typescript theme={null} {} ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ### program:recomputing Triggered when a mapping program begins recomputing its transformations ```typescript theme={null} {} ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ### program:recomputed Triggered when a mapping program has finished recomputing its transformations ```typescript theme={null} {} ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ## File Events File events are triggered when files are uploaded, processed, or modified. ### file:created Triggered when a file upload begins or a new file is created ```typescript theme={null} { accountId: string environmentId: string spaceId: string fileId: string actorId: string } ``` ### file:updated Triggered when a file is updated. 
For example, when a file has been extracted into a workbook ```typescript theme={null} { accountId: string environmentId: string spaceId: string fileId: string actorId: string } ``` ```typescript theme={null} { status: string workbookId?: string } ``` ### file:deleted Triggered when a file is deleted ```typescript theme={null} { accountId: string environmentId: string spaceId: string fileId: string actorId: string } ``` ### file:expired Triggered when a file is expired ```typescript theme={null} { accountId: string environmentId: string spaceId: string fileId: string } ``` ## Record Events Record events are triggered when data records are created, updated, or deleted. ### records:created Triggered when new records are added to a sheet ```typescript theme={null} { sheetId: string recordIds: string[] recordCount: number } ``` ```typescript theme={null} { environmentId: string spaceId: string workbookId: string sheetId: string } ``` ### records:updated **A note on `commit:created` vs `records:updated`** You might think you want to listen to `records:updated` for data processing, but **[`commit:created`](#commit-events) is the recommended choice for most automation**. A **commit** is like a "git commit" but for data - a versioned snapshot of all changes that occurred together in a single operation. This provides complete transaction context and better performance when processing multiple record changes. Flatfile's own Plugins ([Record Hooks](/plugins/record-hook), [Constraints](/plugins/constraints), [Autocast](/plugins/autocast), etc.) all use `commit:created`. Reserve `records:updated` only for real-time per-record feedback scenarios. Triggered when existing records are modified ```typescript theme={null} { sheetId: string recordIds: string[] changes: Array<{ recordId: string fieldKey: string previousValue: any newValue: any }> } ``` ### records:deleted Triggered when records are deleted from a sheet ```typescript theme={null} { sheetId: string recordIds: string[] recordCount: number } ``` ## Sheet Events Sheet events are triggered when sheets are created, modified, or when sheet-level operations occur. ### sheet:calculation-updated Triggered when sheet calculations or formulas are updated ```typescript theme={null} { sheetId: string workbookId: string calculations: Array<{ field: string formula: string updated: boolean }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ### sheet:counts-updated Triggered when record counts for a sheet are updated ```typescript theme={null} { sheetId: string workbookId: string counts: { total: number valid: number error: number updated: number } } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ### sheet:created Triggered when a new sheet is created ```typescript theme={null} { sheetId: string workbookId: string name: string slug: string fieldCount: number } ``` ### sheet:updated Triggered when a sheet Blueprint or configuration is modified ```typescript theme={null} { sheetId: string workbookId: string changes: Array<{ type: "field_added" | "field_removed" | "field_updated" | "config_updated" details: any }> } ``` ### sheet:deleted Triggered when a sheet is deleted ```typescript theme={null} { sheetId: string workbookId: string name: string slug: string } ``` ## Workbook Events Workbook events are triggered for workbook-level operations and changes. 
### workbook:created Triggered when a new workbook is created ```typescript theme={null} { workbookId: string spaceId: string name: string namespace?: string sheetCount: number } ``` ### workbook:updated Triggered when a workbook is modified ```typescript theme={null} { workbookId: string spaceId: string changes: Array<{ type: "sheet_added" | "sheet_removed" | "config_updated" details: any }> } ``` ### workbook:deleted Triggered when a workbook is deleted ```typescript theme={null} { workbookId: string spaceId: string } ``` ### workbook:expired Triggered when a workbook expires ```typescript theme={null} { workbookId: string spaceId: string } ``` ## Space Events Space (Project) events are triggered for project lifecycle changes. ### space:created Triggered when a new project (space) is created ```typescript theme={null} { spaceId: string environmentId: string name: string appId?: string } ``` ### space:updated Triggered when a project is modified ```typescript theme={null} { spaceId: string environmentId: string changes: Array<{ field: string previousValue: any newValue: any }> } ``` ### space:deleted Triggered when a space is deleted ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string actorId: string } ``` ### space:expired Triggered when a space is expired ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string } ``` ### space:archived Triggered when a space is archived ```typescript theme={null} { accountId: string environmentId: string spaceId: string } ``` ### space:guestAdded Triggered when a guest is added ```typescript theme={null} { actorId: string accountId: string environmentId: string spaceId: string } ``` ### space:guestRemoved Triggered when a guest's access is revoked from a space ```typescript theme={null} { actorId: string accountId: string environmentId: string spaceId: string } ``` ### space:unarchived Triggered when a space is unarchived and restored to active status ```typescript theme={null} {} ``` ```typescript theme={null} { actorId: string accountId: string environmentId: string spaceId: string } ``` ## Environment Events Environment events are triggered for organization-level changes. ### environment:autobuild-created Triggered when an autobuild configuration is created for an environment ```typescript theme={null} {} ``` ```typescript theme={null} { environmentId: string accountId: string appId?: string actorId: string } ``` ### environment:created Triggered when a new environment is created ```typescript theme={null} { environmentId: string name: string slug: string isProd: boolean } ``` ### environment:updated Triggered when an environment is modified ```typescript theme={null} { environmentId: string changes: Array<{ field: string previousValue: any newValue: any }> } ``` ### environment:deleted Triggered when an environment is deleted ```typescript theme={null} { environmentId: string deletedAt: string deletedBy: string } ``` ## Action Events Action events are triggered when custom actions are created, updated, or deleted. 
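When you're unsure which of these topics fires for a given interaction, a catch-all subscription is a handy development aid. This is a hedged sketch: the `**` wildcard topic is supported by `@flatfile/listener`, and filtering on `event.topic` is just one way to narrow the output to action events.

```typescript theme={null}
import type { FlatfileListener } from "@flatfile/listener";

export default function (listener: FlatfileListener) {
  // "**" matches every event topic; useful for local debugging only
  listener.on("**", (event) => {
    if (event.topic.startsWith("action:")) {
      // Log action lifecycle events with their context for auditing
      console.log(`[action event] ${event.topic}`, event.context);
    }
  });
}
```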
### action:created Triggered when a new custom action is created ```typescript theme={null} { actionId: string name: string label: string description?: string type: string } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId?: string sheetId?: string actorId: string } ``` ### action:updated Triggered when a custom action is updated ```typescript theme={null} { actionId: string name: string label: string description?: string type: string changes: Array<{ field: string previousValue: any newValue: any }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId?: string sheetId?: string actorId: string } ``` ### action:deleted Triggered when a custom action is deleted ```typescript theme={null} { actionId: string name: string } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId?: string sheetId?: string actorId: string } ``` ## Document Events Document events are triggered when documents are created, updated, or deleted within workbooks. ### document:created Triggered when a document is created on a workbook ```typescript theme={null} { actorId: string spaceId: string accountId: string documentId: string environmentId: string } ``` ### document:updated Triggered when a document is updated on a workbook ```typescript theme={null} { actorId: string spaceId: string accountId: string documentId: string environmentId: string } ``` ### document:deleted Triggered when a document is deleted on a workbook ```typescript theme={null} { actorId: string spaceId: string accountId: string documentId: string environmentId: string } ``` ## Commit Events Commit events are triggered when data changes are made to records. ### commit:created Triggered when a cell in a record is created or updated ```typescript theme={null} { sheetId: string workbookId: string versionId: string sheetSlug: string } ``` ### commit:updated Triggered when commit metadata or details are updated ```typescript theme={null} { sheetId: string workbookId: string commitId: string versionId: string changes: Array<{ field: string previousValue: any newValue: any }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string workbookId: string sheetId: string actorId: string } ``` ### commit:completed Triggered when a commit has completed (only when trackChanges is enabled) ```typescript theme={null} { sheetId: string workbookId: string versionId: string commitId: string } ``` ### layer:created Triggered when a new layer is created within a commit ```typescript theme={null} { sheetId: string workbookId: string layerId: string commitId: string } ``` ## Snapshot Events Snapshot events are triggered when snapshots of sheet data are created. ### snapshot:created Triggered when a snapshot is created of a sheet ```typescript theme={null} { snapshotId: string sheetId: string workbookId: string spaceId: string } ``` ## Agent Events Agent events are triggered when agents are created, updated, or deleted. ### agent:created Triggered when a new agent is deployed ```typescript theme={null} { agentId: string environmentId: string } ``` ### agent:updated Triggered when an agent is updated ```typescript theme={null} { agentId: string environmentId: string } ``` ### agent:deleted Triggered when an agent is deleted ```typescript theme={null} { agentId: string environmentId: string } ``` ## Secret Events Secret events are triggered when secrets are managed. 
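Note that these lifecycle events announce changes to secrets; reading a secret's value typically happens inside an event handler via the listener's secrets helper. A minimal sketch, assuming a secret named `"webhook-token"` (a hypothetical name) has already been created for the space or environment:

```typescript theme={null}
import type { FlatfileListener } from "@flatfile/listener";

export default function (listener: FlatfileListener) {
  listener.on("secret:created", async (event) => {
    const { spaceId, environmentId } = event.context;
    console.log("Secret created", { spaceId, environmentId });

    // Fetch a secret's value by name; "webhook-token" is hypothetical
    const token = await event.secrets("webhook-token");
    console.log("Fetched secret, length:", token?.length);
  });
}
```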
### secret:created Triggered when a new secret is created ```typescript theme={null} { secretId: string spaceId: string environmentId: string } ``` ### secret:updated Triggered when a secret is updated ```typescript theme={null} { secretId: string spaceId: string environmentId: string } ``` ### secret:deleted Triggered when a secret is deleted ```typescript theme={null} { secretId: string spaceId: string environmentId: string } ``` ## Data Clip Events Data clip events are triggered when data clips are managed. ### data-clip:collaborator-updated Triggered when collaborators are added or removed from a data clip ```typescript theme={null} { dataClipId: string collaborators: string[] changes: Array<{ action: "added" | "removed" userId: string }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string dataClipId: string } ``` ### data-clip:created Triggered when a new data clip is created ```typescript theme={null} { dataClipId: string accountId: string status: string } ``` ### data-clip:updated Triggered when a data clip's details are updated ```typescript theme={null} { dataClipId: string accountId: string status: string } ``` ### data-clip:deleted Triggered when a data clip is deleted ```typescript theme={null} { dataClipId: string accountId: string status: string } ``` ### data-clip:resolutions-created Triggered when new conflict resolutions are created for a data clip ```typescript theme={null} { dataClipId: string resolutions: Array<{ conflictId: string resolution: any resolvedBy: string }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string dataClipId: string } ``` ### data-clip:resolutions-refreshed Triggered when conflict resolutions are refreshed or recalculated for a data clip ```typescript theme={null} {} ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string dataClipId: string } ``` ### data-clip:resolutions-updated Triggered when existing conflict resolutions are updated for a data clip ```typescript theme={null} { dataClipId: string resolutions: Array<{ conflictId: string resolution: any resolvedBy: string updatedAt: string }> changes: Array<{ conflictId: string previousResolution: any newResolution: any }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string dataClipId: string } ``` ## Canvas Events Canvas events are triggered when canvases are created, updated, or deleted. ### canvas:created Triggered when a new canvas is created ```typescript theme={null} { // Full canvas object with all properties } ``` ```typescript theme={null} { canvasId: string spaceId: string environmentId: string accountId: string } ``` ### canvas:updated Triggered when a canvas is updated ```typescript theme={null} { // Full canvas object with all properties } ``` ```typescript theme={null} { canvasId: string spaceId: string environmentId: string accountId: string } ``` ### canvas:deleted Triggered when a canvas is deleted ```typescript theme={null} { // Full canvas object with all properties } ``` ```typescript theme={null} { canvasId: string spaceId: string environmentId: string accountId: string } ``` ## Canvas Area Events Canvas area events are triggered when canvas areas are created, updated, or deleted. 
### canvas-area:created Triggered when a new canvas area is created ```typescript theme={null} { // Full canvas area object with all properties } ``` ```typescript theme={null} { canvasAreaId: string canvasId: string } ``` ### canvas-area:updated Triggered when a canvas area is updated ```typescript theme={null} { // Full canvas area object with all properties } ``` ```typescript theme={null} { canvasAreaId: string canvasId: string } ``` ### canvas-area:deleted Triggered when a canvas area is deleted ```typescript theme={null} { // Full canvas area object with all properties } ``` ```typescript theme={null} { canvasAreaId: string canvasId: string } ``` ## Thread Events Thread events are triggered when AI conversation threads are created, updated, or deleted. ### thread:created Triggered when a new AI conversation thread is created ```typescript theme={null} { threadId: string title?: string status: string createdAt: string } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string threadId: string actorId: string } ``` ### thread:updated Triggered when an AI conversation thread is updated ```typescript theme={null} { threadId: string title?: string status: string changes: Array<{ field: string previousValue: any newValue: any }> } ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string threadId: string actorId: string } ``` ### thread:deleted Triggered when an AI conversation thread is deleted ```typescript theme={null} {} ``` ```typescript theme={null} { accountId: string environmentId: string spaceId: string threadId: string actorId: string } ``` ## Cron Events **Deployed Agents Required** Cron events are only created for environments that have deployed agents subscribed to the specific cron topics. These events will not fire in localhost development environments unless you have deployed agents running in that environment. Cron events are system events triggered at scheduled intervals for automated processes. ### cron:5-minutes Triggered every 5 minutes for system maintenance and periodic tasks ```typescript theme={null} {} ``` ```typescript theme={null} { environmentId: string } ``` ### cron:hourly Triggered every hour for scheduled maintenance and cleanup tasks ```typescript theme={null} {} ``` ```typescript theme={null} { environmentId: string } ``` ### cron:daily Triggered once daily for daily maintenance and reporting tasks ```typescript theme={null} {} ``` ```typescript theme={null} { environmentId: string } ``` ### cron:weekly Triggered once weekly for weekly cleanup and archival tasks ```typescript theme={null} {} ``` ```typescript theme={null} { environmentId: string } ``` --- # Source: https://flatfile.com/docs/plugins/export-workbook.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Export Workbook Plugin > A server-side utility for Flatfile that allows users to export data from an entire Flatfile Workbook into a single, downloadable Microsoft Excel (.xlsx) file. The Export Workbook Plugin is a server-side utility for Flatfile that allows users to export data from an entire Flatfile Workbook into a single, downloadable Microsoft Excel (`.xlsx`) file. Its primary purpose is to provide a simple way to get data out of Flatfile in a widely-used format. The plugin is triggered by a user action, typically a "Download" button configured on the workbook.
It then iterates through all the sheets in the workbook (unless some are excluded), fetches the records, and compiles them into a corresponding sheet in the generated Excel file. Use cases include: * Allowing end-users to download a copy of their cleaned and validated data after an import * Creating backups or snapshots of data within a Flatfile Space * Exporting data for use in other systems that accept Excel files * Providing a final report of all imported data, including any validation messages as comments in the Excel cells ## Installation ```bash npm theme={null} npm install @flatfile/plugin-export-workbook ``` ## Configuration & Parameters The plugin is configured by passing an options object to the `exportWorkbookPlugin` function. | Parameter | Type | Default | Description | | ----------------------- | --------------------------------------------------- | ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `jobName` | `string` | `'workbook:downloadWorkbook'` | The name of the job operation that the plugin will listen for. This must match the `operation` name of an action configured on the workbook. | | `excludedSheets` | `string[]` | `undefined` | An array of sheet slugs to be excluded from the export. | | `excludeFields` | `string[]` | `undefined` | An array of field keys (column names) to be excluded from the export across all sheets. | | `excludeMessages` | `boolean` | `false` | If set to `true`, validation messages on records will not be included as comments in the cells of the exported Excel file. | | `recordFilter` | `'valid' \| 'error' \| 'all'` | `undefined` | Filters the records to be exported. Can be set to 'valid' to export only records without errors, or 'error' to export only records with errors. | | `includeRecordIds` | `boolean` | `false` | If set to `true`, a 'recordId' column containing the Flatfile internal record ID will be added as the first column in each sheet. | | `autoDownload` | `boolean` | `false` | If set to `true`, the exported file will be downloaded automatically in the user's browser upon job completion. If `false`, the user is directed to the "Files" page in the Flatfile Space to download it. | | `filename` | `string` | `undefined` | A custom filename for the exported file (without the `.xlsx` extension). If not provided, a filename is generated using the workbook name and a timestamp. | | `debug` | `boolean` | `false` | If set to `true`, the plugin will output verbose logging to the console, which is useful for development and troubleshooting. | | `sheetOptions` | `Record` | `undefined` | An object that maps a sheet slug to sheet-specific export options. | | `columnNameTransformer` | `(columnName: string, sheetSlug: string) => string` | `undefined` | A callback function to dynamically transform column names before they are written to the Excel file. | ### Sheet Options The `sheetOptions` parameter allows you to configure specific sheets with the following options: | Option | Type | Description | | ------------------- | ----------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | | `skipColumnHeaders` | `boolean` | If `true`, the header row with column names is omitted for that sheet. 
| | `origin` | `number \| {row: number, column: number}` | Sets the starting cell for the data in the sheet. A number sets the starting row, while an object can set both the starting row and column. | ## Usage Examples ### Basic Usage ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook"; export default function (listener) { // Use the plugin with default settings listener.use(exportWorkbookPlugin()); } /* // In your workbook.config.json, you need an action to trigger the plugin: "actions": [ { "operation": "downloadWorkbook", "mode": "foreground", "label": "Download Excel Workbook", "description": "Downloads all data in an Excel file.", "primary": true } ] */ ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook"; export default function (listener: FlatfileListener) { // Use the plugin with default settings listener.use(exportWorkbookPlugin()); } /* // In your workbook.config.json, you need an action to trigger the plugin: "actions": [ { "operation": "downloadWorkbook", "mode": "foreground", "label": "Download Excel Workbook", "description": "Downloads all data in an Excel file.", "primary": true } ] */ ``` ### Configuration Example ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook"; export default function (listener) { listener.use( exportWorkbookPlugin({ // Only export records that have passed validation recordFilter: 'valid', // Exclude the 'internal_notes' field from all sheets excludeFields: ['internal_notes'], // Exclude the 'raw_data' sheet entirely excludedSheets: ['raw_data'], // Automatically start the download for the user autoDownload: true, // Enable verbose logging for troubleshooting debug: true, // Add custom options for the 'contacts' sheet sheetOptions: { contacts: { // Omit the header row for the 'contacts' sheet skipColumnHeaders: true, }, }, }) ); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook"; export default function (listener: FlatfileListener) { listener.use( exportWorkbookPlugin({ // Only export records that have passed validation recordFilter: 'valid', // Exclude the 'internal_notes' field from all sheets excludeFields: ['internal_notes'], // Exclude the 'raw_data' sheet entirely excludedSheets: ['raw_data'], // Automatically start the download for the user autoDownload: true, // Enable verbose logging for troubleshooting debug: true, // Add custom options for the 'contacts' sheet sheetOptions: { contacts: { // Omit the header row for the 'contacts' sheet skipColumnHeaders: true, }, }, }) ); } ``` ### Advanced Usage with Column Transformer ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook"; export default function (listener) { listener.use( exportWorkbookPlugin({ // Use a custom job name jobName: 'export:customExcel', // Transform column names to be more user-friendly columnNameTransformer: (columnName, sheetSlug) => { // Example: transform 'firstName' to 'First Name' const friendlyName = columnName.replace(/([A-Z])/g, ' $1').replace(/^./, (str) => str.toUpperCase()); // Add a prefix for a 
specific sheet if (sheetSlug === 'users') { return `User - ${friendlyName}`; } return friendlyName; }, }) ); } /* // In your workbook.config.json, the action must match the custom jobName: "actions": [ { "operation": "export:customExcel", "mode": "foreground", "label": "Download Custom Excel Report", "primary": true } ] */ ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { exportWorkbookPlugin } from "@flatfile/plugin-export-workbook"; export default function (listener: FlatfileListener) { listener.use( exportWorkbookPlugin({ // Use a custom job name jobName: 'export:customExcel', // Transform column names to be more user-friendly columnNameTransformer: (columnName: string, sheetSlug: string) => { // Example: transform 'firstName' to 'First Name' const friendlyName = columnName.replace(/([A-Z])/g, ' $1').replace(/^./, (str) => str.toUpperCase()); // Add a prefix for a specific sheet if (sheetSlug === 'users') { return `User - ${friendlyName}`; } return friendlyName; }, }) ); } /* // In your workbook.config.json, the action must match the custom jobName: "actions": [ { "operation": "export:customExcel", "mode": "foreground", "label": "Download Custom Excel Report", "primary": true } ] */ ``` ## Troubleshooting ### Enable Debug Mode The most important troubleshooting tool is the `debug: true` option. When enabled, the plugin prints detailed logs to the console, including which sheets are being processed, which are skipped, and the status of file writing and uploading. ### Check Action Operation If the plugin does not trigger when the action button is clicked, ensure the `operation` value in your `workbook.config.json` action exactly matches the `jobName` used to configure the plugin. ### No Data Exported If the exported file is empty or missing sheets, check if a `recordFilter` is unintentionally filtering out all records or if `excludedSheets` is misconfigured. The `debug` logs will show if sheets are being skipped. If all sheets are empty, the plugin will throw an error: `No data to write to Excel file.` ## Notes ### Server-Side Execution This plugin must be deployed in a server-side listener environment, not in the browser. ### Action Configuration For the plugin to be triggered, a corresponding `action` must be configured on the `Workbook` in your `workbook.config.json`. The `operation` of this action must match the `jobName` option of the plugin (which defaults to `workbook:downloadWorkbook`). ### File System Access The plugin temporarily writes the `.xlsx` file to the `/tmp` directory of the execution environment, which is standard for serverless functions. ### Sheet Name Sanitization Excel sheet names have limitations (e.g., max 31 characters, no invalid characters like `\ / ? * [ ]`). The plugin automatically sanitizes sheet names from your workbook to comply with these rules. If a name becomes empty after sanitization, it will be replaced with a default like `Sheet1`, `Sheet2`, etc. ### Default Behavior * By default, all records from all sheets are exported * Validation messages are included as comments in the Excel cells * The exported file is made available in the "Files" page rather than auto-downloading * Column headers are included in the export * A filename is auto-generated using the workbook name and timestamp if not specified ### Error Handling The plugin wraps its entire logic in a `try...catch` block. 
If any critical step fails (fetching records, writing the file to disk, uploading the file to Flatfile), it logs the error and throws a new `Error`. This causes the associated job in Flatfile to fail and display the error message to the user, providing feedback on what went wrong. --- # Source: https://flatfile.com/docs/reference/ffql.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Flatfile Query Language > Learn how to filter data in Sheets with FFQL. FFQL (Flatfile Query Language) is Flatfile's custom query language used for filtering data in [Sheets](/core-concepts/sheets). It's logical, composable, and human-readable. For example, to find records where the first name is "Bender": ``` first_name eq Bender ``` ## Syntax The basic syntax of FFQL is: ``` [field name] [operator] [value] ``` ### Field Name `field name` is optional and excluding a field name will search across all fields. For example: `eq "Planet Express"` will search across all fields for that value. `field name` can be the field key or the field label. Labels or values with spaces should be wrapped in quotes. For example: `name eq "Bender Rodriguez"`, or `"First Name" eq Bender`. ### Operators FFQL operators are: * `eq` - Equals (exact match) * `ne` - Not Equal * `lt` - Less Than * `gt` - Greater Than * `lte` - Less Than or Equal To * `gte` - Greater Than or Equal To * `like` - (Case Sensitive) Like * `ilike` - (Case Insensitive) Like * `contains` - Contains (for columns of type `string-list` and `enum-list`) Both `like` and `ilike` support the following wildcards: * `%` - Matches any number of characters * `_` - Matches a single character So, for instance, `like "Bender%"` would match "Bender Rodriguez" and "Bender Bending Rodriguez". ### Logical Operators FFQL supports two logical operators: * `and` - Returns rows meeting both conditions * `or` - Returns rows meeting either condition (inclusive) ### The `is` Operator You can query for message statuses by using the `is` operator. For example: `is error` returns all the rows in an `error` state. `is valid` returns all the rows in a `valid` state. `first_name is error` returns the rows where First Name is in an `error` state. ### Escaping Quotes and Backslashes When you need to include quotes or backslashes within quoted values, you can escape them using a backslash (`\`). Here are examples of how escaping works: | Query | Value | | ------------------ | --------------- | | `"hello \" world"` | `hello " world` | | `'hello \' world'` | `hello ' world` | | `"hello world \\"` | `hello world \` | | `"hello \ world"` | `hello \ world` | | `"hello \\ world"` | `hello \ world` | For example, to search for a field containing a quote: ``` first_name eq "John \"Johnny\" Doe" ``` This would match a first name value of: `John "Johnny" Doe` *** ## Constructing Queries Complex queries are possible using a combination of operators: ``` ( email like "%@gmail.com" and ( "Subscription Status" eq "On-Hold" or "Subscription Status" eq "Pending" ) and login_attempts gte 5 ) or is warning ``` This query would return all the rows that: 1. Have a Gmail email address, 2. Have a Subscription Status of "On-Hold" or "Pending", 3. And have 5 or more login attempts. It will also include any rows that have a "Warning" message status. *** ## Usage ### Via search bar From the search bar in a Workbook, prepend **filter:** to your FFQL query.
**Type in the search bar:** ``` filter: first_name eq Bender and last_name eq Rodriguez ``` ### Via API FFQL queries can be passed to any [REST API](https://reference.flatfile.com/overview/welcome) endpoint that supports the `q` parameter. Here's an example **cURL** request using the `sheets/{sheetId}/records` endpoint: **Shell / cURL** ```bash theme={null} curl --location 'https://platform.flatfile.com/api/sheets/us_sh_12345678/records?q="Subscription Status" eq "On-Hold" or "Subscription Status" eq "Pending"' \ --header 'Accept: */*' \ --header 'Authorization: Bearer <your-secret-key>' ``` Make sure to [encode](https://en.wikipedia.org/wiki/URL_encoding) percent characters if you use them. --- # Source: https://flatfile.com/docs/core-concepts/fields.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Fields > Blueprint definitions that define the structure and validation rules for your data in Flatfile ## What are Fields? A Field in Flatfile is like a column in a spreadsheet — it defines the data type, format, and any constraints for a single piece of data in your import. Fields are defined inside a [Blueprint](/core-concepts/blueprints), which acts as the master schema for your import setup. The hierarchy works like this: * A Blueprint defines the structure of one or more Workbooks. * Each Workbook contains Sheets (like tabs in a spreadsheet). * Each Sheet contains Fields (columns), which describe the individual data points to be collected. When you configure a Field, you’re telling Flatfile what kind of data to expect in that column—whether it’s text, a number, a date, or another supported type. You can also apply [constraints](#field-constraints) like "required," "unique," or "computed" to control how data is handled during import. Fields play a key role in ensuring data structure and quality from the moment users start importing. ## Basic Blueprint Structure * A [Blueprint](/core-concepts/blueprints) defines the data structure for any number of [Spaces](/core-concepts/spaces) * A [Space](/core-concepts/spaces) may contain many [Workbooks](/core-concepts/workbooks) and many [Documents](/core-concepts/documents) * A [Document](/core-concepts/documents) contains static documentation and may contain many [Document-level Actions](/guides/using-actions#document-actions) * A [Workbook](/core-concepts/workbooks) may contain many [Sheets](/core-concepts/sheets) and many [Workbook-level Actions](/guides/using-actions#workbook-actions) * A [Sheet](/core-concepts/sheets) may contain many [Fields](/core-concepts/fields) and many [Sheet-level Actions](/guides/using-actions#sheet-actions) * A [Field](/core-concepts/fields) defines a single column of data, and may contain many [Field-level Actions](/guides/using-actions#field-actions) ## Basic Field Configuration The following examples demonstrate the configuration of isolated fields, which are intended to be used in the context of a [Blueprint](/core-concepts/blueprints) configuration under a [Sheet](/core-concepts/sheets) definition. ### Simple Field Definition This example configures a single `string` Field that is `required` and a `number` Field with a `decimalPlaces` config.
```javascript theme={null} // Basic string field const nameField = { key: "firstName", type: "string", label: "First Name", description: "The customer's first name", constraints: [{ type: "required" }], }; // Number field with config const ageField = { key: "age", type: "number", label: "Age", description: "Customer age in years", config: { decimalPlaces: 2, }, }; ``` ### Field with Multiple Options This example configures a single `enum` field with a `status` key, `Customer Status` label, and `Current status of the customer` description. It also defines four options for the field: `active`, `inactive`, `pending`, and `suspended`. ```javascript theme={null} const statusField = { key: "status", type: "enum", label: "Customer Status", description: "Current status of the customer", config: { options: [ { value: "active", label: "Active" }, { value: "inactive", label: "Inactive" }, { value: "pending", label: "Pending" }, { value: "suspended", label: "Suspended" }, ], }, }; ``` ## Field Types Flatfile supports 9 field types for defining data structure: **Basic Types:** * `string` - Basic text data * `number` - Integer or floating point numbers * `boolean` - True/false values * `date` - GMT date values in YYYY-MM-DD format **Single Selection:** * `enum` - Single selection from predefined options **Multiple Value Types:** * `string-list` - Array of string values * `enum-list` - Multiple selections from predefined options **Reference Types:** * `reference` - Single reference to another sheet * `reference-list` - Multiple references to another sheet ### `string` A property that should be stored and read as a basic string. ```javascript theme={null} { "key": "productCode", "label": "Product Code", "type": "string", "appearance": { "size": "m" } } ``` **Note:** String fields don't have type-specific config options, but support appearance settings. ### `string-list` Stores an array of string values. Useful for fields that contain multiple text entries. ```javascript theme={null} { "key": "tags", "label": "Product Tags", "type": "string-list" } ``` ### `number` A property that should be stored and read as either an integer or floating point number. **config.decimalPlaces** The number of decimal places to preserve accuracy to. ```javascript theme={null} { "key": "price", "label": "Retail Price", "type": "number", "config": { "decimalPlaces": 2 } } ``` ### `enum` Defines an enumerated list of options for the user to select from (single selection). The maximum number of options for this list is `100`. For multiple selections, use [`enum-list`](#enum-list). **config.allowCustom** Allow users to create new options for this field. When enabled, users will be able to import their custom value to the review table, but it will not be considered a valid enum option. **config.sortBy** The field to sort the options by (`label`, `value`, `ordinal`). **config.options** An array of valid options the user can select from: ```json theme={null} { "key": "status", "label": "Status", "type": "enum", "config": { "options": [ { "value": "active", "label": "Active" }, { "value": "inactive", "label": "Disabled" } ] } } ``` ### `enum-list` Allows multiple selections from a predefined list of options. Values are stored as an array. Use this instead of `enum` when users need to select multiple options. **config.allowCustom** Allow users to create new options for this field. When enabled, users will be able to import their custom value to the review table, but it will not be considered a valid enum option.
**config.sortBy** Sort options by: `label`, `value`, or `ordinal`. **config.options** Array of option objects: ```json theme={null} { "key": "categories", "label": "Product Categories", "type": "enum-list", "config": { "allowCustom": false, "sortBy": "label", "options": [ { "value": "electronics", "label": "Electronics" }, { "value": "clothing", "label": "Clothing" }, { "value": "books", "label": "Books" } ] } } ``` ### `boolean` A `true` or `false` value type. Usually displayed as a checkbox. **config.allowIndeterminate** Allow a state that is neither true nor false to be stored as `null`. ```json theme={null} { "key": "is_active", "label": "Active", "type": "boolean", "config": { "allowIndeterminate": true } } ``` ### `date` Store a field as a GMT date. Data hooks must convert this value into a `YYYY-MM-DD` format in order for it to be considered a valid value. ```json theme={null} { "key": "start_date", "label": "Start Date", "type": "date" } ``` ### `reference` Defines a singular one-to-one reference to a field in another sheet. **config.ref** The sheet slug of the referenced field. Must be in the same workbook. **config.key** The key of the property to use as the reference key. **config.filter** Optional filter to narrow the set of records in the reference sheet used as valid values. When provided, only records where the `refField` value matches the current record's `recordField` value will be available as options. ```json theme={null} { "key": "author", "type": "reference", "label": "Authors", "config": { "ref": "authors", "key": "name" } } ``` #### Reference Field Filtering Reference fields can be filtered to show only relevant options based on the value of another field in the same record. This enables cascading dropdowns and dependent field relationships. **Dynamic Enums** This feature, along with the `ENUM_REFERENCE` [sheet treatment](/core-concepts/sheets#reference-sheets), may be collectively referred to as **Dynamic Enums**. By combining these two features, you can create a drop-down list for any cell in your sheet that's dynamically controlled by the value of another field in the same record – and to the end-user, it will just work like a dynamically-configured `enum` field. **Filter Configuration** The `filter` property accepts a `ReferenceFilter` object with two required properties: | Property | Type | Description | | ------------- | -------- | ------------------------------------------------------ | | `refField` | `string` | The field key in the referenced sheet to filter with | | `recordField` | `string` | The field key in the current record used for filtering | **How Filtering Works** When a filter is applied: 1. The system looks at the value in the `recordField` of the current record 2. It then filters the referenced sheet to only show records where `refField` matches that value 3. Only the filtered records become available as options in the reference field **Example: Country and State Cascading Dropdown** Consider a scenario where you want state options to be filtered based on the selected country. This example also demonstrates the use of the `ENUM_REFERENCE` [sheet treatment](/core-concepts/sheets#reference-sheets) to hide the reference sheet from the UI. You may wish to disable this treatment for testing purposes.
```json theme={null} { "sheets": [ { "name": "Reference Data", "slug": "ref-data", "treatments": ["ENUM_REFERENCE"], "fields": [ { "key": "country-name", "label": "Country", "type": "string" }, { "key": "state-name", "label": "State/Province", "type": "string" } ] }, { "name": "Addresses", "slug": "addresses", "fields": [ { "key": "country", "label": "Country", "type": "reference", "config": { "ref": "ref-data", "key": "country-name" } }, { "key": "state", "label": "State/Province", "type": "reference", "config": { "ref": "ref-data", "key": "state-name", "filter": { "refField": "country-name", "recordField": "country" } } } ] } ] } ``` **Reference Data Sheet**: | country-name | state-name | | ------------ | ---------- | | USA | California | | USA | New York | | USA | Texas | | Canada | Ontario | | Canada | Quebec | **Behavior** * When **USA** is selected in the country field, the state dropdown will only show **California**, **New York**, and **Texas** * When **Canada** is selected in the country field, the state dropdown will only show **Ontario** and **Quebec** The following diagram illustrates how reference field filtering works: ```mermaid theme={null} graph TD A[User selects **USA**] --> B[System filters references] B --> C[Show only States where **country-name** = **USA**] C --> D[**California**, **New York**, and **Texas** are available] E[User selects **Canada**] --> F[System filters references] F --> G[Show only States where **country-name** = **Canada**] G --> H[**Ontario** and **Quebec** are available] style A stroke:#4CD95E, stroke-width:2px style E stroke:#4CD95E, stroke-width:2px style D stroke:#D9804E, stroke-width:2px style H stroke:#D9804E, stroke-width:2px ``` And this is how it looks in the UI: Reference Field Filtering Reference Field Filtering Reference Field Filtering ### `reference-list` Defines multiple references to records in another sheet within the same workbook. **config.ref** The sheet slug of the referenced field. Must be in the same workbook. **config.key** The key of the property to use as the reference key. **config.filter** Optional filter to narrow the set of records in the reference sheet used as valid values. ```json theme={null} { "key": "authors", "type": "reference-list", "label": "Book Authors", "config": { "ref": "authors", "key": "name" } } ``` #### Reference List Filtering The `filter` property on `reference-list` fields works identically to `reference` [field filters](#reference-field-filtering), but allows for multiple selections. ```json theme={null} { "key": "categories", "label": "Product Categories", "type": "reference-list", "config": { "ref": "category-data", "key": "category-name", "filter": { "refField": "department", "recordField": "product-department" } } } ``` **Multi-Level Cascading Example** You can create complex, multi-level cascading dropdowns by chaining filtered reference fields together. 
This example shows a product taxonomy where department selection filters available categories, and category selection filters available subcategories: ```json theme={null} { "sheets": [ { "name": "Product Taxonomy", "slug": "taxonomy", "fields": [ { "key": "department", "label": "Department", "type": "string" }, { "key": "category", "label": "Category", "type": "string" }, { "key": "subcategory", "label": "Subcategory", "type": "string" } ] }, { "name": "Products", "slug": "products", "fields": [ { "key": "department", "label": "Department", "type": "reference", "config": { "ref": "taxonomy", "key": "department" } }, { "key": "category", "label": "Category", "type": "reference", "config": { "ref": "taxonomy", "key": "category", "filter": { "refField": "department", "recordField": "department" } } }, { "key": "subcategory", "label": "Subcategory", "type": "reference-list", "config": { "ref": "taxonomy", "key": "subcategory", "filter": { "refField": "category", "recordField": "category" } } } ] } ] } ``` **Product Taxonomy Sheet**: | department | category | subcategory | | ----------- | ----------- | ------------- | | Electronics | Computers | Laptops | | Electronics | Computers | Desktops | | Electronics | Computers | Tablets | | Electronics | Audio | Headphones | | Electronics | Audio | Speakers | | Clothing | Men's | Shirts | | Clothing | Men's | Pants | | Clothing | Women's | Dresses | | Clothing | Women's | Shoes | | Books | Fiction | Novels | | Books | Fiction | Short Stories | | Books | Non-Fiction | Biography | | Books | Non-Fiction | History | **Behavior** This creates a three-level cascade: Department → Category → Subcategory, where users can select multiple subcategories from the filtered options. * When **Electronics** is selected in the department field, the category dropdown will only show **Computers** and **Audio** * When **Computers** is selected, the subcategory dropdown will show **Laptops**, **Desktops**, and **Tablets**. Users may then select *multiple* subcategories from the filtered options * When **Clothing** is selected, the category dropdown will only show **Men's** and **Women's** * When **Men's** is selected, the subcategory dropdown will show **Shirts** and **Pants**. Users may then select *multiple* subcategories from the filtered options ## Field Constraints Field constraints are system-level validation rules that enforce data integrity and business logic on data in individual fields. ### required Ensures that a field must have a non-null value. Empty cells are considered null values and will fail this constraint. ```json theme={null} { "key": "email", "type": "string", "label": "Email Address", "constraints": [{ "type": "required" }] } ``` By default, if a required constraint fails, an error will be added to the field with the message "[Field Label] is required". You can override the message and/or the error level of the message by supplying a `config` object with the constraint. For example: ```json theme={null} { "key": "email", "type": "string", "label": "Email Address", "constraints": [{ "type": "required", "config": { "message": "This record is missing an email address", "level": "warn" } }] } ``` ### unique Ensures that field values appear only once across all records in the sheet. Note that null values can appear multiple times as they don't count toward uniqueness validation.
```json theme={null} { "key": "employeeId", "type": "string", "label": "Employee ID", "constraints": [{ "type": "unique" }] } ``` By default, if a required constraint fails, an error will be added to the field with the message "Value is not unique". You can override the message and/or the error level of the message by supplying a `config` object with the constraint -- see the example with the required constraint above. ### computed Marks a field as computed, hiding it from the mapping process. Users will not be able to map imported data to fields with this constraint. ```json theme={null} { "key": "calculatedField", "type": "string", "label": "Calculated Value", "constraints": [{ "type": "computed" }] } ``` Sheet-level constraints that apply to multiple fields are covered in [Sheet Constraints](/core-concepts/sheets#sheet-constraints). ### Field-level access **`readonly`** On a field level you can restrict a user's interaction with the data to `readonly`. This feature is useful if you're inviting others to view uploaded data, but do not want to allow them to edit that field. ```json theme={null} { "fields": [ { "key": "salary", "type": "number", "readonly": true } ] } ``` ## Field Options Configurable properties for a Field that define its structure and behavior: | Option | Type | Required | Default | Description | | --------------- | ------- | -------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | | **key** | string | ✓ | | The system name of this field. Primarily informs JSON and egress structures | | **type** | string | ✓ | | One of `string`, `number`, `boolean`, `date`, `enum`, `reference`, `string-list`, `enum-list`, `reference-list`. Defines the handling of this property | | **label** | string | | key | A user-facing descriptive label designed to be displayed in the UI such as a table header | | **description** | string | | | A long form description of the property intended to be displayed to an end user (supports Markdown) | | **constraints** | array | | \[] | An array of system level validation rules (max 10). [Learn more about field constraints](#field-constraints) | | **config** | object | | {} | Configuration relevant to the type of column. See type documentation below | | **readonly** | boolean | | false | Prevents user input on this field | | **appearance** | object | | {} | UI appearance settings. Currently supports `size` property with values: `"xs"`, `"s"`, `"m"`, `"l"`, `"xl"` | | **actions** | array | | \[] | User actions available for this field. See [Field Actions](/guides/using-actions#field-actions) for detailed configuration | \| **alternativeNames** | array | | \[] | Alternative field names for mapping assistance | \| **metadata** | object | | {} | Arbitrary object of values to pass through to hooks and egress | --- # Source: https://flatfile.com/docs/getting-started/quickstart/first-project.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Creating Your First Project > Running through the import flow with your Flatfile App If you haven't created an app yet, start with the [Building Your First App with AutoBuild guide](/getting-started/quickstart/autobuild). ## Understanding Projects With your app created, it's time to create your first data import project! 
A project is an instance of your app - think of your app as a template for onboarding customers, and you'll create a project for each customer you onboard. Each project gives you an isolated workspace with its own database, permissions, and workflow. ## Creating a New Project To create a project, click the create button in the upper right corner of the app page in your dashboard. The text in this button will depend on how you've named your app's entity. The AutoBuild agent will create a new space based on your app's template. You'll be taken to your new space, ready to receive data. If you'd like, you can manually import data using Flatfile's intuitive spreadsheet interface. ## Uploading and Mapping Data Most users will start by uploading a data file. You can drag and drop a file from your computer directly onto the sheet interface. The file will automatically be extracted and you'll be taken to the mapping interface. Here you can map columns from your uploaded file to fields in your sheet. With your mappings in place, click "Continue." The data from your file will be mapped into your sheet. ## Learn More Now that you've created your first project, explore these helpful resources: * [Core Concepts](/core-concepts/overview) - Understand Flatfile's fundamental building blocks * [Handling Data](/guides/using-record-hooks) - Advanced data transformation and validation techniques * [Using Actions](/guides/using-actions) - Create custom workflows and automations * [API Reference](/api-reference) - Complete technical documentation ## Working with Your Data From here, any transformations and validations you've defined will run automatically. You can resolve any data issues in your sheet using AI transforms, find and replace, custom actions, or by simply editing the data manually. You can collaborate with others on data issues using comments and data clips. ## Exporting Your Data When you're ready to move your perfected data out of Flatfile, you've got options! You can download your data directly from the sheets interface. You can retrieve the sheet data via API. Or, you can create a custom action to ship the data directly into your system. --- # Source: https://flatfile.com/docs/reference/for-llms.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # For LLMs > AI-optimized documentation formats and tools for better LLM integration The Flatfile documentation is optimized for use with Large Language Models (LLMs) and AI tools. We provide several features to help you get faster, more accurate responses when using our documentation as context. ## Contextual Menu We provide a contextual menu on every documentation page with quick access to AI-optimized content and direct integrations with popular AI tools: *Contextual menu showing AI integration options (screenshot)* * **Copy page** - Copies the current page as Markdown for pasting as context into AI tools * **View as Markdown** - Opens the current page in a clean Markdown format * **Open in ChatGPT** - Creates a ChatGPT conversation with the current page as context * **Open in Claude** - Creates a Claude conversation with the current page as context Access this menu by right-clicking on any page or using the contextual menu button (when available).
## LLM-Optimized File Formats ### /llms.txt The `/llms.txt` file follows the [industry standard](https://llmstxt.org) that helps general-purpose LLMs index documentation more efficiently, similar to how a sitemap helps search engines. **What it contains:** * Complete list of all available pages in our documentation * Page titles and URLs for easy navigation * Structured format that AI tools can quickly parse **How to use:** * Access at: [https://flatfile.com/docs/llms.txt](https://flatfile.com/docs/llms.txt) * Reference this file when asking AI tools to understand our documentation structure * Use it to help LLMs find relevant content for specific topics ### /llms-full.txt The `/llms-full.txt` file combines our entire documentation site into a single file, optimized for use as comprehensive context in AI tools. **What it contains:** * Full text content of all documentation pages * Structured with clear page boundaries and headers * Optimized formatting for LLM consumption **How to use:** * Access at: [https://flatfile.com/docs/llms-full.txt](https://flatfile.com/docs/llms-full.txt) * Download and use as context for comprehensive questions about Flatfile * Ideal for complex queries that might span multiple documentation areas **Note:** This file is large and may consume significant tokens. You might want to use `/llms.txt` first to identify specific pages, then use individual page URLs or the contextual menu for targeted questions. ## Markdown Versions of Pages All documentation pages are available in Markdown format, which provides structured text that AI tools can process more efficiently than HTML. ### .md Extension Add `.md` to any page's URL to view the Markdown version: ``` https://flatfile.com/docs/getting-started/welcome.md https://flatfile.com/docs/embedding/overview.md https://flatfile.com/docs/core-concepts/workbooks.md ``` ## Best Practices for AI Tool Usage ### For Specific Questions 1. **Start with targeted pages** - Use the contextual menu or `.md` extension for specific topics 2. **Reference multiple related pages** - Copy 2-3 relevant pages for comprehensive context 3. **Include the page URL** - Help the AI tool understand the source and context ### For Comprehensive Questions 1. **Use `/llms.txt` first** - Help the AI understand our documentation structure 2. **Follow up with specific pages** - Use targeted content based on the structure overview 3. **Consider `/llms-full.txt`** - For complex questions spanning multiple areas (token usage permitting) ### Example Prompts **For specific integration help:** ``` I'm trying to embed Flatfile in my React app. Here's the relevant documentation: [paste content from /embedding/react.md] How do I configure it for my use case where... ``` **For comprehensive understanding:** ``` I need to understand Flatfile's architecture. Here's their documentation structure: [paste content from /llms.txt] Can you explain how Environments, Apps, and Spaces work together? ``` ## Technical Implementation These AI-optimization features are built into our documentation platform and automatically maintained: * **Auto-generated** - Files are updated automatically when documentation changes * **Optimized formatting** - Content is structured for optimal LLM processing * **Consistent structure** - All pages follow the same format for predictable parsing ## Feedback These AI optimization features are continuously improved based on usage patterns and feedback. 
If you have suggestions for better LLM integration or notice issues with the generated formats, please [join our Slack community](https://flatfile.com/join-slack/) and share your thoughts in the #docs channel. --- # Source: https://flatfile.com/docs/plugins/foreign-db-extractor.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Foreign Database Extractor for Microsoft SQL Server > Automatically extract data from Microsoft SQL Server backup files (.bak) into Flatfile Workbooks for easy data review and processing. This plugin automates the process of extracting data from a Microsoft SQL Server (MSSQL) database backup file (`.bak`). When a user uploads a `.bak` file to a Flatfile Space, this plugin triggers a multi-step process. First, it uploads the backup file to a secure storage location. Then, it restores the backup to a Flatfile-hosted MSSQL database instance. After the database is restored and available, the plugin inspects its schema to identify all tables. Finally, it creates a read-only Flatfile Workbook where each database table is represented as a separate Sheet. This allows users to view and interact with data from large MSSQL databases directly within the Flatfile UI without needing to manually export data to CSV or Excel first. It is ideal for scenarios where the source of truth is an MSSQL database and users need to bring that data into the Flatfile ecosystem for review or processing. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-foreign-db-extractor ``` ## Configuration & Parameters This plugin does not have any user-configurable options that are passed during initialization. The plugin operates with a default, built-in configuration that automatically listens for `file:created` events. If the uploaded file has a `.bak` extension and is not for export, it initiates a job named 'extract-foreign-mssql-db'. This job handles the entire workflow: creating a workbook, uploading the file to S3, restoring the database, polling for status, generating sheets from the database tables, and linking everything together. The process is entirely self-contained. 
## Usage Examples ### Basic Usage ```javascript JavaScript theme={null} import { foreignDBExtractor } from '@flatfile/plugin-foreign-db-extractor'; export default function (listener) { // Simply use the plugin listener.use(foreignDBExtractor()); // Add other listeners as needed } ``` ```typescript TypeScript theme={null} import type { FlatfileListener } from '@flatfile/listener'; import { foreignDBExtractor } from '@flatfile/plugin-foreign-db-extractor'; export default function (listener: FlatfileListener) { // Simply use the plugin listener.use(foreignDBExtractor()); // Add other listeners as needed } ``` ### Complete Setup Example ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { foreignDBExtractor } from '@flatfile/plugin-foreign-db-extractor'; const listener = new FlatfileListener(); // Register the plugin with the listener instance listener.use(foreignDBExtractor()); // The listener is now configured to handle .bak file extractions export default listener; ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { foreignDBExtractor } from '@flatfile/plugin-foreign-db-extractor'; const listener = new FlatfileListener(); // Register the plugin with the listener instance listener.use(foreignDBExtractor()); // The listener is now configured to handle .bak file extractions export default listener; ``` ## API Reference ### foreignDBExtractor() A factory function that returns a pre-configured plugin for a Flatfile Listener. The returned plugin contains all the necessary logic to listen for `.bak` file uploads and manage the extraction process. This includes creating a corresponding job, restoring the database on a Flatfile-managed service, polling for its availability, generating Sheets from the database tables, and updating the Flatfile Workbook and File entities with the results. **Parameters:** * None **Return Value:** * `(listener: FlatfileListener) => void` - A function that accepts a FlatfileListener instance and registers the necessary event handlers **Error Handling:** The plugin has built-in error handling. If any step in the extraction job fails (e.g., database restore fails, polling times out), the `try...catch` block within the `job:ready` listener will catch the exception. It then updates the associated File status to 'failed' and fails the Job with a descriptive error message, making the failure visible in the Flatfile UI. ```javascript JavaScript theme={null} // Error handling is automatic - failures will be visible in the Flatfile UI try { listener.use(foreignDBExtractor()); } catch (error) { console.error('Plugin registration failed:', error); } ``` ```typescript TypeScript theme={null} // Error handling is automatic - failures will be visible in the Flatfile UI try { listener.use(foreignDBExtractor()); } catch (error) { console.error('Plugin registration failed:', error); } ``` ## Troubleshooting To troubleshoot issues, check the status and outcome message of the 'extract-foreign-mssql-db' job in the Flatfile dashboard. The error message from the failed step (e.g., "Database restore failed", "Failed to retrieve user credentials", or an error from the API) will be available in the job's `info` field. Common causes of failure could be a corrupted `.bak` file or the database restore process exceeding the built-in timeout. 
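If you prefer to inspect failures programmatically rather than through the dashboard, a minimal sketch using `@flatfile/api` (assuming you have the failing job's ID, e.g., from a `job:failed` event) might look like this:

```typescript theme={null}
import api from "@flatfile/api";

async function inspectExtractionJob(jobId: string) {
  // Fetch the job and surface the fields described above
  const { data: job } = await api.jobs.get(jobId);
  console.log("Job status:", job.status);
  console.log("Failure info:", job.info); // e.g., "Database restore failed"
}
```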
## Notes ### Requirements * The `@flatfile/plugin-foreign-db-extractor` and `@flatfile/listener` packages must be installed * This feature must be enabled for your Flatfile account. Please contact Flatfile support to get access * The environment where the listener runs must have `AGENT_INTERNAL_URL` and `FLATFILE_BEARER_TOKEN` environment variables set, which are typically provided by the Flatfile platform ### Limitations * The plugin only activates for files with a `.bak` extension. Other file types are ignored * The created Workbook and Sheets are read-only representations of the restored database * The database restore process has a polling timeout of 3 minutes. If the restore takes longer, the job will fail * The process to retrieve database user credentials has a polling timeout of 50 seconds (10 attempts with a 5-second delay) ### Error Handling Patterns The primary error handling is centralized in the `job:ready` listener. It uses a `try...catch` block to wrap the entire extraction workflow. Upon catching an error, it uses the Flatfile API to update the status of the associated file and job to reflect the failure: * `api.files.update(fileId, { status: 'failed' })` * `api.jobs.fail(jobId, { info: e.message })` This ensures that failures are clearly communicated to the user through the Flatfile UI. Internal helper functions throw standard `Error` objects with specific messages that are propagated up to this central handler. --- # Source: https://flatfile.com/docs/plugins/geocode.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Geocode Address Data > Automatically enrich location data using the Google Maps Geocoding API to convert addresses into geographic coordinates and vice versa. The Geocode plugin for Flatfile automatically enriches location data using the Google Maps Geocoding API. Its primary purpose is to convert addresses into geographic coordinates (latitude and longitude) and vice versa. Use cases include: * **Forward Geocoding**: Automatically populating latitude and longitude fields from a given address field * **Reverse Geocoding**: Automatically populating a formatted address from given latitude and longitude fields * **Data Enrichment**: Extracting and adding supplementary location data to records, such as country and postal code The plugin operates during the data commit phase (`commit:created`), processing records in bulk before they are finalized. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-enrich-geocode ``` ## Configuration & Parameters The plugin is configured by passing a configuration object to the `enrichGeocode` function.
| Parameter | Type | Default | Description | | ---------------- | ------- | ------------- | ------------------------------------------------------------------------- | | `sheetSlug` | string | `"addresses"` | The slug of the sheet you want the plugin to process | | `addressField` | string | `"address"` | The field key/name in your sheet that contains the address to be geocoded | | `latitudeField` | string | `"latitude"` | The field key/name in your sheet for the latitude value | | `longitudeField` | string | `"longitude"` | The field key/name in your sheet for the longitude value | | `autoGeocode` | boolean | `true` | A flag to enable or disable the automatic geocoding process | ### Default Behavior If no configuration is provided, the plugin will attempt to run on a sheet with the slug "addresses", looking for an "address" field to geocode and populating "latitude" and "longitude" fields. ## Usage Examples ### Basic Usage ```javascript JavaScript theme={null} import enrichGeocode from "@flatfile/plugin-enrich-geocode"; export default function (listener) { // Assumes a sheet with slug 'addresses' and fields 'address', 'latitude', 'longitude' listener.use(enrichGeocode({})); } ``` ```typescript TypeScript theme={null} import type { FlatfileListener } from "@flatfile/listener"; import enrichGeocode from "@flatfile/plugin-enrich-geocode"; export default function (listener: FlatfileListener) { // Assumes a sheet with slug 'addresses' and fields 'address', 'latitude', 'longitude' listener.use(enrichGeocode({})); } ``` ### Custom Configuration ```javascript JavaScript theme={null} import enrichGeocode from "@flatfile/plugin-enrich-geocode"; export default function (listener) { listener.use( enrichGeocode({ sheetSlug: 'contacts', addressField: 'full_address', latitudeField: 'lat', longitudeField: 'lon' }) ); } ``` ```typescript TypeScript theme={null} import type { FlatfileListener } from "@flatfile/listener"; import enrichGeocode from "@flatfile/plugin-enrich-geocode"; export default function (listener: FlatfileListener) { listener.use( enrichGeocode({ sheetSlug: 'contacts', addressField: 'full_address', latitudeField: 'lat', longitudeField: 'lon' }) ); } ``` ### Advanced Usage - Direct API Function ```javascript JavaScript theme={null} import { performGeocoding } from "@flatfile/plugin-enrich-geocode"; async function geocodeSingleAddress(address, apiKey) { console.log(`Geocoding address: ${address}`); const result = await performGeocoding({ address }, apiKey); if ('message' in result) { console.error(`Error: ${result.message}`); } else { console.log(`Coordinates: ${result.latitude}, ${result.longitude}`); console.log(`Formatted Address: ${result.formatted_address}`); } } ``` ```typescript TypeScript theme={null} import { performGeocoding } from "@flatfile/plugin-enrich-geocode"; async function geocodeSingleAddress(address: string, apiKey: string) { console.log(`Geocoding address: ${address}`); const result = await performGeocoding({ address }, apiKey); if ('message' in result) { console.error(`Error: ${result.message}`); } else { console.log(`Coordinates: ${result.latitude}, ${result.longitude}`); console.log(`Formatted Address: ${result.formatted_address}`); } } ``` ## API Reference ### enrichGeocode(config) The main entry point for the plugin. Returns a `bulkRecordHook` compatible with the Flatfile listener's `use()` method. 
**Parameters:** * `config` (object): Configuration object with the parameters described above **Returns:** A function that can be passed to `listener.use()` and will be executed on the `commit:created` event. ### performGeocoding(input, apiKey) An async function that makes a direct call to the Google Maps Geocoding API for both forward and reverse geocoding. **Parameters:** * `input` (object): Must contain either: * `address` (string): The address to geocode * `latitude` (number) and `longitude` (number): The coordinates to reverse geocode * `apiKey` (string): Your Google Maps Geocoding API key **Returns:** A Promise that resolves to either: **Success (GeocodingResult):** ```javascript theme={null} { latitude: number, longitude: number, formatted_address: string, country?: string, postal_code?: string } ``` **Failure (GeocodingError):** ```javascript theme={null} { message: string, field: string // 'address', 'coordinates', or 'input' } ``` **Example with Error Handling:** ```javascript JavaScript theme={null} import { performGeocoding } from "@flatfile/plugin-enrich-geocode"; async function findCoordinates(apiKey) { const result = await performGeocoding({ address: "Eiffel Tower" }, apiKey); if ('message' in result) { console.error(`Geocoding failed on field '${result.field}': ${result.message}`); } else { console.log(`Coordinates: ${result.latitude}, ${result.longitude}`); } } ``` ```typescript TypeScript theme={null} import { performGeocoding } from "@flatfile/plugin-enrich-geocode"; async function findCoordinates(apiKey: string) { const result = await performGeocoding({ address: "Eiffel Tower" }, apiKey); if ('message' in result) { console.error(`Geocoding failed on field '${result.field}': ${result.message}`); } else { console.log(`Coordinates: ${result.latitude}, ${result.longitude}`); } } ``` ## Notes ### API Key Requirement A valid Google Maps Geocoding API key is required for this plugin to function. The plugin will look for the key in the environment variables as `GOOGLE_MAPS_API_KEY` or in Flatfile secrets with the name `GOOGLE_MAPS_API_KEY`. ### Data Enrichment and Sheet Configuration Upon successful geocoding, the plugin will attempt to set values for the following fields on the record: `latitude`, `longitude`, `formatted_address`, `country`, and `postal_code`. Your Flatfile Sheet must be configured with these fields to store the enriched data. The latitude and longitude fields are configurable; the others are hardcoded. ### Error Handling Pattern If the Google Maps API returns an error (e.g., `ZERO_RESULTS`, `REQUEST_DENIED`) or if the network request fails, the plugin will not halt the import process. Instead, it will add an error message to the specific record and field that caused the issue using `record.addError()`. This allows users to see which records failed to geocode and why directly in the Flatfile UI. ### Event Trigger The plugin is designed to run on the `listener.on('commit:created')` event. This means it processes data after a user has reviewed their data and clicked the final submit button, but before the data is sent to its final destination. --- # Source: https://flatfile.com/docs/plugins/gpx.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. 
# GPX File Parser and Analyzer > Automatically processes GPX (GPS Exchange Format) files to extract waypoints, tracks, routes, and calculate geographic statistics like distance and elevation gain. This plugin automatically processes GPX (GPS Exchange Format) files attached to records in a Flatfile Sheet. When a commit is created, the plugin triggers on a specified Sheet. It reads GPX data from a designated field in a record, parses the XML content, and extracts waypoints, tracks, and routes. The primary purpose is to enrich records with structured data derived from GPX files. It can convert the geographic data into a tabular format, calculate aggregate statistics like total distance and elevation gain, and extract metadata like the name and description of the activity. Use cases include analyzing fitness activities, processing geographic survey data, or managing collections of GPS routes, where users provide GPX files and the system needs to automatically extract and display key information and statistics. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-enrich-gpx ``` ## Configuration & Parameters The plugin is configured with an object containing the following properties, which map to field slugs in your Sheet: ### Required Parameters * **sheetSlug** (`string`): The slug of the sheet the plugin should operate on. * **gpxFileField** (`string`): The field slug in the source record that contains the GPX file content as a string. ### Optional Parameters * **removeDuplicatesField** (`string`): The field slug that acts as a boolean flag. If the value in this field for a given record is the string "true", duplicate points will be removed from the parsed data. * **filterDatesField** (`string`): The field slug that acts as a boolean flag. If the value in this field for a given record is the string "true", the parsed data will be filtered by a date range. * **startDateField** (`string`): The field slug containing the start date for filtering. This is only used if `filterDatesField` is set to "true". The value should be a valid date string. * **endDateField** (`string`): The field slug containing the end date for filtering. This is only used if `filterDatesField` is set to "true". The value should be a valid date string. ### Default Behavior By default, the plugin will parse the GPX file without removing duplicates or filtering by date. These processing steps are only activated if the corresponding fields (`removeDuplicatesField`, `filterDatesField`) in the record are explicitly set to the string "true". If filtering is enabled but the start or end date fields are empty or invalid, the date filter will not be applied. 
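To illustrate the string-flag convention described above, here is what a record's values might look like with both optional steps enabled. The values are illustrative only, and the field slugs match the usage examples below rather than any plugin defaults:

```typescript theme={null}
// Illustrative record values only - note the flags are the string "true",
// not boolean true, and the dates are plain date strings.
const exampleRecordValues = {
  gpx_file: '<?xml version="1.0"?><gpx version="1.1">...</gpx>',
  remove_duplicates: "true", // string "true" enables deduplication
  filter_dates: "true",      // string "true" enables date filtering
  start_date: "2024-01-01",  // used only while filter_dates is "true"
  end_date: "2024-01-31",
};
```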
## Usage Examples ### Basic Setup ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { enrichGpx } from "@flatfile/plugin-enrich-gpx"; export default function (listener) { listener.use(enrichGpx(listener, { sheetSlug: 'gpx-data', gpxFileField: 'gpx_file', removeDuplicatesField: 'remove_duplicates', filterDatesField: 'filter_dates', startDateField: 'start_date', endDateField: 'end_date' })); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { enrichGpx } from "@flatfile/plugin-enrich-gpx"; export default function (listener: FlatfileListener) { listener.use(enrichGpx(listener, { sheetSlug: 'gpx-data', gpxFileField: 'gpx_file', removeDuplicatesField: 'remove_duplicates', filterDatesField: 'filter_dates', startDateField: 'start_date', endDateField: 'end_date' })); } ``` ### Complete Configuration Example ```javascript JavaScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { enrichGpx } from "@flatfile/plugin-enrich-gpx"; export default function (listener) { const config = { sheetSlug: 'gpx-data', // The sheet to target gpxFileField: 'gpx_file', // Field with GPX string content removeDuplicatesField: 'remove_duplicates', // Field to enable/disable deduplication filterDatesField: 'filter_dates', // Field to enable/disable date filtering startDateField: 'start_date', // Field with the filter start date endDateField: 'end_date' // Field with the filter end date }; listener.use(enrichGpx(listener, config)); } ``` ```typescript TypeScript theme={null} import { FlatfileListener } from "@flatfile/listener"; import { enrichGpx, GpxParserConfig } from "@flatfile/plugin-enrich-gpx"; export default function (listener: FlatfileListener) { const config: GpxParserConfig = { sheetSlug: 'gpx-data', // The sheet to target gpxFileField: 'gpx_file', // Field with GPX string content removeDuplicatesField: 'remove_duplicates', // Field to enable/disable deduplication filterDatesField: 'filter_dates', // Field to enable/disable date filtering startDateField: 'start_date', // Field with the filter start date endDateField: 'end_date' // Field with the filter end date }; listener.use(enrichGpx(listener, config)); } ``` ### Using Utility Functions The plugin exports several utility functions that can be used independently for custom data processing tasks: ```javascript JavaScript theme={null} import { calculateDistance } from "@flatfile/plugin-enrich-gpx"; // Define two waypoints const point1 = { latitude: 40.7128, longitude: -74.0060 }; // New York City const point2 = { latitude: 34.0522, longitude: -118.2437 }; // Los Angeles // Calculate the distance between them const distanceInKm = calculateDistance(point1, point2); console.log(`The distance is approximately ${distanceInKm.toFixed(2)} km.`); // Expected output: The distance is approximately 3944.42 km. ``` ```typescript TypeScript theme={null} import { calculateDistance, Waypoint } from "@flatfile/plugin-enrich-gpx"; // Define two waypoints const point1: Waypoint = { latitude: 40.7128, longitude: -74.0060 }; // New York City const point2: Waypoint = { latitude: 34.0522, longitude: -118.2437 }; // Los Angeles // Calculate the distance between them const distanceInKm: number = calculateDistance(point1, point2); console.log(`The distance is approximately ${distanceInKm.toFixed(2)} km.`); // Expected output: The distance is approximately 3944.42 km. 
``` ## API Reference ### enrichGpx(listener, config) The main plugin entry point that configures and attaches a record hook to a Flatfile listener. **Parameters:** * `listener`: FlatfileListener - The Flatfile listener instance to attach the hook to * `config`: GpxParserConfig - Configuration object containing field mappings **Returns:** void ### calculateDistance(point1, point2) Calculates the great-circle distance between two geographical points using the Haversine formula. **Parameters:** * `point1`: Waypoint - Object with `latitude` and `longitude` properties * `point2`: Waypoint - Object with `latitude` and `longitude` properties **Returns:** number - The distance between the two points in kilometers ### removeDuplicatePoints(points) Removes duplicate waypoints from an array based on latitude, longitude, elevation, and time. **Parameters:** * `points`: Waypoint\[] - Array of waypoint objects **Returns:** Waypoint\[] - New array with duplicate waypoints removed ### filterByDateRange(points, startDate, endDate) Filters waypoints to include only those within a specified date range. **Parameters:** * `points`: Waypoint\[] - Array of waypoint objects with optional `time` property * `startDate`: Date - Start of the date range * `endDate`: Date - End of the date range **Returns:** Waypoint\[] - Filtered array of waypoints ### calculateStatistics(tabularData) Calculates total distance and cumulative positive elevation gain for a path of waypoints. **Parameters:** * `tabularData`: Waypoint\[] - Sorted array of waypoint objects **Returns:** object - Object with `totalDistance` (km) and `elevationGain` (meters) properties ### convertToTabularFormat(waypoints, tracks, routes) Merges separate arrays of waypoints, tracks, and routes into a single, flat array sorted by time. 
**Parameters:** * `waypoints`: Waypoint\[] - Array of waypoint objects * `tracks`: Track\[] - Array of track objects * `routes`: Route\[] - Array of route objects **Returns:** Waypoint\[] - Single, sorted array containing all points ## Troubleshooting ### Common Issues * **Data not processing**: Ensure the `sheetSlug` in the configuration exactly matches the slug of your target Sheet * **Field mapping errors**: Verify that all field slugs in the configuration match the field keys in your Sheet template * **Invalid GPX content**: Check that the `gpxFileField` contains valid GPX XML content * **Date filtering not working**: Ensure the `filterDatesField` contains the string "true" and that date fields contain valid date strings ### Error Handling The plugin includes comprehensive error handling: * **Missing GPX content**: Adds error message 'GPX file content is required' to the `gpxFileField` * **Invalid XML**: Adds generic error message 'Failed to parse GPX content' to the `gpxFileField` * **Processing continues**: Records are always returned to ensure other records can be processed ## Notes ### Important Considerations * The plugin operates on the `commit:created` event using a `recordHook` * GPX data must be provided as a complete XML string within the specified field * Boolean-like options are controlled by the string value "true", not native boolean types * The plugin will overwrite values in target fields with extracted GPX data * Processing steps (deduplication, date filtering) are only activated when explicitly enabled ### Limitations * Does not handle file uploads directly, only processes string content from uploads * Requires valid GPX XML format for successful parsing * Date filtering requires valid date strings in the specified format --- # Source: https://flatfile.com/docs/plugins/graphql-schema.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # GraphQL Schema to Flatfile Blueprint Converter > Automatically generate Flatfile Space configurations by converting GraphQL schemas into Workbooks and Sheets, streamlining data import setup for GraphQL APIs. This plugin automates the creation of a Flatfile Space configuration by converting a GraphQL schema into a Flatfile Blueprint. It introspects a given GraphQL source—which can be a live endpoint URL, a static Schema Definition Language (SDL) string, or a GraphQL.js schema object—and generates corresponding Workbooks and Sheets. The primary purpose is to significantly speed up the setup process for developers who need to import data that conforms to an existing GraphQL API. It maps GraphQL object types to Flatfile Sheets and their fields to Flatfile Fields, automatically handling scalar types, object relationships (as reference fields), and non-null constraints. **Use cases include:** * Rapidly scaffolding a data importer for a headless CMS or backend service that exposes a GraphQL API * Creating a consistent data onboarding experience based on a single source of truth (the GraphQL schema) * Migrating data from other systems into an application with a GraphQL-based data model ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-graphql-schema ``` ## Configuration & Parameters The plugin is configured via a `setupFactory` object passed to the `configureSpaceGraphQL` function. 
### Main Configuration | Parameter | Type | Required | Description | | ----------- | ------------------------- | -------- | ---------------------------------------------------------------------------- | | `workbooks` | `PartialWorkbookConfig[]` | Yes | Array of workbook configuration objects. Each object generates one workbook. | | `space` | `object` | No | Configuration for the Space itself (e.g., metadata, themes). | | `documents` | `object[]` | No | Array of document configurations to add to the Space. | ### PartialWorkbookConfig | Parameter | Type | Default | Description | | --------- | ------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------- | | `source` | `string \| GraphQLSchema \| function` | Required | GraphQL schema source - can be a URL, SDL string, GraphQLSchema object, or function returning a GraphQLSchema | | `sheets` | `PartialSheetConfig[]` | `undefined` | Optional array to filter and customize generated sheets | | `name` | `string` | `'GraphQL Workbook'` | User-friendly name for the workbook | ### PartialSheetConfig | Parameter | Type | Default | Description | | --------- | -------- | ------------------------------------ | ----------------------------------------------------- | | `slug` | `string` | Required | Sheet slug that must match a GraphQL object type name | | `name` | `string` | `capitalCase` of GraphQL object name | User-friendly name for the sheet | ## Usage Examples ### Basic Usage ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' const listener = FlatfileListener.create((listener) => { listener.on('**', (event) => { console.log(`Event: ${event.topic}`) }) listener.use( configureSpaceGraphQL({ workbooks: [ { name: 'SpaceX Launches', source: 'https://spacex-production.up.railway.app/', }, ], }) ) }) ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' const listener = FlatfileListener.create((listener) => { listener.on('**', (event) => { console.log(`Event: ${event.topic}`) }) listener.use( configureSpaceGraphQL({ workbooks: [ { name: 'SpaceX Launches', source: 'https://spacex-production.up.railway.app/', }, ], }) ) }) ``` ### Filtering Sheets ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' const listener = FlatfileListener.create((listener) => { listener.use( configureSpaceGraphQL({ workbooks: [ { name: 'SpaceX Capsules', source: 'https://spacex-production.up.railway.app/', // Only generate a sheet for the 'Capsule' GraphQL object type sheets: [{ slug: 'Capsule' }], }, ], }) ) }) ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' const listener = FlatfileListener.create((listener) => { listener.use( configureSpaceGraphQL({ workbooks: [ { name: 'SpaceX Capsules', source: 'https://spacex-production.up.railway.app/', // Only generate a sheet for the 'Capsule' GraphQL object type sheets: [{ slug: 'Capsule' }], }, ], }) ) }) ``` ### Advanced Configuration ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from 
'@flatfile/plugin-graphql-schema' const listener = FlatfileListener.create((listener) => { listener.use( configureSpaceGraphQL( { workbooks: [ { name: 'Custom SpaceX Workbook', source: 'https://spacex-production.up.railway.app/', sheets: [ { name: 'Capsules', // Custom sheet name slug: 'Capsule', actions: [ { operation: 'dedupe', mode: 'background', label: 'Deduplicate Records', }, ], }, ], }, ], space: { metadata: { theme: { root: { primaryColor: 'blue' }, }, }, }, }, async (event, workbookIds, tick) => { const { spaceId } = event.context console.log('Space configured successfully!', { spaceId, workbookIds }) await tick(100, 'Configuration complete') } ) ) }) ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' import type { FlatfileEvent } from '@flatfile/listener' const listener = FlatfileListener.create((listener) => { listener.use( configureSpaceGraphQL( { workbooks: [ { name: 'Custom SpaceX Workbook', source: 'https://spacex-production.up.railway.app/', sheets: [ { name: 'Capsules', // Custom sheet name slug: 'Capsule', actions: [ { operation: 'dedupe', mode: 'background', label: 'Deduplicate Records', }, ], }, ], }, ], space: { metadata: { theme: { root: { primaryColor: 'blue' }, }, }, }, }, async (event: FlatfileEvent, workbookIds: string[], tick) => { const { spaceId } = event.context console.log('Space configured successfully!', { spaceId, workbookIds }) await tick(100, 'Configuration complete') } ) ) }) ``` ### Using Local Schema File ```javascript JavaScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' import fs from 'fs' import path from 'path' const listener = FlatfileListener.create((listener) => { // Read a GraphQL schema from a local file const schemaSDL = fs.readFileSync( path.join(__dirname, 'schema.graphql'), 'utf8' ) listener.use( configureSpaceGraphQL({ workbooks: [ { name: 'My Local Schema Workbook', source: schemaSDL, }, ], }) ) }) ``` ```typescript TypeScript theme={null} import { FlatfileListener } from '@flatfile/listener' import { configureSpaceGraphQL } from '@flatfile/plugin-graphql-schema' import fs from 'fs' import path from 'path' const listener = FlatfileListener.create((listener) => { // Read a GraphQL schema from a local file const schemaSDL = fs.readFileSync( path.join(__dirname, 'schema.graphql'), 'utf8' ) listener.use( configureSpaceGraphQL({ workbooks: [ { name: 'My Local Schema Workbook', source: schemaSDL, }, ], }) ) }) ``` ## Troubleshooting ### Missing Sheets or Fields Check the console logs for messages about "unsupported" field types or missing reference tables. The plugin logs warnings for non-critical issues. ### Invalid GraphQL Source Ensure that any URL provided in the `source` option points to a valid, publicly accessible GraphQL endpoint that responds to introspection queries. ### Sheet Filtering Issues When using the `sheets` filter, verify that the `slug` for each sheet configuration exactly matches the name of the corresponding GraphQL object type. ### Reference Field Problems If reference fields are not working, confirm that the sheet being referenced is also being generated (i.e., it's included in your `sheets` filter or the filter is omitted entirely). ## Notes ### Default Behavior By default, the plugin introspects the entire GraphQL schema from the provided `source`. 
It creates one sheet for each GraphQL `OBJECT` type, excluding standard types like `Query`, `Mutation`, `Subscription`, and internal types (those starting with `__`). **Field Type Mapping:** * GraphQL scalar types are mapped to Flatfile field types (`Int` → `number`, `String` → `string`) * GraphQL object types are mapped to Flatfile reference fields * If a GraphQL field is `NON_NULL`, a `required` constraint is added ### Supported Field Types The plugin currently supports GraphQL `SCALAR`, `OBJECT`, and `NON_NULL` types. Other GraphQL types like `LIST`, `UNION`, `INTERFACE`, and `ENUM` are not explicitly supported and will be skipped during sheet generation. ### Reference Field Generation For a reference field to be created correctly, the corresponding GraphQL object type must also be generated as a sheet in the same workbook. The plugin automatically selects the first non-reference field from the referenced sheet to use as the relationship key. ### Error Handling The plugin uses console logging to provide feedback. Critical errors, such as failure to fetch or parse the GraphQL schema, will throw an exception, causing the `configureSpace` job to fail. --- # Source: https://flatfile.com/docs/plugins/html-table.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # HTML Table Extractor > Parse HTML files and extract data from tables within them, converting structured data into Flatfile-compatible format This plugin for Flatfile is designed to parse HTML files and extract data from tables within them. Its main purpose is to automatically convert structured data found in HTML `<table>` elements into a format that Flatfile can process. The plugin can handle multiple tables within a single HTML file, creating a separate sheet for each one. It is capable of interpreting complex table layouts that use `colspan` and `rowspan` attributes to merge cells, ensuring the data is correctly aligned. Use cases include importing data from legacy systems that export reports as HTML pages, scraping data from web pages, or processing any structured data provided in an HTML table format. ## Installation Install the plugin using npm: ```bash theme={null} npm install @flatfile/plugin-extract-html-table ``` ## Configuration & Parameters The plugin accepts the following configuration options: | Parameter | Type | Default | Description | | --------------- | ------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | | `handleColspan` | boolean | `true` | When true, the plugin will correctly handle cells with a `colspan` attribute by duplicating the cell's value across the specified number of columns | | `handleRowspan` | boolean | `true` | When true, the plugin will attempt to handle cells with a `rowspan` attribute by carrying the cell's value down into the subsequent rows | | `maxDepth` | number | `3` | Defines the maximum depth for parsing nested tables (Note: not currently implemented) | | `debug` | boolean | `false` | When set to true, the plugin will output detailed logs to the console during the parsing process | ### Default Behavior By default, the plugin processes HTML files with `handleColspan` and `handleRowspan` enabled, meaning it will attempt to correctly structure data from cells that span multiple columns or rows.
Debug logging is disabled, and the nesting depth for tables is notionally set to 3. ## Usage Examples ### Basic Usage ```javascript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { HTMLTableExtractor } from '@flatfile/plugin-extract-html-table'; const listener = new FlatfileListener(); // Use the extractor with default options listener.use(HTMLTableExtractor()); ``` ```typescript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { HTMLTableExtractor } from '@flatfile/plugin-extract-html-table'; const listener = new FlatfileListener(); // Use the extractor with default options listener.use(HTMLTableExtractor()); ``` ### Configuration Example ```javascript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { HTMLTableExtractor } from '@flatfile/plugin-extract-html-table'; const listener = new FlatfileListener(); // Use the extractor with custom options listener.use( HTMLTableExtractor({ handleColspan: true, handleRowspan: false, // Disable rowspan handling debug: true // Enable verbose logging for troubleshooting }) ); ``` ```typescript theme={null} import { FlatfileListener } from '@flatfile/listener'; import { HTMLTableExtractor } from '@flatfile/plugin-extract-html-table'; const listener = new FlatfileListener(); // Use the extractor with custom options listener.use( HTMLTableExtractor({ handleColspan: true, handleRowspan: false, // Disable rowspan handling debug: true // Enable verbose logging for troubleshooting }) ); ``` ### Direct Parser Usage This example shows how to use the parser function directly, outside of a Flatfile listener, to process an HTML file: ```javascript theme={null} import * as fs from 'fs'; import { htmlTableParser } from '@flatfile/plugin-extract-html-table'; // Read an HTML file into a buffer const fileBuffer = fs.readFileSync('path/to/your/table.html'); // Define parser options const options = { handleColspan: true, handleRowspan: true, debug: false }; // Parse the buffer to get structured data try { const workbookData = htmlTableParser(fileBuffer, options); console.log('Extracted Workbook:', JSON.stringify(workbookData, null, 2)); } catch (error) { console.error('An error occurred during parsing:', error); } ``` ```typescript theme={null} import * as fs from 'fs'; import { htmlTableParser } from '@flatfile/plugin-extract-html-table'; // Read an HTML file into a buffer const fileBuffer = fs.readFileSync('path/to/your/table.html'); // Define parser options const options = { handleColspan: true, handleRowspan: true, debug: false }; // Parse the buffer to get structured data try { const workbookData = htmlTableParser(fileBuffer, options); console.log('Extracted Workbook:', JSON.stringify(workbookData, null, 2)); } catch (error) { console.error('An error occurred during parsing:', error); } ``` ### Example with HTML Content ```javascript theme={null} import { htmlTableParser } from '@flatfile/plugin-extract-html-table'; const htmlContent = '<table><tr><th>Name</th><th>Age</th></tr><tr><td>John</td><td>30</td></tr></table>'; const buffer = Buffer.from(htmlContent, 'utf-8'); const workbook = htmlTableParser(buffer, {}); // workbook will be: // { // "Table_1": { // "headers": ["Name", "Age"], // "data": [{ "Name": { "value": "John" }, "Age": { "value": "30" } }] // } // } ``` ```typescript theme={null} import { htmlTableParser } from '@flatfile/plugin-extract-html-table'; const htmlContent = '<table><tr><th>Name</th><th>Age</th></tr><tr><td>John</td><td>30</td></tr></table>'; const buffer = Buffer.from(htmlContent, 'utf-8'); const workbook = htmlTableParser(buffer, {}); // workbook will be: // { // "Table_1": { // "headers": ["Name", "Age"], // "data": [{ "Name": { "value": "John" }, "Age": { "value": "30" } }] // } // } ``` ## Troubleshooting If data is missing or incorrect, enable `debug: true` in the configuration to see a step-by-step log of the parsing process: ```javascript theme={null} listener.use( HTMLTableExtractor({ debug: true }) ); ``` ```typescript theme={null} listener.use( HTMLTableExtractor({ debug: true }) ); ``` Ensure the source HTML file contains well-structured `<table>` elements with `<th>` tags for headers and `<td>` tags for data cells.
The plugin's effectiveness is highly dependent on the quality of the input HTML. ## Notes ### Important Considerations * **Supported File Type**: The plugin is hardcoded to only process files with the `.html` extension * **Event Listener**: It operates on the `listener.on('file:created')` event * **Multiple Tables**: Each `<table>` element found in the HTML document will be extracted into its own separate sheet within the Flatfile workbook. Sheets are named sequentially: `Table_1`, `Table_2`, and so on * **Header Extraction**: Headers are extracted from `<th>` elements. If a table has no `<th>` elements, the `headers` array for that sheet will be empty, and data rows will likely not be mapped correctly ### Limitations * **`maxDepth` Limitation**: The `maxDepth` configuration option is defined in the options type but is not currently implemented in the parsing logic. Nested tables are processed, but their depth is not limited by this setting * **`rowspan` Implementation**: The current implementation for `handleRowspan` may not function as expected because it attempts to re-parse trimmed text content of a cell to find an attribute, which is not possible. This feature should be considered unreliable ### Error Handling * The primary method for diagnosing issues is to set the `debug` option to `true`. This will print detailed logs of the extraction process, including tables found, headers extracted, and cell data * If a data row contains more cells than there are headers, a warning is logged (in debug mode) and the extra cell data is ignored to prevent data misalignment * The underlying HTML parser is generally resilient to malformed HTML, but if the table structure (`<table>`, `<tr>`, `<th>`, `<td>`) is invalid, the function may return an empty object or partially extracted data --- # Source: https://flatfile.com/docs/reference/icons.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Icon Reference > Reference of all available icons Icons may be set for custom action buttons. Here's a list of all available icons that Flatfile supports. ## Usage Example Icons can be used in [Actions](/core-concepts/actions) to provide visual cues for different operations: ```javascript theme={null} const action = { operation: "export-data", label: "Export", icon: "download", description: "Export your data", mode: "background", }; ``` Icons help users quickly identify the purpose of different actions and improve the overall user experience in your Flatfile implementation. ## All icons
add
alertCircle
alertTriangle
arrowLeft
arrowProgress
arrowRight
arrowRotateLeft
arrowsMaximize
arrowUpRight
at
ban
boxLogo
boxOpen
building
calendar
calendarLight
chat
checkmark
checkmarkCircle
checkmarkShield
chevronDoubleDown
chevronDown
chevronRight
clipboardCheck
code
columns
connection
connectionSlash
cross
cubeTransparent
database
diff
documentAdd
documentCopy
dotsVertical
download
edit
enter
eye
eyeSlash
fileCircleInfo
fileCode
filterList
folder
gear
github
googleDrive
grid
history
hourglass
info
key
lightningBolt
link
list
listTimeline
lock
mail
objectGroup
palette
pen
penSlash
pin
planetRinged
plus
questionCircle
search
selector
sidebar
sink
spaceShip
sparkles
table
tableDot
trash
truck
ufo
upload
user
userAdd
userGroup
userRemove
userSecret
wandSparkles
xCircle
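As a quick illustration of where these icon names go, here is a sketch of a sheet-level Action using one of the icons above. The sheet, fields, and operation name are made up for the example — only the `icon` value comes from the reference list:

```typescript theme={null}
// Hypothetical sheet configuration - illustrative only.
const contactsSheet = {
  name: "Contacts",
  slug: "contacts",
  fields: [{ key: "email", type: "string", label: "Email" }],
  actions: [
    {
      operation: "dedupe-contacts",
      label: "Deduplicate",
      icon: "sparkles", // any icon name from the list above
      mode: "background",
      description: "Remove duplicate contact records",
    },
  ],
};
```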
--- # Source: https://flatfile.com/docs/index.md > ## Documentation Index > Fetch the complete documentation index at: https://flatfile.com/docs/llms.txt > Use this file to discover all available pages before exploring further. # Welcome to Flatfile If you're ready to dive right in, start here: * Build your first data import experience in minutes using our powerful AI tooling * Build your first data import experience with code using our robust, event-driven SDK ## What is Flatfile? Flatfile is an AI-powered platform that eliminates the weeks and months typically spent migrating customer data, cutting project timelines and labor hours by over 70%. Build seamless data onboarding experiences with universal file support, effortless mapping, intelligent formatting, and collaborative resolution tools.