3. Select your connector from the available list (e.g., Snowflake, BigQuery, PostgreSQL).
***
## Step 2: Provide Connection Details
Each data source requires standard connection credentials. These typically include:
* **Source Name** – A descriptive label for your reference.
* **Host / Server URL** – Address of the database or data warehouse.
* **Port Number** – Default or custom port for the connection.
* **Database Name** – The name of the database you want to access.
* **Authentication Type** – Options like password-based, token, or OAuth.
* **Username & Password / Token** – Credentials for access.
* **Schema (if applicable)** – Filter down to the relevant database schema.
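If you want to sanity-check these values before entering them, you can usually do so with the database's own client. A minimal sketch for a PostgreSQL source (host, database, and credentials are placeholders):

```bash
# Verify the connection details with psql before entering them in the UI
psql "host=db.example.com port=5432 dbname=analytics user=readonly password=<your-password>" \
  -c "SELECT 1;"
```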
***
## Step 3: Test the Connection
Click **"Test Connection"** to validate that your source credentials are correct and the system can access the data.
> ⚠️ Common issues include invalid credentials, incorrect hostnames, or firewall rules blocking access.
***
## Step 4: Save the Source
After successful testing:
* Click **Finish** to finalize the connection.
* The source will now appear under **Data Sources** in your account.
***
## Step 5: Next Steps β Use the Source
Once added, your data source can be used to:
* Create **Data Models** (via SQL editor, dbt, or table selector)
* Build **Syncs** to move transformed data into downstream destinations
* Enable AI apps to reference live or transformed business data
> Refer to the [Data Modeling](../data-activation/data-modelling) section to begin querying your connected source.
***
## You're All Set!
Your data source is now ready for activation. Use it to power AI pipelines, syncs, and application-level insights.
---
# Source: https://docs.squared.ai/guides/destinations/retl-destinations/productivity-tools/airtable.md
> ## Documentation Index
> Fetch the complete documentation index at: https://docs.squared.ai/llms.txt
> Use this file to discover all available pages before exploring further.
# Airtable
### Overview
Airtable combines the simplicity of a spreadsheet with the complexity of a database. This cloud-based platform enables users to organize work, manage projects, and automate workflows in a customizable and collaborative environment.
### Prerequisite Requirements
Ensure you have created an Airtable account before you begin. Sign up [here](https://airtable.com/signup) if you haven't already.
### Setup
1. **Generate a Personal Access Token**
Start by generating a personal access token. Follow the guide [here](https://airtable.com/developers/web/guides/personal-access-tokens) for instructions.
2. **Grant Required Scopes**
Assign the following scopes to your token for the necessary permissions:
* `data.records:read`
* `data.records:write`
* `schema.bases:read`
* `schema.bases:write`
---
# Source: https://docs.squared.ai/deployment-and-security/setup/aks.md
# Azure AKS (Kubernetes)
## Deploying Multiwoven on Azure Kubernetes Service (AKS)
This guide will walk you through setting up Multiwoven on AKS. We'll cover configuring and deploying an AKS cluster, after which you can refer to the Helm Charts section of our guide to install Multiwoven into it.
**Prerequisites**
* An active Azure subscription
* Basic knowledge of Kubernetes and Helm
**Note:** AKS clusters are not free. Please refer to [https://azure.microsoft.com/en-us/pricing/details/kubernetes-service/#pricing](https://azure.microsoft.com/en-us/pricing/details/kubernetes-service/#pricing) for current pricing information.
**1. AKS Cluster Deployment:**
1. **Select a Resource Group for your deployment:**
* Navigate to your Azure subscription and select a Resource Group or, if necessary, start by creating a new Resource Group.
2. **Initiate AKS Deployment**
* Select the **Create +** button at the top of the overview section of your Resource Group, which will take you to the Azure Marketplace.
* In the Azure Marketplace, type **aks** into the search field at the top. Select **Azure Kubernetes Service (AKS)** and click **Create**.
3. **Configure your AKS Cluster**
* **Basics**
* For **Cluster Preset Configuration**, we recommend **Dev/Test** for Development deployments.
* For **Resource Group**, select your Resource Group.
* For **AKS Pricing Tier**, we recommend **Standard**.
* For **Kubernetes version**, we recommend sticking with the current **default**.
* For **Authentication and Authorization**, we recommend **Local accounts with Kubernetes RBAC** for simplicity.
* **Node Pools**
* Leave defaults
* **Networking**
* For **Network Configuration**, we recommend the **Azure CNI** network configuration for simplicity.
* For **Network Policy**, we recommend **Azure**.
* **Integrations**
* Leave defaults
* **Monitoring**
* Leave defaults, however, to reduce costs, you can uncheck **Managed Prometheus** which will automatically uncheck **Managed Grafana**.
* **Advanced**
* Leave defaults
* **Tags**
* Add tags if necessary, otherwise, leave defaults.
* **Review + Create**
* If validation errors arise during the review, such as a missed mandatory field, address them and create. If there are no validation errors, proceed to create.
* Wait for your deployment to complete before proceeding.
4. **Connecting to your AKS Cluster**
* In the **Overview** section of your AKS cluster, there is a **Connect** button at the top. Choose whichever method suits you best and follow the on-screen instructions. Make sure to run at least one of the test commands to verify that your kubectl commands are being run against your new AKS cluster (see the example at the end of this section).
5. **Deploying Multiwoven**
* Please refer to the **Helm Charts** section of our guide to proceed with your installation of Multiwoven!
[Helm Chart Deployment Guide](https://docs.squared.ai/open-source/guides/setup/helm)
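If you connected using the Azure CLI method in step 4, a quick sanity check might look like the following (resource group and cluster names are placeholders):

```bash
# Fetch kubeconfig credentials for the new cluster
az aks get-credentials --resource-group <your-resource-group> --name <your-aks-cluster>

# Confirm kubectl is pointed at the AKS cluster
kubectl get nodes
```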
---
# Source: https://docs.squared.ai/guides/sources/data-sources/amazon_s3.md
# Source: https://docs.squared.ai/guides/destinations/retl-destinations/file-storage/amazon_s3.md
# Amazon S3
## Connect AI Squared to Amazon S3
This guide will help you configure the Amazon S3 Connector in AI Squared to access and transfer data to your S3 bucket.
### Prerequisites
Before proceeding, ensure you have the necessary personal access key, secret access key, region, bucket name, and file path from your S3 account.
## Step-by-Step Guide to Connect to Amazon S3
## Step 1: Navigate to AWS Console
Start by logging into your AWS Management Console.
1. Sign in to your AWS account at [AWS Management Console](https://aws.amazon.com/console/).
## Step 2: Locate AWS Configuration Details
Once you're in the AWS console, you'll find the necessary configuration details:
1. **Access Key and Secret Access Key:**
* Click on your username at the top right corner of the AWS Management Console.
* Choose "Security Credentials" from the dropdown menu.
* In the "Access keys" section, you can create or view your access keys.
* If you haven't created an access key pair before, click on "Create access key" to generate a new one. Make sure to copy the Access Key ID and Secret Access Key as they are shown only once.
2. **Region:**
* The AWS region can be selected from the top right corner of the AWS Management Console. Choose the region where your S3 resources are located and note it down.
3. **Bucket Name:**
* The S3 bucket name can be found by selecting "General purpose buckets" in the left-hand panel of the S3 console. From there, select the bucket you want to use and note down its name.
4. **File Path**
* After selecting your S3 bucket, you can create a folder where you want your files to be stored or use an existing one.
## Step 3: Configure Amazon S3 Connector in Your Application
Now that you have gathered all the necessary details, enter the following information in your application:
* **Personal Access Key:** Your AWS IAM user's Access Key ID.
* **Secret Access Key:** The corresponding Secret Access Key.
* **Region:** The AWS region where your S3 bucket is located.
* **Bucket Name:** The Amazon S3 Bucket you want to access.
* **File Path:** The Path to the directory where files will be written.
* **File Name:** The Name of the file to be written.
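Before entering these values, you can optionally confirm that the credentials can reach the bucket using the AWS CLI, if you have it installed. A minimal sketch (all values are placeholders):

```bash
# Export the same credentials you plan to enter in the connector
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="<your-region>"

# List the target path to confirm the credentials can reach the bucket
aws s3 ls "s3://<your-bucket-name>/<your-file-path>/"
```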
## Step 4: Test the Amazon S3 Connection
After configuring the connector in your application:
1. Save the configuration settings.
2. Test the connection to Amazon S3 from your application to ensure everything is set up correctly.
By following these steps, you've successfully set up an Amazon S3 destination connector in AI Squared. You can now efficiently transfer data to your Amazon S3 endpoint for storage or further distribution within AI Squared.
### Supported sync modes
| Mode | Supported (Yes/No/Coming soon) |
| ---------------- | ------------------------------ |
| Incremental sync | YES |
| Full refresh | Coming soon |
---
# Source: https://docs.squared.ai/guides/destinations/retl-destinations/analytics/amplitude.md
# Amplitude
---
# Source: https://docs.squared.ai/activation/ai-ml-sources/anthropic-model.md
# Anthropic Model
## Connect AI Squared to Anthropic Model
This guide will help you configure the Anthropic Model Connector in AI Squared to access your Anthropic Model Endpoint.
### Prerequisites
Before proceeding, ensure you have the necessary API key from Anthropic.
## Step-by-Step Guide to Connect to an Anthropic Model Endpoint
## Step 1: Navigate to Anthropic Console
Start by logging into your Anthropic Console.
1. Sign in to your Anthropic account at [Anthropic](https://console.anthropic.com/dashboard).
## Step 2: Locate API keys
Once you're in the Anthropic Console, you'll find the necessary configuration details:
1. **API Key:**
* Click on "API keys" to view your API keys.
* If you haven't created an API key before, click on "Create API key" to generate a new one. Make sure to copy the API key, as it is shown only once.
## Step 3: Configure Anthropic Model Connector in Your Application
Now that you have gathered all the necessary details, enter the following information:
* **API Key:** Your Anthropic API key.
## Sample Request and Response
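A minimal sketch of a request to the Anthropic Messages API, which you can use to verify your key (the model name and prompt are illustrative):

```bash
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-haiku-20240307",
    "max_tokens": 32,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

A successful response returns a JSON body with a `content` array containing the model's reply.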
2. **Region:**
* The AWS region can be selected from the top right corner of the AWS Management Console. Choose the region where your AWS Bedrock resources are located and note down the region.
3. **Inference Profile ARN:**
* The Inference Profile ARN can be found on the Cross-region inference page under your selected model.
4. **Model ID:**
* The AWS Model ID can be found in the catalog entry for your selected model.
## Step 3: Configure AWS Bedrock Model Connector in Your Application
Now that you have gathered all the necessary details, enter the following information:
* **Access Key ID:** Your AWS IAM user's Access Key ID.
* **Secret Access Key:** The corresponding Secret Access Key.
* **Region:** The AWS region where your Bedrock models are located.
* **Inference Profile ARN:** Inference Profile ARN for Model in AWS Bedrock.
* **Model ID:** The Model ID.
## Sample Request and Response
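A minimal sketch using the AWS CLI's Bedrock runtime (the model ID and request body are illustrative; your model may expect a different payload format):

```bash
# Invoke the model once to confirm the credentials, region, and model ID work
aws bedrock-runtime invoke-model \
  --model-id "anthropic.claude-3-haiku-20240307-v1:0" \
  --content-type "application/json" \
  --body '{"anthropic_version": "bedrock-2023-05-31", "max_tokens": 32, "messages": [{"role": "user", "content": "Hello"}]}' \
  --cli-binary-format raw-in-base64-out \
  output.json

cat output.json
```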
2. **Region:**
* The AWS region can be selected from the top right corner of the AWS Management Console. Choose the region where your AWS Sagemaker resources are located and note down the region.
## Step 3: Configure AWS Sagemaker Model Connector in Your Application
Now that you have gathered all the necessary details, enter the following information:
* **Access Key ID:** Your AWS IAM user's Access Key ID.
* **Secret Access Key:** The corresponding Secret Access Key.
* **Region:** The AWS region where your Sagemaker resources are located.
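As a quick sanity check, you can invoke your endpoint with the AWS CLI before connecting it (the endpoint name and payload are illustrative; your model's expected input format may differ):

```bash
# Invoke the SageMaker endpoint once to confirm credentials and region
aws sagemaker-runtime invoke-endpoint \
  --endpoint-name "<your-endpoint-name>" \
  --content-type "application/json" \
  --body '{"instances": [[1.0, 2.0, 3.0]]}' \
  --cli-binary-format raw-in-base64-out \
  output.json

cat output.json
```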
---
# Source: https://docs.squared.ai/faqs/billing-and-account.md
# Billing & Account
---
# Source: https://docs.squared.ai/guides/sources/data-sources/bquery.md
# Google Big Query
## Connect AI Squared to BigQuery
This guide will help you configure the BigQuery Connector in AI Squared to access and use your BigQuery data.
### Prerequisites
Before you begin, you'll need to:
1. **Enable BigQuery API and Locate Dataset(s):**
* Log in to the [Google Developers Console](https://console.cloud.google.com/apis/dashboard).
* If you don't have a project, create one.
* Enable the [BigQuery API for your project](https://console.cloud.google.com/flows/enableapi?apiid=bigquery&_ga=2.71379221.724057513.1673650275-1611021579.1664923822&_gac=1.213641504.1673650813.EAIaIQobChMIt9GagtPF_AIVkgB9Ch331QRREAAYASAAEgJfrfD_BwE).
* Copy your Project ID.
* Find the Project ID and Dataset ID of your BigQuery datasets. You can find this by querying the `INFORMATION_SCHEMA.SCHEMATA` view or by visiting the Google Cloud web console.
2. **Create a Service Account:**
* Follow the instructions in our [Google Cloud Provider (GCP) documentation](https://cloud.google.com/iam/docs/service-accounts-create) to create a service account.
3. **Grant Access:**
* In the Google Cloud web console, navigate to the [IAM](https://console.cloud.google.com/iam-admin/iam?supportedpurview=project,folder,organizationId) & Admin section and select IAM.
* Find your service account and click on edit.
* Go to the "Assign Roles" tab and click "Add another role".
* Search and select the "BigQuery User" and "BigQuery Data Viewer" roles.
* Click "Save".
4. **Download JSON Key File:**
* In the Google Cloud web console, navigate to the [IAM](https://console.cloud.google.com/iam-admin/iam?supportedpurview=project,folder,organizationId) & Admin section and select IAM.
* Find your service account and click on it.
* Go to the "Keys" tab and click "Add Key".
* Select "Create new key" and choose JSON format.
* Click "Download".
### Steps
### Authentication
Authentication is supported via the following:
* **Dataset ID and JSON Key File**
* **[Dataset ID](https://cloud.google.com/bigquery/docs/datasets):** The ID of the dataset within Google BigQuery that you want to access. This can be found in Step 1.
* **[JSON Key File](https://cloud.google.com/iam/docs/keys-create-delete):** The JSON key file containing the authentication credentials for your service account.
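You can optionally confirm that the service account and key file work before entering them, assuming you have the Google Cloud CLI installed (project ID and key file path are placeholders):

```bash
# Authenticate as the service account using the downloaded JSON key
gcloud auth activate-service-account --key-file=./service-account-key.json

# List datasets in the project to confirm BigQuery access
bq ls --project_id=<your-project-id>
```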
### Supported sync modes
| Mode | Supported (Yes/No/Coming soon) |
| ---------------- | ------------------------------ |
| Incremental sync | YES |
| Full refresh | Coming soon |
---
# Source: https://docs.squared.ai/guides/destinations/retl-destinations/marketing-automation/braze.md
# Braze
---
# Source: https://docs.squared.ai/api-reference/connector_definitions/check_connection.md
# Check Connection
## OpenAPI
````yaml POST /api/v1/connector_definitions/check_connection
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/connector_definitions/check_connection:
post:
tags:
- Connector Definitions
summary: Checks the connection for a specified connector definition
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
type:
type: string
enum:
- source
- destination
name:
type: string
connection_spec:
type: object
description: >-
Generic connection specification structure. Specifics depend
on the connector type.
additionalProperties: true
responses:
'200':
description: Connection check successful
content:
application/json:
schema:
type: object
properties:
result:
type: string
enum:
- success
- failure
details:
type: string
additionalProperties: false
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
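For reference, a request to this endpoint might look like the following sketch (the `connection_spec` fields are illustrative and depend on the connector; `$API_TOKEN` is your bearer token):

```bash
curl -X POST "https://api.squared.ai/api/v1/connector_definitions/check_connection" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "source",
    "name": "Postgresql",
    "connection_spec": {
      "host": "db.example.com",
      "port": 5432,
      "database": "analytics",
      "username": "readonly",
      "password": "<your-password>"
    }
  }'
```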
---
# Source: https://docs.squared.ai/guides/destinations/retl-destinations/marketing-automation/clevertap.md
# CleverTap
---
# Source: https://docs.squared.ai/guides/sources/data-sources/clickhouse.md
# ClickHouse
## Connect AI Squared to ClickHouse
This guide will help you configure the ClickHouse Connector in AI Squared to access and use your ClickHouse data.
### Prerequisites
Before proceeding, ensure you have the necessary URL, username, and password from ClickHouse.
## Step-by-Step Guide to Connect to ClickHouse
## Step 1: Navigate to ClickHouse Console
Start by logging into your ClickHouse Management Console and navigating to the ClickHouse service.
1. Sign in to your ClickHouse account at [ClickHouse](https://clickhouse.com/).
2. In the ClickHouse console, select the service you want to connect to.
## Step 2: Locate ClickHouse Configuration Details
Once you're in the ClickHouse console, you'll find the necessary configuration details:
1. **HTTP Interface URL:**
* Click on the "Connect" button in your ClickHouse service.
* In "Connect with" select HTTPS.
* Find the HTTP interface URL, which typically looks like `https://<your-host>:8443` (ClickHouse Cloud serves the HTTPS interface on port 8443 by default).
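You can optionally verify the URL and credentials with curl before entering them (host, username, and password are placeholders):

```bash
# Run a trivial query over the ClickHouse HTTP(S) interface
curl --user "default:<your-password>" \
  --data-binary "SELECT 1" \
  "https://<your-host>:8443/"
```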
## Sources: The Foundation of Data
### Overview
Sources are the starting points of your data journey: where all your data is stored and where AI Squared pulls data from.
These can be:
* **Data Warehouses**: For example, `Snowflake`, `Google BigQuery`, and `Amazon Redshift`
* **Databases and Files**: Including traditional databases, `CSV` files, and `SFTP`
### Adding a Source
To integrate a source with AI Squared, navigate to the Sources overview page and select 'Add source'.
## Destinations: Where Data Finds Value
### Overview
'Destinations' in AI Squared are business tools where you want to send your data stored in sources.
These can be:
* **CRM Systems**: Like Salesforce, HubSpot, etc.
* **Advertising Platforms**: Such as Google Ads, Facebook Ads, etc.
* **Marketing Tools**: Braze and Klaviyo, for example
### Integrating a Destination
Add a destination by going to the Destinations page and clicking 'Add destination'.
## Models: Shaping Your Data
### Overview
'Models' in AI Squared determine the data you wish to sync from a source to a destination. They are the building blocks of your data pipeline.
They can be defined through:
* **SQL Editor**: For customized queries
* **Visual Table Selector**: For an intuitive interface
* **Existing dbt Models or Looker Looks**: Leveraging pre-built models
### Importance of a Unique Primary Key
Every model must have a unique primary key to ensure each data entry is distinct, crucial for data tracking and updating.
## Syncs: Customizing Data Flow
### Overview
'Syncs' in AI Squared help you move data from sources to destinations by mapping the data from your models to the destination.
There are two types of syncs:
* **Full Refresh Sync**: All data is synced from the source to the destination.
* **Incremental Sync**: Only the new or updated data is synced.
---
# Source: https://docs.squared.ai/activation/data-apps/visualizations/create-data-app.md
# Create a Data App
> Step-by-step guide to building and configuring a Data App in AI Squared.
A **Data App** allows you to visualize and embed AI model predictions into business applications. This guide walks through the setup steps to publish your first Data App using a connected AI/ML model.
***
## Step 1: Select a Model
1. Navigate to **Data Apps** from the sidebar.
2. Click **Create New Data App**.
3. Select the AI model you want to connect from the dropdown list.
* Only models with input and output schemas defined will appear here.
***
## Step 2: Choose Display Type
Choose how the AI output will be displayed:
* **Table**: For listing multiple rows of output
* **Bar Chart** / **Pie Chart**: For aggregate or category-based insights
* **Text Card**: For single prediction or summary output
Each display type supports basic customization (e.g., column order, labels, units).
***
## Step 3: Customize Appearance
You can optionally style the Data App to match your brand:
* Modify font styles, background colors, and borders
* Add custom labels or tooltips
* Choose dark/light mode compatibility
> Custom CSS is not supported; visual changes are made through the built-in configuration options.
***
## Step 4: Configure Feedback (Optional)
Enable in-app feedback collection for business users interacting with the app:
* **Thumbs Up / Down**
* **Rating Scale (1–5, configurable)**
* **Text Comments**
* **Predefined Options (Multi-select)**
Feedback will be collected and visible under **Reports > Data Apps Reports**.
***
## Step 5: Save & Preview
1. Click **Save** to create the Data App.
2. Use the **Preview** mode to validate how the results and layout look.
3. If needed, go back to edit layout or display type.
***
## Next Steps
* [Embed in Business Apps](../embed-in-business-apps): Learn how to add the Data App to CRMs or other tools.
* [Feedback & Ratings](../feedback-and-ratings): Set up capture options and monitor usage.
---
# Source: https://docs.squared.ai/api-reference/models/create-model.md
# Create Model
## OpenAPI
````yaml POST /api/v1/models
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/models:
post:
tags:
- Models
summary: Creates a model
parameters: []
requestBody:
content:
application/json:
schema:
type: object
properties:
model:
type: object
properties:
name:
type: string
description:
type: string
query:
type: string
query_type:
type: string
configuration:
type: object
primary_key:
type: string
connector_id:
type: integer
required:
- connector_id
- name
- query_type
responses:
'201':
description: Model created
content:
application/json:
schema:
type: object
properties:
data:
type: object
properties:
id:
type: string
type:
type: string
attributes:
type: object
properties:
name:
type: string
description:
type: string
query:
type: string
query_type:
type: string
configuration:
type: object
primary_key:
type: string
connector_id:
type: integer
created_at:
type: string
format: date-time
updated_at:
type: string
format: date-time
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
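A sketch of a request to this endpoint (the field values, including the `query_type`, are illustrative):

```bash
curl -X POST "https://api.squared.ai/api/v1/models" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": {
      "name": "Active Users",
      "description": "Users active in the last 30 days",
      "query": "SELECT id, email FROM users",
      "query_type": "raw_sql",
      "primary_key": "id",
      "connector_id": 6
    }
  }'
```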
---
# Source: https://docs.squared.ai/api-reference/catalogs/create_catalog.md
# Create Catalog
## OpenAPI
````yaml POST /api/v1/catalogs
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/catalogs:
post:
tags:
- Catalogs
summary: Create catalog
operationId: createCatalog
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
connector_id:
type: integer
example: 6
catalog:
type: object
properties:
json_schema:
type: object
example:
key: value
responses:
'200':
description: Successful response
content:
application/json:
schema:
type: object
properties:
id:
type: integer
example: 123
connector_id:
type: integer
example: 6
workspace_id:
type: integer
example: 2
catalog:
type: object
properties:
json_schema:
type: object
example:
key: value
created_at:
type: string
format: date-time
example: '2023-08-20T15:28:00Z'
updated_at:
type: string
format: date-time
example: '2023-08-20T15:28:00Z'
'400':
description: Bad Request
'401':
description: Unauthorized
'500':
description: Internal Server Error
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
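A sketch of a request to this endpoint, mirroring the example values in the schema above (the `json_schema` contents depend on your connector):

```bash
curl -X POST "https://api.squared.ai/api/v1/catalogs" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "connector_id": 6,
    "catalog": {
      "json_schema": {"key": "value"}
    }
  }'
```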
---
# Source: https://docs.squared.ai/api-reference/connectors/create_connector.md
# Create Connector
## OpenAPI
````yaml POST /api/v1/connectors
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/connectors:
post:
tags:
- Connectors
summary: Creates a connector
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
connector:
type: object
properties:
name:
type: string
connector_type:
type: string
enum:
- source
- destination
connector_name:
type: string
configuration:
type: object
description: >-
Configuration details for the connector. Structure
depends on the connector definition.
additionalProperties: true
required:
- name
- connector_type
- connector_name
- configuration
responses:
'201':
description: Connector created
content:
application/json:
schema:
type: object
properties:
data:
type: object
properties:
id:
type: string
type:
type: string
attributes:
type: object
properties:
name:
type: string
connector_type:
type: string
workspace_id:
type: integer
created_at:
type: string
format: date-time
updated_at:
type: string
format: date-time
configuration:
type: object
description: Specific configuration of the created connector.
additionalProperties: true
connector_name:
type: string
icon:
type: string
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
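A sketch of a request to this endpoint (the `configuration` structure depends on the connector definition; the values shown are illustrative):

```bash
curl -X POST "https://api.squared.ai/api/v1/connectors" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "connector": {
      "name": "My Warehouse",
      "connector_type": "source",
      "connector_name": "Snowflake",
      "configuration": {
        "host": "<your-account>.snowflakecomputing.com",
        "warehouse": "COMPUTE_WH"
      }
    }
  }'
```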
---
# Source: https://docs.squared.ai/api-reference/syncs/create_sync.md
# Create Sync
## OpenAPI
````yaml POST /api/v1/syncs
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/syncs:
post:
tags:
- Syncs
summary: Create a new sync operation
operationId: createSync
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
sync:
type: object
properties:
source_id:
type: integer
destination_id:
type: integer
model_id:
type: integer
schedule_type:
type: string
enum:
- automated
configuration:
type: object
additionalProperties: true
stream_name:
type: string
sync_mode:
type: string
enum:
- full_refresh
sync_interval:
type: integer
sync_interval_unit:
type: string
enum:
- minutes
cron_expression:
type: string
cursor_field:
type: string
required:
- source_id
- destination_id
- model_id
- schedule_type
- configuration
- stream_name
- sync_mode
- sync_interval
- sync_interval_unit
responses:
'200':
description: Sync operation created successfully
content:
application/json:
schema:
type: object
properties:
id:
type: string
type:
type: string
enum:
- syncs
attributes:
type: object
properties:
source_id:
type: integer
destination_id:
type: integer
model_id:
type: integer
configuration:
type: object
additionalProperties: true
schedule_type:
type: string
enum:
- automated
sync_mode:
type: string
enum:
- full_refresh
sync_interval:
type: integer
sync_interval_unit:
type: string
enum:
- minutes
cron_expression:
type: string
cursor_field:
type: string
stream_name:
type: string
status:
type: string
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
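A sketch of a request to this endpoint (the IDs, the field mapping in `configuration`, and the stream name are illustrative):

```bash
curl -X POST "https://api.squared.ai/api/v1/syncs" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "sync": {
      "source_id": 1,
      "destination_id": 2,
      "model_id": 3,
      "schedule_type": "automated",
      "configuration": {"email": "email", "name": "name"},
      "stream_name": "contacts",
      "sync_mode": "full_refresh",
      "sync_interval": 30,
      "sync_interval_unit": "minutes"
    }
  }'
```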
---
# Source: https://docs.squared.ai/guides/data-modelling/sync-modes/cursor-incremental.md
# Incremental - Cursor Field
> Incremental Cursor Field sync transfers only new or updated data, minimizing data transfer using a cursor field.
### Overview
Default Incremental Sync fetches all records from the source system and transfers only the new or updated ones to the destination. To optimize data transfer and reduce duplicate fetches from the source, we implemented Incremental Sync with Cursor Field for sources that support cursor fields.
#### Cursor Field
A Cursor Field must be clearly defined within the dataset schema. It is identified based on its suitability for comparison and tracking changes over time.
* It serves as a marker to identify modified or added records since the previous sync.
* It facilitates efficient data retrieval by enabling the source to resume from where it left off during the last sync.
Note: Currently, only date fields are supported as Cursor Fields.
#### Sync Run 1
During the first sync run with the cursor field 'UpdatedAt', suppose we have the following data:
The cursor field `UpdatedAt` value is `2024-04-20 10:00:00`.
| Name | Plan | Updated At |
| ---------------- | ---- | ------------------- |
| Charles Beaumont | free | 2024-04-20 10:00:00 |
| Eleanor Villiers | free | 2024-04-20 11:00:00 |
During this sync run, both Charles Beaumont's and Eleanor Villiers' records meet the criteria, since both have an 'UpdatedAt' timestamp of '2024-04-20 10:00:00' or later, so both records are fetched.
##### Query
```sql theme={null}
SELECT * FROM source_table
WHERE updated_at >= '2024-04-20 10:00:00';
```
#### Sync Run 2
Now the cursor field `UpdatedAt` value is `2024-04-20 11:00:00`.
Suppose after some time, there are further updates in the source data:
| Name | Plan | Updated At |
| ---------------- | ---- | ------------------- |
| Charles Beaumont | free | 2024-04-20 10:00:00 |
| Eleanor Villiers | paid | 2024-04-21 10:00:00 |
During the second sync run with the same cursor field, only Eleanor Villiers' record, whose 'Updated At' timestamp falls after the last sync, is fetched, ensuring minimal data transfer.
##### Query
```sql theme={null}
SELECT * FROM source_table
WHERE updated_at >= '2024-04-20 11:00:00';
```
#### Sync Run 3
If there are additional updates in the source data:
Now the cursor field `UpdatedAt` value is `2024-04-21 10:00:00`.
| Name | Plan | Updated At |
| ---------------- | ---- | ------------------- |
| Charles Beaumont | paid | 2024-04-22 08:00:00 |
| Eleanor Villiers | pro | 2024-04-22 09:00:00 |
During the third sync run with the same cursor field, only the records for Charles Beaumont and Eleanor Villiers with 'Updated At' timestamp after the last sync would be fetched, continuing the process of minimal data transfer.
##### Query
```sql theme={null}
SELECT * FROM source_table
WHERE updated_at >= '2024-04-21 10:00:00';
```
### Handling Ambiguity and Inclusive Cursors
When syncing data incrementally, we guarantee at-least-once delivery. Limited cursor field granularity may cause sources to resend previously sent data. For example, if a cursor only tracks dates, distinguishing new from old data within the same day becomes unclear.
#### Scenario
Imagine sales transactions with a cursor field `transaction_date`. If we sync on April 1st and later sync again on the same day, distinguishing new transactions becomes ambiguous. To mitigate this, we guarantee at-least-once delivery, allowing sources to resend data as needed.
### Known Limitations
Modifications to underlying records without updating the cursor field may result in updated records not being picked up by the Incremental sync as expected.
Editing or removing the cursor field can break change tracking and cause data loss, so don't change or remove the cursor field once syncs are running.
---
# Source: https://docs.squared.ai/faqs/data-and-ai-integration.md
# Data & AI Integration
This section addresses frequently asked questions when connecting data sources, setting up AI/ML model endpoints, or troubleshooting integration issues within AI Squared.
***
## Data Source Integration
### Why is my data source connection failing?
* Verify that the connection credentials (e.g., host, port, username, password) are correct.
* Ensure that the network/firewall rules allow connections to AI Squared's IPs (for on-prem data).
* Check if the database is online and reachable.
### What formats are supported for ingesting data?
* AI Squared supports connections to major databases like Snowflake, BigQuery, PostgreSQL, Oracle, and more.
* Files such as CSV, Excel, and JSON can be ingested via SFTP or cloud storage (e.g., S3).
***
## AI/ML Model Integration
### How do I connect my hosted model?
* Use the [Add AI/ML Source](/activation/ai-modelling/connect-source) guide to define your model endpoint.
* Provide input/output schema details so the platform can handle data mapping effectively.
### What types of model endpoints are supported?
* REST-based hosted models with JSON payloads
* Hosted services like AWS SageMaker, Vertex AI, and custom HTTP endpoints
***
## Sync & Schema Issues
### Why is my sync failing?
* Confirm that your data model and sync mapping are valid
* Check that input types in your model schema match your data source fields
* Review logs for any missing fields or payload mismatches
### How can I test if my connection is working?
* Use the "Test Connection" button when setting up a new source or sync.
* If testing fails, examine error messages and retry with updated configs.
***
---
# Source: https://docs.squared.ai/faqs/data-apps.md
# Data Apps
---
# Source: https://docs.squared.ai/guides/sources/data-sources/databricks-model.md
# Databricks Model
### Overview
AI Squared enables you to transfer data from a Databricks Model to various destinations or data apps. This guide explains how to obtain your Databricks Model URL and connect to AI Squared using your credentials.
### Setup
## Step 3: Test the Databricks Connection
After configuring the connector in your application:
1. Save the configuration settings.
2. Test the connection to Databricks from the AI Squared platform to ensure a connection is made.
By following these steps, you've successfully set up a Databricks destination connector in AI Squared. You can now efficiently transfer data to your Databricks endpoint for storage or further distribution within AI Squared.
### Supported sync modes
| Mode | Supported (Yes/No/Coming soon) |
| ---------------- | ------------------------------ |
| Incremental sync | YES |
| Full refresh | Coming soon |
Follow these steps to configure and test your Databricks connector successfully.
---
# Source: https://docs.squared.ai/api-reference/models/delete-model.md
# Delete Model
## OpenAPI
````yaml DELETE /api/v1/models/{id}
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/models/{id}:
delete:
tags:
- Models
summary: Deletes a model
parameters:
- name: id
in: path
required: true
schema:
type: integer
responses:
'204':
description: Model deleted
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
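A sketch of a request to this endpoint (the model ID is illustrative):

```bash
curl -X DELETE "https://api.squared.ai/api/v1/models/123" \
  -H "Authorization: Bearer $API_TOKEN"
```

A `204 No Content` response indicates the model was deleted.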
---
# Source: https://docs.squared.ai/api-reference/connectors/delete_connector.md
# Delete Connector
## OpenAPI
````yaml DELETE /api/v1/connectors/{id}
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/connectors/{id}:
delete:
tags:
- Connectors
summary: Deletes a specific connector by ID
parameters:
- name: id
in: path
required: true
schema:
type: string
description: Unique ID of the connector
responses:
'204':
description: No content, indicating successful deletion
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
---
# Source: https://docs.squared.ai/api-reference/syncs/delete_sync.md
# Delete Sync
## OpenAPI
````yaml DELETE /api/v1/syncs/{id}
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/syncs/{id}:
delete:
tags:
- Syncs
summary: Delete a specific sync operation
operationId: deleteSync
parameters:
- name: id
in: path
required: true
schema:
type: string
description: The ID of the sync operation to delete
responses:
'204':
description: No content, indicating the sync operation was successfully deleted
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
---
# Source: https://docs.squared.ai/faqs/deployment-and-security.md
# Deployment & Security
---
# Source: https://docs.squared.ai/api-reference/connectors/discover.md
# Connector Catalog
## OpenAPI
````yaml GET /api/v1/connectors/{id}/discover
openapi: 3.0.1
info:
title: AI Squared API
version: 1.0.0
servers:
- url: https://api.squared.ai
security: []
paths:
/api/v1/connectors/{id}/discover:
get:
tags:
- Connectors
summary: Discovers catalog information for a specified connector
parameters:
- name: id
in: path
required: true
schema:
type: string
description: Unique ID of the connector
- name: refresh
in: query
required: false
schema:
type: boolean
description: Set to true to force refresh the catalog
responses:
'200':
description: Catalog information for the connector
content:
application/json:
schema:
type: object
properties:
data:
type: object
properties:
id:
type: string
type:
type: string
attributes:
type: object
properties:
connector_id:
type: integer
workspace_id:
type: integer
catalog:
type: object
properties:
streams:
type: array
description: >-
Array of stream objects, varying based on
connector ID.
items:
type: object
properties:
name:
type: string
action:
type: string
json_schema:
type: object
additionalProperties: true
url:
type: string
request_method:
type: string
catalog_hash:
type: string
additionalProperties: false
security:
- bearerAuth: []
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
````
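A sketch of a request to this endpoint (the connector ID is illustrative; add `refresh=true` to force a catalog refresh):

```bash
curl "https://api.squared.ai/api/v1/connectors/6/discover?refresh=true" \
  -H "Authorization: Bearer $API_TOKEN"
```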
---
# Source: https://docs.squared.ai/deployment-and-security/setup/docker-compose-dev.md
# Docker
1.2 Unless your organization has created a Private CA (Certificate Authority), we recommend requesting a public certificate.
1.3 Request a single ACM certificate that can verify all three of your chosen subdomains for this deployment. DNS validation is recommended for automatic rotation of your certificate but this method requires access to your domain's DNS record set.
1.4 Once you have added your selected sub-domains, scroll down and click **Request**.
5. Once your request has been made, you will be taken to a page that will describe your certificate request and its current status. Scroll down a bit and you will see a section labeled **Domains** with 3 subdomains and 1 CNAME validation record for each. These records need to be added to your DNS record set. Please refer to your organization's internal documentation or the documentation of your DNS service for further instruction on how to add DNS records to your domain's record set.
**Note:** For automatic certificate rotation, you need to leave these records
in your record set. If they are removed, automatic rotation will fail.
6. Once your ACM certificate has been issued, note the ARN of your certificate and proceed.
**2. Create and Configure Application Load Balancer and Target Groups**
1. In the AWS Management Console, navigate to the EC2 Dashboard and select **Load Balancers**.
{" "}
2. On the next screen select **Create** under **Application Load Balancer**.
{" "}
3. Under **Basic configuration** name your load balancer. If you are deploying this application within a private network, select **Internal**. Otherwise, select **Internet-facing**. Consult with your internal Networking team if you are unsure as this setting can not be changed post-deployment and you will need to create an entirely new load balancer to correct this.
{" "}
4. Under **Network mapping**, select a VPC and write it down somewhere for later use. Also, select 2 subnets (2 are **required** for an Application Load Balancer) and write them down too for later use.
5. Under **Security groups**, select the link to **create a new security group** and a new tab will open.
6. Under **Basic details**, name your security group and provide a description. Be sure to pick the same VPC that you selected for your load balancer configuration.
7. Under **Inbound rules**, create rules for HTTP and HTTPS and set the source for both rules to **Anywhere**. This will expose inbound ports 80 and 443 on the load balancer. Leave the default **Outbound rules** allowing for all outbound traffic for simplicity. Scroll down and select **Create security group**.
8. Once the security group has been created, close the security group tab and return to the load balancer tab. On the load balancer tab, in the **Security groups** section, hit the refresh icon and select your newly created security group. If the VPC's **default security group** gets appended automatically, be sure to remove it before proceeding.
9. Under **Listeners and routing** in the card for **Listener HTTP:80**, select **Create target group**. A new tab will open.
10. Under **Basic configuration**, select **Instances**.
11. Scroll down and name your target group. This first one will be for the Platform's web app so you should name it accordingly. Leave the protocol set to HTTP **but** change the port value to 8000. Also, make sure that the pre-selected VPC matches the VPC that you selected for the load balancer. Scroll down and click **Next**.
12. Leave all defaults on the next screen, scroll down and select **Create target group**. Repeat this process 2 more times, once for the **Platform API** on **port 3000** and again for **Temporal UI** on **port 8080**. You should now have 3 target groups.
13. Navigate back to the load balancer configuration screen and hit the refresh button in the card for **Listener HTTP:80**. Now, in the target group dropdown, you should see your 3 new target groups. For now, select any one of them. There will be some further configuration needed after the creation of the load balancer.
14. Now, click **Add listener**.
15. Change the protocol to HTTPS and in the target group dropdown, again, select any one of the target groups that you previously created.
16. Scroll down to the **Secure listener settings**. Under **Default SSL/TLS server certificate**, select **From ACM** and in the **Select a certificate** dropdown, select the certificate that you created in Step 1. In the dropdown, your certificate will only show the first subdomain that you listed when you created the certificate request. This is expected behavior.
**Note:** If you do not see your certificate in the dropdown list, the most likely issues are that the certificate has not yet been issued (its DNS validation is still pending) or that it was requested in a different region from your load balancer.
17. Scroll down to the bottom of the page and click **Create load balancer**. Load balancers take a while to create, approximately 10 minutes or more. However, while the load balancer is creating, copy the DNS name of the load balancer and create CNAME records in your DNS record set, pointing all 3 of your chosen subdomains to the DNS name of the load balancer. Until you complete this step, the deployment will not work as expected. You can proceed with the final steps of the deployment but you need to create those CNAME records.
18. At the bottom of the details page for your load balancer, you will see the section **Listeners and rules**. Click on the listener labeled **HTTP:80**.
19. Check the box next to the **Default** rule and click the **Actions** dropdown.
20. Scroll down to **Routing actions** and select **Redirect to URL**. Leave **URI parts** selected. In the **Protocol** dropdown, select **HTTPS** and set the port value to **443**. This configuration step will automatically redirect all insecure requests to the load balancer on port 80 (HTTP) to port 443 (HTTPS). Scroll to the bottom and click **Save**.
21. Return to the load balancer's configuration page and scroll back down to the **Listeners and rules** section. This time, click the listener labeled **HTTPS:443**.
22. Click **Add rule**.
23. On the next page, you can optionally add a name to this new rule. Click **Next**.
24. On the next page, click **Add condition**. In the **Add condition** pop-up, select **Host header** from the dropdown. For the host header, put the subdomain that you selected for the Platform web app and click **Confirm** and then click **Next**.
25. On the next page, under **Actions**, leave the **Routing actions** set to **Forward to target groups**. From the **Target group** dropdown, select the target group that you created for the web app. Click **Next**.
26. On the next page, you can set the **Priority** to 1 and click **Next**.
27. On the next page, click **Create**.
28. Repeat steps 24 - 27 for the **API** (priority 2) and **Temporal UI** (priority 3).
29. Optionally, you can also edit the default rule so that it **Returns a fixed response**. The default **Response code** of 503 is fine.
**3. Launch EC2 Instance**
1. Navigate to the EC2 Dashboard and click **Launch Instance**.
2. Name your instance and select **Ubuntu 22.04 or later** with **64-bit** architecture.
3. For instance type, we recommend **t3.large**. You can find EC2 on-demand pricing here: [EC2 Instance On-Demand Pricing](https://aws.amazon.com/ec2/pricing/on-demand). Also, create a **key pair** or select a pre-existing one as you will need it to SSH into the instance later.
4. Under **Network settings**, click **Edit**.
5. First, verify that the listed **VPC** is the same one that you selected for the load balancer. Also, verify that the pre-selected subnet is one of the two that you selected earlier for the load balancer as well. If either is incorrect, make the necessary changes. If you are using **private subnets** because your load balancer is **internal**, you do not need to auto-assign a public IP. However, if you chose **internet-facing**, you may need to associate a public IP address with your instance so you can SSH into it from your local machine.
6. Under **Firewall (security groups)**, we recommend that you name the security group, but this is optional. After naming the security group, click the **Add security group rule** button 3 times to create 3 additional rules.
7. In the first new rule (rule 2), set the port to **3000**. Click the **Source** input box and scroll down until you see the security group that you previously created for the load balancer. Doing this will firewall inbound traffic to port 3000 on the EC2 instance, only allowing inbound traffic from the load balancer that you created earlier. Do the same for rules 3 and 4, using ports 8000 and 8080 respectively.
8. Scroll to the bottom of the screen and click on **Advanced Details**.
9. In the **User data** box, paste the following to automate the installation of **Docker** and **docker-compose**.
```
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
sudo mkdir ais
cd ais
# install docker
sudo apt-get update
yes Y | sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
echo | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
yes Y | sudo apt-get install docker-ce
sudo systemctl status docker --no-pager && echo "Docker status checked"
# install docker-compose
sudo apt-get install -y jq
VERSION=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r .tag_name)
sudo curl -L "https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
sudo systemctl enable docker
```
10. In the right-hand panel, click **Launch instance**.
**4. Register EC2 Instance in Target Groups**
1. Navigate back to the EC2 Dashboard and in the left panel, scroll down to **Target groups**.
2. Click on the name of the first listed target group.
3. Under **Registered targets**, click **Register targets**.
4. Under **Available instances**, you should see the instance that you just created. Check the tick-box next to the instance and click **Include as pending below**. Once the instance shows in **Review targets**, click **Register pending targets**.
5. **Repeat steps 2 - 4 for the remaining 2 target groups.**
**5. Deploy AIS Platform**
1. SSH into the EC2 instance that you created earlier. For assistance, you can navigate to your EC2 instance in the EC2 dashboard and click the **Connect** button. In the **Connect to instance** screen, click on **SSH client** and follow the instructions on the screen.
2. Verify that **Docker** and **docker-compose** were successfully installed by running the following commands
```
sudo docker --version
sudo docker-compose --version
```
You should see version output for both commands.
3. Change directory to the **ais** directory and download the AIS Platform docker-compose file and the corresponding .env file.
```
cd /ais
sudo curl -LO https://multiwoven-deployments.s3.amazonaws.com/docker/docker-compose/docker-compose.yaml
sudo curl -LO https://multiwoven-deployments.s3.amazonaws.com/docker/docker-compose/.env.production && sudo mv /ais/.env.production /ais/.env
```
Verify the downloads
```
ls -a
```
You should see the `docker-compose.yaml` and `.env` files listed.
4. You will need to edit both files a little before deploying. First open the .env file.
```
sudo nano .env
```
**There are 3 required changes.**
To save and exit **nano**, press `Ctrl+O`, `Enter`, then `Ctrl+X`.
6. Deploy the AIS Platform. This step requires a private repository access key that you should have received from your AIS point of contact. If you do not have one, please reach out to AIS.
```
DOCKERHUB_USERNAME="multiwoven"
DOCKERHUB_PASSWORD="YOUR_PRIVATE_ACCESS_TOKEN"
sudo docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_PASSWORD
sudo docker-compose up -d
```
You can use the following command to ensure that none of the containers have exited
```
sudo docker ps -a
```
7. Return to your browser and navigate back to the EC2 dashboard. In the left panel, scroll back down to **Target groups**. Click through each target group and verify that each has the registered instance showing as **healthy**. This may take a minute or two after starting the containers.
8. Once all target groups are showing your instance as healthy, you can navigate to your browser and enter the subdomain that you selected for the AIS Platform web app to get started!
---
# Source: https://docs.squared.ai/deployment-and-security/setup/ecs.md
# AWS ECS
> Coming soon...
---
# Source: https://docs.squared.ai/deployment-and-security/setup/eks.md
# AWS EKS (Kubernetes)
> Coming soon...
---
# Source: https://docs.squared.ai/activation/data-apps/visualizations/embed.md
# Embed in Business Apps
> Learn how to embed Data Apps into tools like CRMs, support platforms, or internal web apps.
Once your Data App is configured and saved, you can embed it within internal or third-party business tools where your users work, such as CRMs, support platforms, or internal dashboards.
AI Squared supports multiple embedding options for flexibility across environments.
***
## Option 1: Embed via IFrame
1. Go to **Data Apps**.
2. Select the Data App you want to embed.
3. Click on **Embed Options** > **IFrame**.
4. Copy the generated IFrame embed code and paste it into the host application where you want the Data App to appear.