# PromptLayer

> ## Documentation Index

---

# Source: https://docs.promptlayer.com/why-promptlayer/ab-releases.md

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.promptlayer.com/llms.txt
> Use this file to discover all available pages before exploring further.

# A/B Testing

A/B Releases is a powerful feature that allows you to test different versions of your prompts in production, safely roll out updates, and segment users. 🚀

For technical details and usage instructions, check out the [Dynamic Release Labels](/features/prompt-registry/dynamic-release-labels) page.

## Overview

A/B Releases work by dynamically overloading your release labels. You can split traffic between different prompt versions based on percentages or user segments. This lets you:

* Test new prompt versions with a subset of users before a full rollout
* Gradually release updates to minimize risk
* Segment users to receive specific versions (e.g., beta users, internal employees)

## Use Cases

### Testing Prompt Updates

Have a stable prompt version that's working well but want to test an update? Create an A/B Release! You can direct a small percentage of traffic (e.g., 20%) to the new version. If there are no issues after a week, you can slowly increase the percentage. This minimizes the risk of rolling out an update to all users at once.

*(Diagram: Dynamic Release Labels)*

### Gradual Rollouts

Ready to roll out a new prompt version but want to minimize risk? Use A/B Releases to gradually ramp up traffic to the new version. Start with a 5% rollout, then increase to 10%, 25%, 50%, and eventually 100% as you gain confidence in the new version. This staged approach ensures a smooth transition for your users.

### User Segmentation

Want to give certain users access to a dev version of your prompt? A/B Releases make this easy. Define user segments based on metadata (e.g., user ID, company) and specify which prompt version each segment should receive. This lets you test new versions with beta users or give internal employees access to dev versions.

For example, you could create a segment for internal user IDs and configure their traffic split to be 50% dev version and 50% stable version.

Alternatively, you could segment based on the user's subscription level, giving free users access to experimental prompt versions first before rolling them out to paying customers. This allows you to gather feedback and iterate on new features without affecting your premium user base.

***

A/B Releases give you the power to experiment, safely roll out updates, and deliver targeted experiences. Try it out and take control of your prompt releases! 🎉

---

# Source: https://docs.promptlayer.com/reference/add-report-columns.md

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.promptlayer.com/llms.txt
> Use this file to discover all available pages before exploring further.

# Add Column to Evaluation Pipeline

> Adds a new evaluation step (column) to an existing evaluation pipeline. Columns execute sequentially from left to right and can reference data from previous columns.
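For example, a single call to this endpoint might look like the following sketch. It mirrors the example request body from the OpenAPI spec at the end of this page; the API key, `report_id` (456), and `qa_template` prompt are placeholders you would replace with your own values.

```python theme={null}
import requests

# Minimal sketch: add one PROMPT_TEMPLATE column to an existing pipeline.
# The report_id and the qa_template configuration are placeholder values
# taken from the example in the OpenAPI spec below.
response = requests.post(
    "https://api.promptlayer.com/report-columns",
    headers={"X-API-KEY": "your_key"},
    json={
        "report_id": 456,
        "column_type": "PROMPT_TEMPLATE",
        "name": "Generate Answer",
        "configuration": {
            "template": {"name": "qa_template", "version_number": None},
            "prompt_template_variable_mappings": {"question": "input_question"},
            "engine": {
                "provider": "openai",
                "model": "gpt-4",
                "parameters": {"temperature": 0.7},
            },
        },
    },
)
print(response.status_code, response.json())
```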
**Column Types Available:**

- **Primary**: PROMPT_TEMPLATE, ENDPOINT, MCP, HUMAN, CODE_EXECUTION, CODING_AGENT, CONVERSATION_SIMULATOR, WORKFLOW
- **Evaluation**: LLM_ASSERTION, AI_DATA_EXTRACTION, COMPARE, CONTAINS, REGEX, REGEX_EXTRACTION, COSINE_SIMILARITY, ABSOLUTE_NUMERIC_DISTANCE
- **Helper**: JSON_PATH, XML_PATH, PARSE_VALUE, APPLY_DIFF, VARIABLE, ASSERT_VALID, COALESCE, COMBINE_COLUMNS, COUNT, MATH_OPERATOR, MIN_MAX

See the full documentation for detailed configuration requirements for each column type.

This endpoint adds evaluation steps (columns) to an existing evaluation pipeline. Columns execute sequentially from left to right, with each column able to reference outputs from previous columns.

## Important Notes

* **Single Column Per Request**: This endpoint only allows adding one column at a time. To add multiple columns, make separate API calls for each.
* **Column Order Matters**: Columns execute left to right. A column can only reference columns to its left.
* **Unique Names Required**: Each column name must be unique within the pipeline.
* **Dataset Columns Protected**: You cannot overwrite columns that come from the dataset.

## Scoring

By default, only the last column in a pipeline is used for score calculation. To include multiple columns in the final score:

* Set `is_part_of_score: true` on each column you want to include in the score
* Columns must produce boolean or numeric values to be scored
* When multiple columns are marked for scoring, the final score is the average of all included columns

## Column Types and Configuration

For the complete list of supported column types and their detailed configuration options, see the [Node & Column Types](/features/evaluations/column-types) documentation.

## Batch Adding Columns

Since columns must be added one at a time, here's a pattern for adding multiple columns:

```python theme={null}
import requests

columns = [
    {
        "column_type": "PROMPT_TEMPLATE",
        "name": "Generate",
        "configuration": {...}
    },
    {
        "column_type": "LLM_ASSERTION",
        "name": "Validate",
        "configuration": {...}
    }
]

for column in columns:
    response = requests.post(
        "https://api.promptlayer.com/report-columns",
        headers={"X-API-KEY": "your_key"},
        json={
            "report_id": 456,
            **column
        }
    )
    if response.status_code != 201:
        print(f"Failed: {column['name']}")
        break
```

## Column Reference Syntax

When configuring columns that reference other columns:

* **Dataset columns**: Use exact column name from dataset (e.g., `"question"`)
* **Previous columns**: Use the name you assigned (e.g., `"AI Response"`)
* **Variable columns**: Reference by their name

## Error Handling

The endpoint validates:

1. Column type is valid
2. Column name is unique within the pipeline
3. Configuration matches the column type schema
4. Referenced columns exist (for dependent columns)
5. User has permission to modify the pipeline

Common errors:

* `400`: Invalid configuration or duplicate column name
* `403`: Cannot overwrite dataset columns or lacking permissions
* `404`: Report not found or not accessible

## OpenAPI

````yaml POST /report-columns
openapi: 3.1.0
info:
  title: FastAPI
  version: 0.1.0
servers: []
security: []
paths:
  /report-columns:
    post:
      tags:
        - reports
      summary: Add Column to Evaluation Pipeline
      description: >-
        Adds a new evaluation step (column) to an existing evaluation
        pipeline. Columns execute sequentially from left to right and can
        reference data from previous columns.

        **Column Types Available:**
        - **Primary**: PROMPT_TEMPLATE, ENDPOINT, MCP, HUMAN, CODE_EXECUTION, CODING_AGENT, CONVERSATION_SIMULATOR, WORKFLOW
        - **Evaluation**: LLM_ASSERTION, AI_DATA_EXTRACTION, COMPARE, CONTAINS, REGEX, REGEX_EXTRACTION, COSINE_SIMILARITY, ABSOLUTE_NUMERIC_DISTANCE
        - **Helper**: JSON_PATH, XML_PATH, PARSE_VALUE, APPLY_DIFF, VARIABLE, ASSERT_VALID, COALESCE, COMBINE_COLUMNS, COUNT, MATH_OPERATOR, MIN_MAX

        See the full documentation for detailed configuration requirements
        for each column type.
      operationId: addReportColumn
      parameters:
        - name: X-API-KEY
          in: header
          required: true
          schema:
            type: string
          description: API key to authorize the operation. Can also use JWT authentication.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                report_id:
                  type: integer
                  description: The ID of the evaluation pipeline to add this column to.
                  minimum: 1
                column_type:
                  type: string
                  description: >-
                    The type of evaluation or transformation this column
                    performs. Must be one of the supported column types.
                  enum:
                    - ABSOLUTE_NUMERIC_DISTANCE
                    - AI_DATA_EXTRACTION
                    - ASSERT_VALID
                    - CONVERSATION_SIMULATOR
                    - COALESCE
                    - CODE_EXECUTION
                    - COMBINE_COLUMNS
                    - COMPARE
                    - CONTAINS
                    - COSINE_SIMILARITY
                    - COUNT
                    - ENDPOINT
                    - MCP
                    - HUMAN
                    - JSON_PATH
                    - LLM_ASSERTION
                    - MATH_OPERATOR
                    - MIN_MAX
                    - PARSE_VALUE
                    - APPLY_DIFF
                    - PROMPT_TEMPLATE
                    - REGEX
                    - REGEX_EXTRACTION
                    - VARIABLE
                    - XML_PATH
                    - WORKFLOW
                    - CODING_AGENT
                name:
                  type: string
                  description: >-
                    Display name for this column. Must be unique within the
                    pipeline. This name is used to reference the column in
                    subsequent steps.
                  minLength: 1
                  maxLength: 255
                configuration:
                  type: object
                  description: >-
                    Column-specific configuration. The schema varies based on
                    column_type. See documentation for each type's
                    requirements.
                  additionalProperties: true
                position:
                  type: integer
                  description: >-
                    Optional position for the column. If not specified, the
                    column is added at the end. Cannot overwrite dataset
                    columns.
                  minimum: 0
                  nullable: true
              required:
                - report_id
                - column_type
                - name
                - configuration
            example:
              report_id: 456
              column_type: PROMPT_TEMPLATE
              name: Generate Answer
              configuration:
                template:
                  name: qa_template
                  version_number: null
                prompt_template_variable_mappings:
                  question: input_question
                engine:
                  provider: openai
                  model: gpt-4
                  parameters:
                    temperature: 0.7
      responses:
        '201':
          description: Column added successfully
          content:
            application/json:
              schema:
                type: object
                properties:
                  success:
                    type: boolean
                    example: true
                  report_column:
                    type: object
                    description: >-
                      Details of the created column including its ID and
                      configuration
        '400':
          description: >-
            Bad Request - Invalid column type, configuration validation
            failed, or column name already exists
          content:
            application/json:
              schema:
                type: object
                properties:
                  success:
                    type: boolean
                    example: false
                  message:
                    type: string
                    example: Report already has a column with that name
        '401':
          description: Unauthorized - Invalid or missing authentication
        '403':
          description: Forbidden - Cannot overwrite dataset columns or missing permissions
          content:
            application/json:
              schema:
                type: object
                properties:
                  success:
                    type: boolean
                    example: false
                  message:
                    type: string
                    example: You can not overwrite dataset columns
        '404':
          description: Not Found - Report not found or not accessible
          content:
            application/json:
              schema:
                type: object
                properties:
                  success:
                    type: boolean
                    example: false
                  message:
                    type: string
                    example: Report not found
````

---

# Source: https://docs.promptlayer.com/why-promptlayer/advanced-search.md

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.promptlayer.com/llms.txt
> Use this file to discover all available pages before exploring further.

# Advanced Search

PromptLayer's advanced search capabilities allow you to find exactly what you want using tags, search queries, metadata, favorites, and score filtering.

## Using the Search Bar

To start your search, enter the keywords you want to find into the search bar and click the "Search" button. You can use freeform search to find any text within PromptLayer.

*(Screenshot: Advanced Search)*

## Advanced Search Filters

#### Metadata Search

Use the metadata search filter to search for specific metadata within PromptLayer. You can search for user IDs, session IDs, tokens, error messages, status codes, and other metadata by entering the metadata field name and value into the search bar.

PromptLayer allows you to attach multiple key-value pairs as metadata to a request. In the dashboard, you can look up requests and analyze analytics using metadata. The method for adding metadata to a request can be found in our documentation [here](/features/prompt-history/tracking-metadata-and-request-ids.mdx).

```python Python theme={null}
promptlayer_client.track.metadata(
  request_id=pl_request_id,
  metadata={
     "user_id": "1abf2345f",
     "session_id": "2cef2345f",
     "error_message": "None"
  }
)
```

```js JavaScript theme={null}
promptLayerClient.track.metadata({
  request_id: pl_request_id,
  metadata: {
     "user_id": "1abf2345f",
     "session_id": "2cef2345f",
     "error_message": "None"
  }
})
```

To use the metadata search filter, click "Key" in the advanced search filter, select the desired metadata key (in this case, `user_id`), select the relevant value under "Value", and click "Add filter".

#### Score Filtering

Use the score filtering feature to search for prompts based on their scores. You can filter prompts by selecting the score range in the "Score" dropdown.
Score filtering is a powerful tool for analyzing the performance of your prompts. You can use it to identify high-performing prompts or to find prompts that may need improvement.

Below is an example of how you can score a request programmatically. It can also be done through the dashboard, as shown [here](/features/prompt-history/scoring-requests).

```python Python theme={null}
promptlayer_client.track.score(
  request_id=pl_request_id,
  score_name="summarization",  # optional score name
  score=100
)
```

```js JavaScript theme={null}
promptLayerClient.track.score({
  request_id: pl_request_id,
  score: 100
})
```

#### Tags Search

Use the tags search filter to search for specific tags within PromptLayer. Tags are used to group product features, prod/dev versions, and other categories. You can search for tags by selecting them in the "Tags" dropdown.

Tagging a request is easy. Read more about it [here](/features/prompt-history/organizing-with-tags).

```python Python Native theme={null}
openai.Completion.create(
  engine="text-ada-001",
  prompt="My name is",
  pl_tags=["mytag1", "mytag2"]
)
```

```js JavaScript theme={null}
openai.completions.create({
  model: "text-ada-001",
  prompt: "My name is",
  pl_tags: ["mytag1", "mytag2"]
})
```

```python Python LangChain theme={null}
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(pl_tags=["mytag1", "mytag2"])
resp = llm("tell me a joke")
```

#### Favorites

By selecting the "favorite" tag, you can narrow results to favorited requests. To favorite a request, click the star at the top right of the dashboard.

*(Screenshot: Favorites)*

---

# Source: https://docs.promptlayer.com/why-promptlayer/agents.md

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.promptlayer.com/llms.txt
> Use this file to discover all available pages before exploring further.

# Agents