You can record chats in the backend or directly on the frontend if it's easier for you.
You can also nest multiple levels of agents together and report other run types, such as `tool` and `embed`.
Steps 2-6 below could repeat multiple times.
Here's what that would look like in terms of events:
#### 1. The user asks a question
Capture the user message using a `thread.chat` event and the `message` field.
Note that we must pass a `parentRunId` here, which is the unique identifier of the current thread. Thread runs are opened and closed automatically; you don't need to explicitly start or end them.
For a `chat` event, a different `parentRunId` means a different conversation thread with the user.
```json theme={null}
{
"type": "thread",
"event": "chat",
"runId": "chat-run-id",
"parentRunId": "thread-run-id",
"timestamp": "2024-07-16T00:00:00Z",
"message": { "role": "user", "content": "What's the weather in Boston?" }
}
```
#### 2. Invoke an Agent to handle the request.
While this is optional (we already have a parent `chat` run), it's good practice to open an `agent` run to encapsulate the agent's logic.
This also lets us see the agent's isolated execution in the Traces tab of the Lunary UI.
```json theme={null}
{
"type": "agent",
"event": "start",
"runId": "agent-run-id",
"parentRunId": "chat-run-id",
"name": "my-super-agent",
"timestamp": "2024-07-16T00:00:01Z",
"input": "What's the weather in Boston?"
}
```
#### 3. The agent makes an LLM call, which requests a tool execution.
```json theme={null}
{
"type": "llm",
"event": "start",
"runId": "llm-run-id",
"name": "gpt-4o",
"parentRunId": "agent-run-id",
"timestamp": "2024-07-16T00:00:02Z",
"params": {
"tools": [
{
"type": "function",
"function": {
"name": "get_weather_forecast",
"description": "Get the weather forecast for a specific location.",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city for which to get the weather forecast."
}
},
"required": ["city"]
}
}
}
]
},
"input": [{ "role": "user", "content": "What's the weather in Boston?" }]
}
```
Assume the LLM responds with:
```json theme={null}
{
"type": "llm",
"event": "end",
"runId": "llm-run-id",
"parentRunId": "agent-run-id",
"timestamp": "2024-07-16T00:00:05Z",
"output": {
"role": "assistant",
"content": "I can get the weather forecast for you. Please wait a moment.",
"tool_calls": [
{
"id": "call_id",
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\"city\": \"Boston\"}"
}
}
]
}
}
```
#### 4. We execute the tool.
```json theme={null}
{
"type": "tool",
"event": "start",
"runId": "tool-run-id",
"parentRunId": "agent-run-id",
"timestamp": "2024-07-16T00:00:06Z",
"name": "get_weather_forecast",
"input": {
"city": "Boston"
}
}
```
At this point we call our weather API, then report its output with an `end` event:
```json theme={null}
{
"type": "tool",
"event": "end",
"runId": "tool-run-id",
"parentRunId": "agent-run-id",
"timestamp": "2024-07-16T00:00:10Z",
"output": {
"temperature": 72,
"weather": "sunny"
}
}
```
#### 5. Another LLM call is made with the tool's output.
```json theme={null}
{
"type": "llm",
"event": "start",
"runId": "llm-run-id-2",
"parentRunId": "agent-run-id",
"timestamp": "2024-07-16T00:00:11Z",
"name": "gpt-4o",
"input": [
{ "role": "user", "content": "What's the weather in Boston?" },
{
"role": "assistant",
"content": "I can get the weather forecast for you. Please wait a moment.",
"tool_calls": [
{
"id": "call_id",
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\"city\": \"Boston\"}"
}
}
]
},
{
"role": "tool",
"content": "{\"temperature\": 72, \"weather\": \"sunny\"}"
}
]
}
```
Let's assume the LLM responds with:
```json theme={null}
{
"type": "llm",
"event": "end",
"runId": "llm-run-id-2",
"timestamp": "2024-07-16T00:00:15Z",
"parentRunId": "agent-run-id",
"output": {
"role": "assistant",
"content": "The weather in Boston is sunny with a temperature of 72 degrees."
}
}
```
#### 6. The final answer is returned to the user.
We can first mark the agent run as completed.
```json theme={null}
{
"type": "agent",
"event": "end",
"runId": "agent-run-id",
"timestamp": "2024-07-16T00:00:20Z",
"output": "The weather in Boston is sunny with a temperature of 72 degrees."
}
```
Then send the final answer back to the user. Note that the `runId` and `parentRunId` here are the same as in the previous `chat` event, since one ID is used per user->assistant interaction.
```json theme={null}
{
"type": "thread",
"event": "chat",
"runId": "chat-run-id",
"parentRunId": "thread-run-id",
"timestamp": "2024-07-16T00:00:25Z",
"message": {
"role": "assistant",
"content": "The weather in Boston is sunny with a temperature of 72 degrees."
}
}
```
As you can see, in the context of:
* chat messages, the message is passed with the `message` field
* LLM calls, `input` is the prompt and `output` is the LLM's response
* tools, `input` is the tool's arguments and `output` is its result
In the Lunary UI, this conversation appears under the Threads section, and clicking "View trace" opens the full agent trace.
### Bonus: Reporting User Feedback
If you have feedback from the user, you can attach it to the `chat` run using a `feedback` event and the `feedback` field.
```json theme={null}
{
"type": "chat",
"event": "feedback",
"runId": "chat-run-id",
"feedback": {
"comment": "Great response!",
"thumb": "up"
}
}
```
The feedback will now cascade down to all the child runs within the UI, for easy filtering of positive and negative runs.
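If you're sending these events to the API yourself rather than through an SDK, you can batch several of them into a single request. Below is a minimal sketch: it assumes the `POST /v1/runs/ingest` ingestion route with Bearer authentication using your project key, so double-check the endpoint and payload envelope against the API reference.
```javascript theme={null}
// Minimal sketch of reporting events from a Node.js backend.
// Assumptions: the `/v1/runs/ingest` route and the `{ events: [...] }`
// envelope — verify both against the API reference.
const LUNARY_API_URL = "https://api.lunary.ai";

async function sendEvents(events) {
  const res = await fetch(`${LUNARY_API_URL}/v1/runs/ingest`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LUNARY_PRIVATE_KEY}`,
    },
    body: JSON.stringify({ events }),
  });
  if (!res.ok) throw new Error(`Failed to ingest events: ${res.status}`);
}

// Example: report the user message and the agent start from steps 1 and 2.
await sendEvents([
  {
    type: "thread",
    event: "chat",
    runId: "chat-run-id",
    parentRunId: "thread-run-id",
    timestamp: new Date().toISOString(),
    message: { role: "user", content: "What's the weather in Boston?" },
  },
  {
    type: "agent",
    event: "start",
    runId: "agent-run-id",
    parentRunId: "chat-run-id",
    name: "my-super-agent",
    timestamp: new Date().toISOString(),
    input: "What's the weather in Boston?",
  },
]);
```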
---
# Source: https://docs.lunary.ai/docs/api/checklists/delete-a-checklist.md
# Delete a checklist
> Delete a specific checklist by its ID.
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/checklists/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/checklists/{id}:
delete:
tags:
- Checklists
summary: Delete a checklist
description: |
Delete a specific checklist by its ID.
parameters:
- in: path
name: id
required: true
schema:
type: string
format: uuid
description: The ID of the checklist to delete
responses:
'200':
description: Successful deletion
'404':
description: Checklist not found
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
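For illustration, calling this endpoint from Node.js could look like the following sketch; the checklist ID is a placeholder and the API key is read from an environment variable.
```javascript theme={null}
// Delete a checklist by its ID (placeholder UUID shown).
const checklistId = "00000000-0000-0000-0000-000000000000";

const res = await fetch(`https://api.lunary.ai/v1/checklists/${checklistId}`, {
  method: "DELETE",
  headers: { Authorization: `Bearer ${process.env.LUNARY_API_KEY}` },
});

if (res.status === 404) {
  console.error("Checklist not found");
} else if (res.ok) {
  console.log("Checklist deleted");
}
```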
---
# Source: https://docs.lunary.ai/docs/api/evals/delete-a-criterion.md
# Delete a criterion
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/evals/criteria/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/evals/criteria/{id}:
delete:
tags:
- Evals
- Criteria
summary: Delete a criterion
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Criterion deleted successfully
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/datasets/delete-a-dataset.md
# Delete a dataset
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/datasets/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/datasets/{id}:
delete:
tags:
- Datasets
summary: Delete a dataset
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Dataset deleted successfully
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/models/delete-a-model.md
# Delete a model
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/models/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/models/{id}:
delete:
tags:
- Models
summary: Delete a model
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Successful deletion
````
---
# Source: https://docs.lunary.ai/docs/api/playground-endpoints/delete-a-playground-endpoint.md
# Delete a playground endpoint
> Delete a playground endpoint
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/playground-endpoints/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/playground-endpoints/{id}:
delete:
tags:
- Playground Endpoints
summary: Delete a playground endpoint
description: Delete a playground endpoint
parameters:
- in: path
name: id
required: true
schema:
type: string
format: uuid
responses:
'204':
description: Endpoint deleted successfully
'404':
description: Endpoint not found
````
---
# Source: https://docs.lunary.ai/docs/api/datasets/delete-a-prompt-variation.md
# Delete a prompt variation
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/datasets/variations/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/datasets/variations/{id}:
delete:
tags:
- Datasets
- Prompts
- Variations
summary: Delete a prompt variation
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Prompt variation deleted successfully
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/datasets/delete-a-prompt.md
# Delete a prompt
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/datasets/prompts/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/datasets/prompts/{id}:
delete:
tags:
- Datasets
- Prompts
summary: Delete a prompt
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Prompt deleted successfully
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/runs/delete-a-run.md
# Delete a run
> Delete a specific run by its ID. This action is irreversible.
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/runs/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/runs/{id}:
delete:
tags:
- Runs
summary: Delete a run
description: Delete a specific run by its ID. This action is irreversible.
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'204':
description: Run successfully deleted
'403':
description: Forbidden - User doesn't have permission to delete runs
'404':
description: Run not found
````
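As a sketch (assuming the same Bearer authentication as the other endpoints), deleting a run and handling the documented error responses might look like this; the run ID is a placeholder.
```javascript theme={null}
// Permanently delete a run — this action is irreversible.
const runId = "00000000-0000-0000-0000-000000000000";

const res = await fetch(`https://api.lunary.ai/v1/runs/${runId}`, {
  method: "DELETE",
  headers: { Authorization: `Bearer ${process.env.LUNARY_API_KEY}` },
});

if (res.status === 204) {
  console.log("Run deleted");
} else if (res.status === 403) {
  console.error("You don't have permission to delete runs");
} else if (res.status === 404) {
  console.error("Run not found");
}
```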
---
# Source: https://docs.lunary.ai/docs/api/users/delete-a-specific-user.md
# Delete a specific user
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/external-users/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/external-users/{id}:
delete:
tags:
- Users
summary: Delete a specific user
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'204':
description: Successful deletion
````
---
# Source: https://docs.lunary.ai/docs/api/templates/delete-a-template.md
# Delete a template
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/templates/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/templates/{id}:
delete:
tags:
- Templates
summary: Delete a template
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'204':
description: Successful deletion
````
---
# Source: https://docs.lunary.ai/docs/api/views/delete-a-view.md
# Delete a view
> Deletes a specific view by its ID.
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/views/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/views/{id}:
delete:
tags:
- Views
summary: Delete a view
description: Deletes a specific view by its ID.
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Successful deletion
content:
application/json:
example:
message: View successfully deleted
````
---
# Source: https://docs.lunary.ai/docs/api/evals/delete-an-evaluation.md
# Delete an evaluation
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/evals/{id}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/evals/{id}:
delete:
tags:
- Evals
summary: Delete an evaluation
parameters:
- in: path
name: id
required: true
schema:
type: string
responses:
'200':
description: Evaluation deleted successfully
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/datasets-v2/delete-dataset-v2-item.md
# Delete dataset v2 item
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/datasets-v2/{datasetId}/items/{itemId}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/datasets-v2/{datasetId}/items/{itemId}:
delete:
tags:
- Datasets v2
summary: Delete dataset v2 item
parameters:
- in: path
name: datasetId
required: true
schema:
type: string
format: uuid
- in: path
name: itemId
required: true
schema:
type: string
format: uuid
responses:
'204':
description: Dataset item deleted
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/datasets-v2/delete-dataset-v2.md
# Delete dataset v2
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/datasets-v2/{datasetId}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/datasets-v2/{datasetId}:
delete:
tags:
- Datasets v2
summary: Delete dataset v2
parameters:
- in: path
name: datasetId
required: true
schema:
type: string
format: uuid
responses:
'204':
description: Dataset deleted
security:
- BearerAuth: []
components:
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
---
# Source: https://docs.lunary.ai/docs/api/datasets-v2/detach-an-evaluator-from-a-dataset.md
# Detach an evaluator from a dataset
## OpenAPI
````yaml https://api.lunary.ai/v1/openapi delete /v1/datasets-v2/{datasetId}/evaluators/{slot}
openapi: 3.0.0
info:
title: Lunary API
version: 1.0.0
servers:
- url: https://api.lunary.ai
security: []
tags: []
paths:
/v1/datasets-v2/{datasetId}/evaluators/{slot}:
delete:
tags:
- Datasets v2
summary: Detach an evaluator from a dataset
parameters:
- in: path
name: datasetId
required: true
schema:
type: string
format: uuid
- in: path
name: slot
required: true
schema:
type: integer
minimum: 1
maximum: 5
responses:
'200':
description: Updated dataset with evaluator column removed
content:
application/json:
schema:
$ref: '#/components/schemas/DatasetV2WithItems'
security:
- BearerAuth: []
components:
schemas:
DatasetV2WithItems:
allOf:
- $ref: '#/components/schemas/DatasetV2'
- type: object
properties:
items:
type: array
items:
$ref: '#/components/schemas/DatasetV2Item'
DatasetV2:
type: object
properties:
id:
type: string
format: uuid
projectId:
type: string
format: uuid
ownerId:
type: string
format: uuid
nullable: true
ownerName:
type: string
nullable: true
ownerEmail:
type: string
nullable: true
name:
type: string
description:
type: string
nullable: true
createdAt:
type: string
format: date-time
updatedAt:
type: string
format: date-time
itemCount:
type: integer
currentVersionId:
type: string
format: uuid
nullable: true
currentVersionNumber:
type: integer
currentVersionCreatedAt:
type: string
format: date-time
nullable: true
currentVersionCreatedBy:
type: string
format: uuid
nullable: true
currentVersionRestoredFromVersionId:
type: string
format: uuid
nullable: true
evaluatorSlot1Id:
type: string
format: uuid
nullable: true
evaluatorSlot2Id:
type: string
format: uuid
nullable: true
evaluatorSlot3Id:
type: string
format: uuid
nullable: true
evaluatorSlot4Id:
type: string
format: uuid
nullable: true
evaluatorSlot5Id:
type: string
format: uuid
nullable: true
DatasetV2Item:
type: object
properties:
id:
type: string
format: uuid
datasetId:
type: string
format: uuid
input:
type: string
groundTruth:
type: string
nullable: true
output:
type: string
nullable: true
evaluatorResult1:
type: object
nullable: true
evaluatorResult2:
type: object
nullable: true
evaluatorResult3:
type: object
nullable: true
evaluatorResult4:
type: object
nullable: true
evaluatorResult5:
type: object
nullable: true
createdAt:
type: string
format: date-time
updatedAt:
type: string
format: date-time
securitySchemes:
BearerAuth:
type: http
scheme: bearer
````
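As a sketch, detaching whatever evaluator occupies slot 2 of a dataset could look like this; the dataset ID is a placeholder, and the response follows the `DatasetV2WithItems` schema above.
```javascript theme={null}
// Detach the evaluator attached to slot 2 (slots range from 1 to 5).
const datasetId = "00000000-0000-0000-0000-000000000000";
const slot = 2;

const res = await fetch(
  `https://api.lunary.ai/v1/datasets-v2/${datasetId}/evaluators/${slot}`,
  {
    method: "DELETE",
    headers: { Authorization: `Bearer ${process.env.LUNARY_API_KEY}` },
  }
);

// On success the API returns the updated dataset, with the corresponding
// evaluator slot cleared.
const dataset = await res.json();
console.log(dataset.evaluatorSlot2Id); // expected: null
```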
---
# Source: https://docs.lunary.ai/docs/more/self-hosting/docker-compose.md
# Docker Compose
Lunary is designed to be simple to self-host using Docker Compose, which makes managing all components easier.
The following metrics are currently automatically captured:
| Metric | Description |
| -------------- | ---------------------------------------------------- |
| 💰 **Costs** | Costs incurred by your LLM models |
| 📊 **Usage** | Number of LLM calls made & tokens used |
| ⏱️ **Latency** | Average latency of LLM calls and agents |
| ❗ **Errors** | Number of errors encountered by LLM calls and agents |
| 👥 **Users** | Usage over time of your top users |
## Logs
Lunary allows you to log and inspect your LLM requests and responses.
Logging is automatic as soon as you integrate our SDK.
## Tracing
Tracing helps you debug more complex AI agents and troubleshoot issues.
The easiest way to get started with traces is to use our utility wrappers to automatically track your agents and tools.
### Wrapping Agents
By wrapping an agent, its inputs, outputs, and errors are automatically tracked.
Any query run inside the agent will be tied to it.
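Here's a rough sketch of what that looks like, assuming the JavaScript SDK's `wrapAgent` and `wrapTool` helpers (check the SDK reference for the exact helper names in your language):
```javascript theme={null}
import lunary from "lunary";

// Wrapping a tool: its input, output, and any error are reported as a
// `tool` run nested under the calling agent.
const getWeather = lunary.wrapTool(async function getWeather(city) {
  return { temperature: 72, weather: "sunny" };
});

// Wrapping an agent: everything executed inside it (LLM calls, tools,
// nested agents) is tied to this agent's trace.
const weatherAgent = lunary.wrapAgent(async function weatherAgent(question) {
  const forecast = await getWeather("Boston");
  return `It's ${forecast.weather} and ${forecast.temperature}°F in Boston.`;
});

await weatherAgent("What's the weather in Boston?");
```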
## Variables and Dynamic Content
The Playground supports dynamic variables in your prompts:
1. Define variables using double curly braces: `{{variable_name}}`
2. Enter test values in the Variables section
3. See how different variable values affect the output
## Saving and Collaboration
The Playground supports team collaboration with built-in versioning and role-based access control:
### Creating Draft Versions
1. Click "Save as Draft" to save your experiments without affecting production
2. Add version notes to document your changes and findings
3. Share the draft with team members for review and feedback
### Collaboration Features
* **Draft Sharing**: Team members can view and test your draft prompts
* **Notepad**: Leave feedback on specific prompt versions via the notepad
* **Role-Based Access**:
* Developers and prompt engineers can create and edit drafts
* Only authorized users (with deployment permissions) can promote drafts to production
* Viewers can test prompts but cannot modify them
## Testing with Custom Endpoints
One of the most powerful features of the Prompt Playground is the ability to test prompts against your own custom API endpoints. This is particularly useful for:
* **RAG (Retrieval-Augmented Generation) systems**
* **Custom AI applications** with proprietary logic
* **API wrappers** that combine multiple AI services
* **Complex systems** that include more components than just an LLM
### Setting Up Custom Endpoints
To configure a custom endpoint:
1. Toggle the **Run Mode** from "Model Provider" to "Custom Endpoint"
2. Click "Configure Endpoints" to set up your API endpoints
### Endpoint Configuration
When creating an endpoint, you'll need to provide:
* **Name**: A descriptive name for your endpoint
* **URL**: The full URL of your API endpoint
* **Authentication**: Choose from:
* Bearer Token (for OAuth/JWT)
* API Key (with custom header name)
* Basic Authentication
* No authentication
* **Custom Headers**: Additional headers to include in requests
* **Default Payload**: Base payload that will be merged with prompt data
### Request Format
When you run a prompt against a custom endpoint, Lunary sends an HTTP POST request with the following JSON payload:
```json theme={null}
{
"messages": [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "What is the weather like?"}
],
"model_params": {
"temperature": 0.7,
"max_tokens": 1000,
"model": "gpt-4"
},
"variables": {
"location": "San Francisco",
"user_id": "12345"
},
// custom payload data will be merged here
"custom_data": {
"example_key": "example_value"
}
}
```
Your endpoint should process this request and return a response. Lunary supports various response formats:
* Simple text responses
* OpenAI-compatible message arrays
* Custom JSON structures
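For instance, a bare-bones endpoint that returns a simple text response, using the same `{ content: ... }` shape as the examples below, might look like this sketch:
```javascript theme={null}
import express from "express";

const app = express();
app.use(express.json());

// A minimal custom endpoint: read the documented payload and answer with
// a simple `{ content: ... }` response.
app.post("/api/simple-chat", (req, res) => {
  const { messages } = req.body;
  const lastMessage = messages[messages.length - 1];
  res.json({ content: `You asked: "${lastMessage.content}"` });
});

app.listen(3000);
```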
### Use Case Examples
#### RAG System Integration
Test how your prompts work with your retrieval-augmented generation system:
```javascript theme={null}
// Example RAG endpoint that enriches prompts with context
app.post('/api/rag-chat', async (req, res) => {
const { content, variables } = req.body;
// Extract the user's query
const userQuery = content[content.length - 1].content;
// Search your knowledge base
const relevantDocs = await vectorDB.search(userQuery, {
filter: { user_id: variables.user_id },
limit: 5
});
// Augment the prompt with retrieved context
const augmentedContent = [
...content.slice(0, -1),
{
role: "system",
content: `Relevant context:\n${relevantDocs.map(d => d.text).join('\n\n')}`
},
content[content.length - 1]
];
// Generate response with your LLM
const response = await llm.generate({
...req.body,
content: augmentedContent
});
res.json({ content: response.text });
});
```
#### Custom Agent Testing
Test prompts against AI agents with tool access or custom logic:
```python theme={null}
# Example agent endpoint with tool usage
@app.post("/api/agent")
async def agent_endpoint(request: dict):
prompt = request["content"]
variables = request["variables"]
# Parse intent and determine required tools
intent = parse_intent(prompt[-1]["content"])
if intent.requires_search:
search_results = await web_search(intent.query)
context = format_search_results(search_results)
prompt.append({"role": "system", "content": f"Search results: {context}"})
if intent.requires_calculation:
calc_result = await calculator(intent.expression)
prompt.append({"role": "system", "content": f"Calculation: {calc_result}"})
# Generate final response
response = await generate_response(prompt, variables)
return {"content": response, "tools_used": intent.tools}
```
---
# Source: https://docs.lunary.ai/docs/features/prompts.md
# Prompt Templates
Prompt templates are a way to store, version and collaborate on prompts.
Developers use prompt templates to:
* clean up their source code
* make edits to prompts without re-deploying code
* collaborate with non-technical teammates
* A/B test prompts
## Creating a template
You can create a prompt template by clicking on the "Create prompt template" button in the Prompts section of the dashboard.
## Usage with OpenAI
You can use templates seamlessly with OpenAI's API through our SDKs.
This ensures the prompt is tracked automatically.
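For example, with the JavaScript SDK this might look like the sketch below. It assumes the SDK's `renderTemplate` helper and `monitorOpenAI` wrapper, plus a hypothetical template slug `weather-assistant`; check the SDK reference for the exact function names and import paths.
```javascript theme={null}
import OpenAI from "openai";
import lunary from "lunary";
import { monitorOpenAI } from "lunary/openai";

// Wrap the OpenAI client so every call is tracked automatically.
const openai = monitorOpenAI(new OpenAI());

// Fetch the latest deployed version of the template, fill in its variables,
// and pass the rendered messages and model parameters straight to OpenAI.
const template = await lunary.renderTemplate("weather-assistant", {
  city: "Boston",
});

const result = await openai.chat.completions.create(template);
console.log(result.choices[0].message.content);
```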
The bare minimum to enable user tracking is to report a `userId`; however, you can report any properties you'd like, such as an email or name, using a `userProps` object.
## Tracking users with the backend SDK