# Humanloop
> The Humanloop API allows you to interact with Humanloop and model providers programmatically.
---
# Source: https://humanloop.com/docs/v4/api.md
# Source: https://humanloop.com/docs/api.md
# API
The Humanloop API allows you to interact with Humanloop and model providers programmatically.
You can do this through HTTP requests from any language or via our official Python or TypeScript SDK.
First you need to install and initialize the SDK. If you have already done this, skip to the next section.
Open up your terminal and follow these steps:
1. Install the Humanloop SDK:
```shell title="Python"
pip install humanloop
```
```shell title="TypeScript"
npm install humanloop
```
2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)).
```python
from humanloop import Humanloop
humanloop = Humanloop(api_key="YOUR_API_KEY")
# Check that the authentication was successful
print(humanloop.prompts.list())
```
```typescript
import { HumanloopClient } from "humanloop";
const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
// Check that the authentication was successful
console.log(await humanloop.prompts.list());
```
Guides and further details about key concepts can be found in [our docs](/docs/getting-started/overview).
---
# Source: https://humanloop.com/docs/sdk/decorators.md
# Decorators Overview
> Overview of the decorator system in the Humanloop SDK
## Introduction
Humanloop provides a set of decorators that help you instrument your AI features with minimal code changes. These decorators automatically create and manage Logs on the Humanloop platform, enabling monitoring, evaluation, and improvement of your AI applications.
| Decorator | Purpose | Creates | Documentation |
|-----------|---------|---------|---------------|
| `prompt` | Instrument LLM provider calls | Prompt Logs | [Learn more →](/docs/v5/sdk/decorators/prompt) |
| `tool` | Define function calling tools | Tool Logs | [Learn more →](/docs/v5/sdk/decorators/tool) |
| `flow` | Trace multi-step AI features | Flow Log with traces | [Learn more →](/docs/v5/sdk/decorators/flow) |
## Common Patterns
All decorators share these common characteristics:
- **Path-based organization**: Each decorator requires a `path` parameter that determines where the File and its Logs are stored in your Humanloop workspace.
- **Automatic versioning**: Changes to the decorated function or its parameters create new versions of the File.
- **Error handling**: Errors are caught and logged, making debugging easier.
- **Minimal code changes**: Decorate existing code and adopt the Humanloop SDK gradually.
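For illustration, here is a minimal sketch of the shared pattern, assuming a configured Humanloop client and OpenAI client (the client setup shown here is illustrative):
```python
from humanloop import Humanloop
from openai import OpenAI

hl_client = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")
openai = OpenAI(api_key="YOUR_OPENAI_API_KEY")

# Each decorator takes a `path` that determines where the File and its Logs
# are stored in your Humanloop workspace.
@hl_client.prompt(path="MyFeature/Summarize")
def summarize(text: str) -> str:
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    ).choices[0].message.content
```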
---
# Source: https://humanloop.com/docs/introduction/errors.md
# Errors
> This page provides a list of the error codes and messages you may encounter when using the Humanloop API.
### HTTP error codes
Our API will return one of the following HTTP error codes in the event of an issue:
- **400 Bad Request**: Your request was improperly formatted or presented.
- **401 Unauthorized**: Your API key is incorrect or missing, or your user does not have the rights to access the relevant resource.
- **404 Not Found**: The requested resource could not be located.
- **409 Conflict**: Modifying the resource would leave it in an illegal state.
- **422 Unprocessable Entity**: Your request was properly formatted but contained invalid instructions or did not match the fields required by the endpoint.
- **429 Too Many Requests**: You've exceeded the maximum allowed number of requests in a given time period.
- **500 Internal Server Error**: An unexpected issue occurred on the server.
- **503 Service Unavailable**: The service is temporarily overloaded and you should try again.
## Error details
Our `prompts/call` endpoint acts as a unified interface across all popular model providers. The error returned by this endpoint may be raised by the model provider's system. Details of the error are returned in the `detail` object of the response.
```json
{
"type": "unprocessable_entity_error",
"message": "This model's maximum context length is 4097 tokens. However, you requested 10000012 tokens (12 in the messages, 10000000 in the completion). Please reduce the length of the messages or completion.",
"code": 422,
"origin": "OpenAI"
}
```
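As a sketch of handling these errors over raw HTTP (the endpoint URL and `X-API-KEY` header below are assumptions based on the v5 REST API; adjust to your setup):
```python
import requests

response = requests.post(
    "https://api.humanloop.com/v5/prompts/call",
    headers={"X-API-KEY": "YOUR_API_KEY"},
    json={
        "path": "My Prompt",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
if not response.ok:
    # Provider errors are surfaced in the `detail` object described above
    detail = response.json().get("detail", {})
    print(detail.get("code"), detail.get("origin"), detail.get("message"))
```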
---
# Source: https://humanloop.com/docs/sdk/decorators/flow.md
# Flow Decorator
> Technical reference for the Flow decorator in the Humanloop SDK
## Overview
The Flow decorator creates and manages traces for your AI feature. When applied to a function, it:
- Creates a new trace on function invocation.
- Adds all Humanloop logging calls made inside the function to the trace.
- Completes the trace when the function exits.
On Humanloop, a trace is the collection of Logs associated with a Flow Log.
## Usage
The `flow` decorator will trace all downstream Humanloop logs, whether they are created by other decorators or SDK calls.
### Tracing Decorators
```python maxLines=50 wrapLines title="Python"
@hl_client.prompt(path="MyFeature/Call LLM")
def call_llm(messages: List[ChatMessage]):
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    ).choices[0].message.content

@hl_client.flow(path="MyFeature/Process")
def process_input(inputs: list[str]) -> list[str]:
    # Logs created by the Prompt decorator are added to the trace
    return [
        call_llm([{"role": "user", "content": text}])
        for text in inputs
    ]
```
```typescript maxLines=50 wrapLines title="TypeScript"
const callLLM = hlClient.prompt({
  path: "MyFeature/Call LLM",
  callable: async (messages: ChatMessage[]): Promise<string> => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages
    });
    return response.choices[0].message.content;
  }
});
const processInput = hlClient.flow({
  path: "MyFeature/Process",
  callable: async (inputs: string[]): Promise<string[]> => {
    // Logs created by the Prompt decorator are added to the trace
    return Promise.all(
      inputs.map((text) => callLLM([{ role: "user", content: text }]))
    );
  },
});
```
### Tracing SDK Calls
Logs created through the Humanloop SDK are added to the trace.
```python maxLines=50 title="Python" wrapLines
@hl_client.flow(path="MyFeature/Process")
def process_input(text: str) -> str:
    # Created Log is added to the trace
    llm_output = hl_client.prompts.call(
        path="MyFeature/Transform",
        messages=[{"role": "user", "content": text}]
    ).logs[0].output_message.content
    transformed_output = transform(llm_output)
    # Created Log is added to the trace
    hl_client.tools.log(
        path="MyFeature/Transform",
        tool={"function": TRANSFORM_JSON_SCHEMA},
        inputs={"text": text},
        output=transformed_output
    )
    return transformed_output
```
```typescript maxLines=50
const processInput = hlClient.flow({
  path: "MyFeature/Process",
  callable: async (text: string): Promise<string> => {
    // Created Log is added to the trace
    const llmOutput = (
      await hlClient.prompts.call({
        path: "MyFeature/Transform",
        messages: [{ role: "user", content: text }],
      })
    ).logs[0].outputMessage.content;
    const transformedOutput = transform(llmOutput);
    // Created Log is added to the trace
    await hlClient.tools.log({
      path: "MyFeature/Transform",
      tool: { function: TRANSFORM_JSON_SCHEMA },
      inputs: { text },
      output: transformedOutput,
    });
    return transformedOutput;
  },
});
```
## Behavior
The decorated function creates a Flow Log when called. All Logs created inside the decorated function are added to its trace.
In the Python SDK, the Flow Log's fields are populated as follows:
| Field | Type | Description |
| ---------------- | ----------- | -------------------------------------------------------------------- |
| `inputs` | object | Function arguments that aren't ChatMessage arrays |
| `messages` | array | ChatMessage arrays passed as arguments |
| `output_message` | ChatMessage | Return value if it's a ChatMessage-like object |
| `output` | string | Stringified return value if not a ChatMessage-like object |
| `error` | string | Error message if function throws or return value can't be serialized |
If the decorated function returns a ChatMessage object, the `output_message` field is populated. Otherwise, the `output` field is populated with the stringified return value.
In the TypeScript SDK, the Flow Log's fields are populated as follows:
| Field | Type | Description |
| --------------- | ----------- | -------------------------------------------------------------------- |
| `inputs` | object | Function arguments that aren't ChatMessage arrays |
| `messages` | array | ChatMessage arrays passed as arguments |
| `outputMessage` | ChatMessage | Return value if it's a ChatMessage-like object |
| `output` | string | Stringified return value if not a ChatMessage-like object |
| `error` | string | Error message if function throws or return value can't be serialized |
If the decorated function returns a ChatMessage object, the `outputMessage` field is populated. Otherwise, the `output` field is populated with the stringified return value.
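As an illustration of the mapping above, a minimal sketch (assuming `hl_client` is a configured Humanloop client, as in the usage examples):
```python
@hl_client.flow(path="MyFeature/Answer")
def answer(messages: list, user_id: str) -> str:
    # `messages` is a ChatMessage array -> recorded in the Flow Log's `messages`
    # `user_id` is not a ChatMessage array -> recorded in `inputs`
    return "some answer"  # plain string return -> recorded in `output`

answer([{"role": "user", "content": "Hi"}], user_id="u-123")
```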
## Definition
```python
@hl_client.flow(
    # Required: path on Humanloop workspace for the Flow
    path: str,
    # Optional: metadata for versioning the Flow
    attributes: dict[str, Any] = None
)
def function(*args, **kwargs): ...
```
The decorator will preserve the function's signature.
```typescript
hlClient.flow({
  // Required: path on Humanloop workspace for the Flow
  path: string,
  // Required: decorated function
  callable: I extends Record<string, unknown> & { messages: ChatMessage[] }
    ? (inputs: I) => O
    : () => O;
  // Optional: metadata for versioning the Flow
  attributes?: Record<string, unknown>;
}) => Promise<O>
```
The function returned by the decorator is async and preserves the signature of `callable`.
Callable's `inputs` must extend `Record<string, unknown>`. If a `messages` field is present in the `inputs`, it must have the `ChatMessage[]` type.
The decorated function will not wrap the return value in a second Promise if the `callable` is also asynchronous.
The decorator accepts the following parameters:
| Parameter | Type | Required | Description |
| ------------ | ------ | -------- | ---------------------------------------- |
| `path` | string | Yes | Path on Humanloop workspace for the Flow |
| `attributes` | object | No | Key-value object for versioning the Flow |
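For example, a minimal sketch of versioning a Flow through `attributes` (the attribute values are illustrative):
```python
@hl_client.flow(
    path="MyFeature/Process",
    attributes={"git_hash": "1234567890", "identifier": "rag-with-pinecone"},
)
def process_input(text: str) -> str:
    ...
```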
## SDK Interactions
- It's not possible to call `flows.log()` inside a decorated function. This will raise a [`HumanloopRuntimeError`](#error-handling).
- To create nested traces, call another flow-decorated function (see the sketch below).
- Passing a `trace_parent_id` (Python) or `traceParentId` (TypeScript) argument to an SDK logging call inside the decorated function is ignored and emits a warning; the Log is added to the trace of the decorated function.
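A minimal sketch of nesting traces by calling one flow-decorated function from another:
```python
@hl_client.flow(path="MyFeature/Subtask")
def subtask(text: str) -> str:
    # Logs created here are added to the subtask's trace
    return text.upper()

@hl_client.flow(path="MyFeature/Main")
def main_task(text: str) -> str:
    # Calling another flow-decorated function nests its trace
    # inside the trace of `main_task`
    return subtask(text)
```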
## Error Handling
- If user-written code (e.g. in code Evaluators) raises an exception, the relevant Log's `error` field is populated with the exception message and the decorated function returns `None` (Python) or `undefined` (TypeScript).
- `HumanloopRuntimeError` exceptions indicate incorrect decorator or SDK usage and are re-raised (Python) or re-thrown (TypeScript) instead of being logged under `error`.
## Related Documentation
An explanation of Flows and their role in the Humanloop platform can be found in our [Flows](/docs/v5/explanation/flows) documentation.
---
# Source: https://humanloop.com/docs/introduction/overview.md
# Overview
> Learn how to integrate Humanloop into your applications using our Python and TypeScript SDKs or REST API.
The Humanloop platform can be accessed through the [API](/docs/v5/api) or through our Python and TypeScript SDKs.
### Usage Examples
```shell title="Installation"
npm install humanloop
```
```typescript title="Example usage"
import { HumanloopClient } from "humanloop";
const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
// Check that the authentication was successful
console.log(await humanloop.prompts.list());
```
```shell title="Installation"
pip install humanloop
```
```python title="Example usage"
from humanloop import Humanloop
hl = Humanloop(api_key="YOUR_API_KEY")
# Check that the authentication was successful
print(hl.prompts.list())
```
---
# Source: https://humanloop.com/docs/sdk/decorators/prompt.md
# Prompt Decorator
> Technical reference for the Prompt decorator in the Humanloop SDK
## Overview
The Prompt decorator automatically instruments LLM provider calls and creates Prompt Logs on Humanloop. When applied to a function, it:
- Creates a new Log for each LLM provider call made within the decorated function.
- Versions the Prompt using hyperparameters of the provider call.
### Decorator Definition
```python
@hl_client.prompt(
    # Required: path on Humanloop workspace for the Prompt
    path: str
)
def function(*args, **kwargs): ...
```
The decorated function will have the same signature as the original function.
```typescript
hlClient.prompt({
  // Required: path on Humanloop workspace for the Prompt
  path: string,
  // Required: decorated function
  callable: I extends Record<string, unknown> & { messages?: ChatMessage[] }
    ? (args: I) => O
    : () => O;
}) => Promise<O>
```
The decorated function is always async and has the same signature as the `callable` argument.
Callable's `args` must extend `Record<string, unknown>`. If a `messages` field is present in the `args`, it must have type `ChatMessage[]`.
The decorated function will not wrap the return value in a second Promise if the `callable` is also asynchronous.
You must pass the providers you want to auto-instrument to the HumanloopClient constructor. Otherwise, the decorated function will work, but no Logs will be created.
```typescript {6-7}
import { HumanloopClient } from "humanloop";
import { OpenAI } from "openai";
const hlClient = new HumanloopClient({
  apiKey: process.env.HL_API_KEY,
  // Pass the provider module here
  instrumentProviders: { OpenAI },
});
// You can now use the prompt decorator
```
### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `path` | string | Yes | Path on Humanloop workspace for the Prompt |
### Usage
```python
@hl_client.prompt(path="MyFeature/Process")
def process_input(text: str) -> str:
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": text}]
    ).choices[0].message.content
```
```typescript
const processInput = hlClient.prompt({
  path: "MyFeature/Process",
  callable: async (text: string): Promise<string> => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: text }]
    });
    return response.choices[0].message.content;
  }
});
```
## Behavior
### Versioning
The hyperparameters of the LLM provider call are used to version the Prompt.
If the configuration changes, new Logs will be created under the new version of the same Prompt.
The following parameters are considered for versioning the Prompt:
| Parameter | Description |
|-----------|-------------|
| `model` | The LLM model identifier |
| `endpoint` | The API endpoint type |
| `provider` | The LLM provider (e.g., "openai", "anthropic") |
| `max_tokens` | Maximum tokens in completion |
| `temperature` | Sampling temperature |
| `top_p` | Nucleus sampling parameter |
| `presence_penalty` | Presence penalty for token selection |
| `frequency_penalty` | Frequency penalty for token selection |
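For example (a sketch, assuming the clients from the usage example above): changing `temperature` in the provider call below would create a new version of the same Prompt, and subsequent Logs would be recorded under that version.
```python
@hl_client.prompt(path="MyFeature/Process")
def process_input(text: str) -> str:
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.2,  # changing this to e.g. 0.7 versions the Prompt
        messages=[{"role": "user", "content": text}],
    ).choices[0].message.content
```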
### Log Creation
Each LLM provider call within the decorated function creates a Log with the following fields set (Python SDK):
| Field | Type | Description |
|-------|------|-------------|
| `inputs` | dict[str, Any] | Function arguments that aren't ChatMessage arrays |
| `messages` | ChatMessage[] | ChatMessage arrays passed to the LLM |
| `output_message` | ChatMessage | LLM response with role and content |
| `error` | string | Error message if the LLM call fails |
| `prompt_tokens` | int | Number of tokens in the prompt |
| `reasoning_tokens` | int | Number of tokens used in reasoning |
| `output_tokens` | int | Number of tokens in the completion |
| `finish_reason` | string | Reason the LLM stopped generating |
| `start_time` | datetime | When the LLM call started |
| `end_time` | datetime | When the LLM call completed |
In the TypeScript SDK, the corresponding fields are:
| Field | Type | Description |
|-------|------|-------------|
| `inputs` | object | Function arguments that aren't ChatMessage arrays |
| `messages` | ChatMessage[] | ChatMessage arrays passed to the LLM |
| `output_message` | ChatMessage | LLM response with role and content |
| `error` | string | Error message if the LLM call fails |
| `prompt_tokens` | number | Number of tokens in the prompt |
| `reasoning_tokens` | number | Number of tokens used in reasoning |
| `output_tokens` | number | Number of tokens in the completion |
| `finish_reason` | string | Reason the LLM stopped generating |
| `start_time` | Date | When the LLM call started |
| `end_time` | Date | When the LLM call completed |
## Error Handling
- LLM provider errors are caught and logged in the Log's `error` field. However, `HumanloopRuntimeError` is not caught and will be re-raised (Python) or re-thrown (TypeScript): it indicates incorrect SDK or decorator usage.
- The decorated function propagates exceptions from the LLM provider.
## Best Practices
1. Multiple Logs will be created if you make multiple LLM provider calls inside the decorated function. To avoid confusion, avoid mixing providers or hyperparameters across those calls, as this will create multiple versions of the Prompt.
2. Calling `prompts.log()` or `prompts.call()` inside the decorated function works normally, with no interaction with the decorator. However, doing so usually indicates a misuse of the decorator, as these calls are alternatives for achieving the same result.
3. If you want to switch between providers with ease, use [`prompts.call()`](/docs/v5/api/prompts/call) with a `provider` parameter instead of the decorator, as sketched below.
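A sketch of switching providers via `prompts.call()`; the shape of the `prompt` payload here is an assumption, so check the `prompts.call()` API reference for the authoritative schema:
```python
response = hl_client.prompts.call(
    path="MyFeature/Process",
    # Assumed payload shape: model and provider passed as Prompt details
    prompt={"model": "claude-3-5-sonnet-latest", "provider": "anthropic"},
    messages=[{"role": "user", "content": "Summarize this document."}],
)
print(response.logs[0].output_message.content)
```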
## Related Documentation
Humanloop Prompts are more than the string passed to the LLM provider. They encapsulate LLM hyperparameters, associations to available tools, and can be templated. For more details, refer to our [Prompts explanation](/docs/v5/explanation/prompts).
---
# Source: https://humanloop.com/docs/sdk/run-evaluation.md
# Run Evaluation
> Getting up and running with Humanloop is quick and easy. This guide will explain how to set up evaluations on Humanloop and use them to iteratively improve your applications.
The `evaluations.run()` function is a convenience function that allows you to trigger evaluations from code. It will create the evaluation, fetch the dataset, generate all the Logs and then run the evaluators on each log.
It supports evaluating arbitrary functions, Prompts stored on Humanloop, and Prompts defined in code.
## Parameters
You can see the source code for the `evaluations.run()` function in [Python](https://github.com/humanloop/humanloop-python/blob/master/src/humanloop/evals/run.py#L106) and [TypeScript](https://github.com/humanloop/humanloop-node/blob/master/src/evals/run.ts#L211).
- `name`: Name of the evaluation to help identify it.
- `file`: Configuration for what is being evaluated. The evaluation will be stored on this File.
  - `path`: Path to the evaluated File (a [Prompt](/docs/explanation/prompts), [Flow](/docs/explanation/flows), [Tool](/docs/explanation/tools), [Evaluator](/docs/explanation/evaluators) etc.) on Humanloop. If the File does not exist on Humanloop, it will be created. Example: `My Agent` will create a `flow` File on Humanloop.
  - `type`: One of `flow` (default), `prompt`, `tool`, `evaluator`. If the File does not exist on Humanloop, it will be created with this File type.
  - `version`: Details of the version of the File you want to evaluate. For example, for a Flow you might pass in identifiers:
    ```json
    {
      "git_hash": "1234567890",
      "identifier": "rag-with-pinecone"
    }
    ```
    Or for a Prompt you can pass in Prompt details and it will be called:
    ```json
    {
      "model": "gpt-4",
      "template": [
        {
          "role": "user",
          "content": "You are a helpful assistant on the topic of {{topic}}."
        }
      ]
    }
    ```
  - `callable`: Function to evaluate (optional if the File is runnable on Humanloop, like a Prompt). It will be called over your Dataset as `callable(**datapoint.inputs, messages=datapoint.messages)` and should return a single string output.
- `evaluators`: List of Evaluators to judge the generated output. Each entry accepts:
  - `path`: Path to the Evaluator on Humanloop.
  - The type of arguments the Evaluator expects (only required for local Evaluators).
  - The type of return value the Evaluator produces (only required for local Evaluators).
  - `callable`: Function to evaluate (optional if the Evaluator is runnable on Humanloop). It will be called on the generated output as `callable(output)` and should return a single string output.
  - `custom_logger`: Optional function that logs the output judgment from your Evaluator to Humanloop. If provided, it will be called as `judgment = callable(log_dict); log = custom_logger(client, judgment)`. Inside the `custom_logger`, you can use the Humanloop `client` to log the judgment to Humanloop. If not provided, your function must return a single string and, by default, the code will be used to inform the version of the external Evaluator on Humanloop.
  - The threshold to check the Evaluator result against.
- `dataset`: Dataset to evaluate against.
  - `path`: Path to an existing Dataset on Humanloop. If the Dataset does not exist on Humanloop, it will be created.
  - `datapoints`: The datapoints to map your function over to produce the outputs required by the evaluation. Optional: if not provided, the evaluation will be run over the datapoints stored on Humanloop.
## Return Type
Returns an `EvaluationStats` object containing:
- `run_stats`: Array of statistics for each run
- `progress`: Summary of evaluation progress
- `report`: Detailed evaluation report
- `status`: Current status of the evaluation
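For example, a minimal sketch of inspecting these fields (assuming `evaluation` was returned by `evaluations.run()` as in the examples below; attribute names follow the fields listed above):
```python
print(evaluation.report)    # detailed evaluation report
print(evaluation.status)    # current status of the evaluation
for run_stats in evaluation.run_stats:
    print(run_stats)        # statistics for each run
```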
## Examples
### 1. Evaluating an Arbitrary Flow Function
To evaluate an arbitrary workflow you can pass in the `callable` parameter to the `file` object.
```python
def my_flow_function(messages):
    # Your custom logic here
    return "Response based on messages"

evaluation = humanloop.evaluations.run(
    name="Custom Flow Evaluation",
    type="flow",
    file={
        "path": "Custom/Flow",
        "callable": my_flow_function
    },
    evaluators=[
        {"path": "Example Evaluators/AI/Semantic similarity"},
        {"path": "Example Evaluators/Code/Latency"}
    ],
    dataset={
        "path": "Test/Dataset",
        "datapoints": [
            {
                "messages": [
                    {"role": "user", "content": "Test question 1"}
                ]
            }
        ]
    }
)
```
```typescript
const myFlowFunction = (messages: Message[]): string => {
// Your custom logic here
return "Response based on messages";
};
const evaluation = await humanloop.evaluations.run({
name: "Custom Flow Evaluation",
file: {
path: "Custom/Flow",
type: "flow",
callable: myFlowFunction,
},
evaluators: [
{ path: "Example Evaluators/AI/Semantic similarity" },
{ path: "Example Evaluators/Code/Latency" },
],
dataset: {
path: "Test/Dataset",
datapoints: [
{
messages: [{ role: "user", content: "Test question 1" }],
},
],
},
});
```
### 2. Evaluating a Prompt on Humanloop
To evaluate a Prompt stored on Humanloop you simply supply a `path` to the Prompt and a list of Evaluators.
```python
evaluation = humanloop.evaluations.run(
    name="Existing Prompt Evaluation",
    file={
        "path": "Existing/Prompt",
    },
    evaluators=[
        {"path": "Example Evaluators/AI/Semantic similarity"},
        {"path": "Example Evaluators/Code/Cost"}
    ],
    dataset={
        "path": "Existing/Dataset"
    }
)
```
```typescript
const evaluation = await humanloop.evaluations.run({
name: "Existing Prompt Evaluation",
file: {
path: "Existing/Prompt",
},
evaluators: [
{ path: "Example Evaluators/AI/Semantic similarity" },
{ path: "Example Evaluators/Code/Cost" },
],
dataset: {
path: "Existing/Dataset",
},
});
```
### 3. Evaluating a Prompt in Code
To evaluate a Prompt defined in code you can pass in the `model`, `template` and other Prompt parameters to the `file`'s `version` object.
```python
evaluation = humanloop.evaluations.run(
    name="Code Prompt Evaluation",
    file={
        "path": "Code/Prompt",
        "version": {
            "model": "gpt-4",
            "template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant on the topic of {{topic}}."
                }
            ]
        },
    },
    evaluators=[
        {"path": "Example Evaluators/AI/Semantic similarity"},
        {"path": "Example Evaluators/Code/Latency"}
    ],
    dataset={
        "datapoints": [
            {
                "inputs": {"topic": "machine learning"},
                "messages": [{"role": "user", "content": "What is machine learning?"}],
                "target": {"output": "Machine learning is a subset of artificial intelligence..."}
            }
        ]
    }
)
```
```typescript
const evaluation = await humanloop.evaluations.run({
name: "Code Prompt Evaluation",
  file: {
    path: "Code/Prompt",
    version: {
      model: "gpt-4",
      template: [
        {
          role: "system",
          content: "You are a helpful assistant on the topic of {{topic}}.",
        },
      ],
    },
  },
evaluators: [
{ path: "Example Evaluators/AI/Semantic similarity" },
{ path: "Example Evaluators/Code/Latency" },
],
dataset: {
datapoints: [
{
inputs: { topic: "machine learning" },
messages: [{ role: "user", content: "What is machine learning?" }],
target: {
output: "Machine learning is a subset of artificial intelligence...",
},
},
],
},
});
```
Each example demonstrates a different way to use the `evaluations.run()` function. The function returns evaluation statistics that can be used to understand the performance of your LLM application according to the specified evaluators.
You can view the results of your evaluation in the Humanloop UI by navigating to the specified file path, or by checking the evaluation stats programmatically using the returned object's `report` field.
---
# Source: https://humanloop.com/docs/v4/sdk.md
# SDKs
> Learn how to integrate Humanloop into your applications using our Python and TypeScript SDKs or REST API.
The Humanloop platform can be accessed through the API or through our Python and TypeScript SDKs.
### Usage Examples
```shell title="Installation"
pip install humanloop
```
```python title="Example usage"
from humanloop import Humanloop
humanloop = Humanloop(
api_key="YOUR_API_KEY",
openai_api_key="YOUR_OPENAI_API_KEY",
)
chat_response = humanloop.chat(
project="sdk-example",
messages=[
{
"role": "user",
"content": "Explain asynchronous programming.",
}
],
model_config={
"model": "gpt-3.5-turbo",
"max_tokens": -1,
"temperature": 0.7,
"chat_template": [
{
"role": "system",
"content": "You are a helpful assistant who replies in the style of {{persona}}.",
},
],
},
inputs={
"persona": "Jeff Dean",
},
stream=False,
)
print(chat_response)
```
```shell title="Installation"
npm i humanloop
```
```typescript title="Example usage"
import { Humanloop } from "humanloop";
const humanloop = new Humanloop({
apiKey: "YOUR_HUMANLOOP_API_KEY",
openaiApiKey: "YOUR_OPENAI_API_KEY",
});
const chatResponse = await humanloop.chat({
project: "sdk-example",
messages: [
{
role: "user",
content: "Write me a song",
},
],
model_config: {
model: "gpt-4",
temperature: 1,
},
});
console.log(chatResponse);
```
---
# Source: https://humanloop.com/docs/sdk/decorators/tool.md
# Tool Decorator
> Technical reference for the Tool decorator in the Humanloop SDK
## Overview
The Tool decorator helps you define [Tools](/docs/v5/explanation/tools) for use in function calling. It automatically instruments function calls and creates Tool Logs on Humanloop.
Calling a decorated function will create a Tool Log with the following fields:
- `inputs`: The function arguments.
- `output`: The function return value.
- `error`: The error message if the function call fails.
### Definition
```python
@hl_client.tool(
    # Required: path on Humanloop workspace for the Tool
    path: str,
    # Optional: additional metadata for the Tool
    attributes: Optional[dict[str, Any]] = None,
    # Optional: values needed to setup the Tool
    setup_values: Optional[dict[str, Any]] = None
)
def function(*args, **kwargs): ...
```
The decorated function will have the same signature as the original function and will have a `json_schema` attribute containing the inferred JSON Schema.
```typescript
hlClient.tool({
  // Required: path on Humanloop workspace for the Tool
  path: string,
  // Required: decorated function
  callable: I extends Record<string, unknown>
    ? (args: I) => O
    : () => O,
  // Required: JSON Schema for the Tool
  version: ToolKernelRequest
}) => Promise<O>
```
The decorated function is always async and has the same signature as the `callable` argument. It will have a `jsonSchema` attribute containing the provided JSON Schema.
### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `path` | string | Yes | Path on Humanloop workspace for the Tool |
| `attributes` | object | No | Additional metadata for the Tool (Python only) |
| `setup_values` | object | No | Values needed to setup the Tool (Python only) |
| `version` | ToolKernelRequest | Yes | JSON Schema for the Tool (TypeScript only) |
### Usage
```python
@hl_client.tool(path="MyFeature/Calculator")
def calculator(a: int, b: Optional[int] = None) -> int:
    """Add two numbers together."""
    return a + (b or 0)
```
Decorating a function will set a `json_schema` attribute that can be used for function calling.
```python {5, 12-14}
# Use with prompts.call
response = hl_client.prompts.call(
path="MyFeature/Assistant",
messages=[{"role": "user", "content": "What is 5 + 3?"}],
tools=[calculator.json_schema]
)
# Or with OpenAI directly!
response = openai.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "What is 5 + 3?"}],
tools=[{
"type": "function",
"function": calculator.json_schema
}]
)
```
```typescript maxLines=50
const calculator = hlClient.tool({
path: "MyFeature/Calculator",
callable: (inputs: { a: number; b?: number }) => {
return inputs.a + (inputs.b || 0);
},
version: {
function: {
name: "calculator",
description: "Add two numbers together.",
parameters: {
type: "object",
properties: {
a: { type: "number" },
b: { type: "number" }
},
required: ["a"]
}
}
}
});
```
Decorating a function will set a `jsonSchema` attribute that can be used for function calling.
```typescript {5, 12-14}
// Use with prompts.call
const response = await hlClient.prompts.call({
path: "MyFeature/Assistant",
messages: [{ role: "user", content: "What is 5 + 3?" }],
tools: [calculator.jsonSchema]
});
// Or with OpenAI directly!
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "What is 5 + 3?" }],
tools: [{
type: "function",
function: calculator.jsonSchema
}]
});
```
## Behavior
### Schema Definition
In Python, the decorator automatically infers a JSON Schema from the source code, argument signature, and docstrings:
- Function name becomes the tool name
- Function docstring becomes the tool description
- Parameter type hints are converted to JSON Schema types
- Optional parameters (using `Optional[T]` or `T | None`) are marked as not required
- Return type is not included in the schema
Supported type hints:
| Python Type | JSON Schema Type |
|-------------|------------------|
| `str` | `"string"` |
| `int` | `"integer"` |
| `float` | `"number"` |
| `bool` | `"boolean"` |
| `list[T]` | `"array"` with items of type T |
| `dict[K, V]` | `"object"` with properties of types K and V |
| `tuple[T1, T2, ...]` | `"array"` with items of specific types |
| `Optional[T]` or `T \| None` | Type T with `"null"` added |
| `Union[T1, T2, ...]` | `"anyOf"` with types T1, T2, etc. |
| No type hint | `any` |
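As a concrete illustration of these rules, here is roughly the schema the decorator would infer for the `calculator` example above (a sketch; the exact structure produced by the SDK may differ):
```python
# Illustrative sketch of calculator.json_schema for the example above
expected_schema = {
    "name": "calculator",
    "description": "Add two numbers together.",
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            # Optional[int] -> "integer" with "null" added, and not required
            "b": {"type": ["integer", "null"]},
        },
        "required": ["a"],
    },
}
```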
In TypeScript, you must provide a JSON Schema in the `version` parameter:
```typescript
version: {
  function: {
    name: string;
    description: string;
    parameters: {
      type: "object";
      properties: Record<string, unknown>;
      required?: string[];
    };
  };
  attributes?: Record<string, unknown>;
  setup_values?: Record<string, unknown>;
}
```
### Log Creation
Each function call creates a Tool Log with the following fields (Python SDK):
| Field | Type | Description |
|-------|------|-------------|
| `inputs` | dict[str, Any] | Function arguments |
| `output` | string | JSON-serialized return value |
| `error` | string | Error message if the function call fails |
In the TypeScript SDK, the corresponding fields are:
| Field | Type | Description |
|-------|------|-------------|
| `inputs` | object | Function arguments |
| `output` | string | JSON-serialized return value |
| `error` | string | Error message if the function call fails |
## Error Handling
- Function errors are caught and logged in the Log's `error` field.
- The decorated function returns `None` (Python) or `undefined` (TypeScript) when an error occurs.
- In TypeScript, schema validation errors are thrown if the inputs don't match the provided schema.
- `HumanloopRuntimeError` is not caught and will be re-raised (Python) or re-thrown (TypeScript), as it indicates incorrect SDK or decorator usage.
## Best Practices
1. In Python, use clear and descriptive docstrings to provide good tool descriptions; in TypeScript, provide a good description in the JSON Schema passed via `version`.
2. In Python, ensure all function parameters have appropriate type hints so the schema can be inferred.
3. Make return values JSON-serializable.
4. Use the `json_schema` (Python) or `jsonSchema` (TypeScript) attribute when passing the tool to `prompts.call()`.
## Related Documentation
For a deeper understanding of Tools and their role in the Humanloop platform, refer to our [Tools](/docs/v5/explanation/tools) documentation.
For attaching a Tool to a Prompt, see [Tool calling in Editor](/docs/v5/guides/prompts/tool-calling-editor) and [linking a Tool to a Prompt](/docs/v5/guides/prompts/link-tool).